EP2772916B1 - Method for denoising an audio signal by a variable spectral gain algorithm with dynamically modulatable hardness - Google Patents

Method for denoising an audio signal by a variable spectral gain algorithm with dynamically modulatable hardness Download PDF

Info

Publication number
EP2772916B1
EP2772916B1
Authority
EP
European Patent Office
Prior art keywords
speech
time frame
current time
signal
gain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP14155968.2A
Other languages
English (en)
French (fr)
Other versions
EP2772916A1 (de)
Inventor
Alexandre Briot
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Faurecia Clarion Electronics Europe SAS
Original Assignee
Parrot Automotive SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Parrot Automotive SA
Publication of EP2772916A1
Application granted
Publication of EP2772916B1
Status: Active
Anticipated expiration

Links

Images

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L2021/02087Noise filtering the noise being separate speech, e.g. cocktail party
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78Detection of presence or absence of voice signals
    • G10L25/84Detection of presence or absence of voice signals for discriminating voice from noise

Definitions

  • the invention relates to the treatment of speech in a noisy environment.
  • These devices include one or more microphones that pick up not only the user's voice, but also the surrounding noise, a disruptive element that can in some cases render the speaker's words unintelligible. The same applies if one wants to implement speech recognition techniques, because it is very difficult to perform pattern recognition on words embedded in a high noise level.
  • the large distance between the microphone (placed at the dashboard or in an upper corner of the cockpit roof) and the speaker (whose position is constrained by the driving position) leads to the capture of a relatively low speech level compared with the ambient noise, which makes it difficult to extract the useful signal embedded in the noise.
  • the very noisy environment typical of the automotive setting has non-stationary spectral characteristics, that is to say characteristics that evolve unpredictably according to the driving conditions: driving over deformed or paved roads, car radio in operation, etc.
  • the device is a combined microphone/headset used for communication functions such as "hands-free" telephony, in addition to listening to an audio source (music, for example) from a device to which the headset is connected.
  • the headset can be used in a noisy environment (metro, busy street, train, etc.), so that the microphone picks up not only the words of the headset wearer, but also the surrounding noise.
  • the wearer is certainly protected from this noise by the headset, especially if it is a model with closed earpieces isolating the ear from the outside, and even more so if the headset is provided with "active noise control".
  • the distant speaker (the one at the other end of the communication channel) will nevertheless suffer from the noise picked up by the microphone, which is superimposed on and interferes with the speech signal of the near speaker (the headset wearer).
  • certain speech formants essential to the understanding of the voice are often embedded in noise components commonly encountered in everyday environments.
  • the invention relates more particularly to single-channel selective denoising techniques, that is to say techniques operating on a single signal (as opposed to techniques using several microphones whose signals are judiciously combined and subjected to a spatial or spectral coherence analysis, for example by beamforming).
  • it will apply with the same relevance to a signal recomposed from several microphones by a beamforming technique, insofar as the invention presented here applies to a scalar signal.
  • the invention aims more particularly at an improvement to noise reduction algorithms based on signal processing in the frequency domain (thus after application of an FFT Fourier transform), which apply a spectral gain calculated according to several estimators of the probability of speech presence.
  • the signal y from the microphone is cut into frames of fixed length, overlapping or not, and each frame of index k is transposed into the frequency domain by FFT.
  • the resulting frequency signal Y ( k, l ), which is also discrete, is then described by a set of frequency "bins" (frequency bands) of index l , typically 128 positive-frequency bins.
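The framing-and-FFT front end described above can be sketched as follows; this is a minimal NumPy illustration, and the function name, the Hann analysis window and the 50% overlap are assumptions, not details taken from the patent:

```python
import numpy as np

def frames_to_spectra(y, frame_len=256, hop=128):
    """Cut the signal y(n) into fixed-length, optionally overlapping frames
    y(k) and transpose each frame to the frequency domain by FFT, yielding
    the spectrum Y(k, l): one row per frame index k, one column per
    positive-frequency "bin" of index l (128 bins for 256-sample frames)."""
    n_frames = 1 + (len(y) - frame_len) // hop
    window = np.hanning(frame_len)            # common analysis window choice
    Y = np.empty((n_frames, frame_len // 2), dtype=complex)
    for k in range(n_frames):
        frame = y[k * hop:k * hop + frame_len] * window
        Y[k] = np.fft.rfft(frame)[:frame_len // 2]   # keep 128 positive bins
    return Y
```

For a signal sampled at 8 kHz, 256-sample frames correspond to the 32 ms narrowband frames mentioned in the text.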
  • US 7,454,010 B1 also describes a comparable algorithm taking into account, for the calculation of the spectral gains, information on the presence or absence of voice in a current time segment.
  • Musical noise is characterized by a non-uniform residual background noise layer, favoring certain specific frequencies.
  • the tone of the noise is then no longer natural, which makes listening disturbing.
  • This phenomenon results from the fact that the frequency-domain denoising is operated without dependence between neighboring frequencies when discriminating between speech and noise: the processing includes no mechanism to prevent two neighboring spectral gains from being very different.
  • a uniform attenuation gain is needed to preserve the tone of the noise; but in practice, if the spectral gains are not homogeneous, the residual noise becomes "musical", with the appearance of frequency notes at the less attenuated frequencies, corresponding to bins falsely detected as containing useful signal. It should be noted that this phenomenon is all the more marked as the applied attenuation gains are large.
  • the parameterization of such an algorithm therefore consists in finding a compromise on the aggressiveness of the denoising, so as to remove a maximum of noise without the undesirable effects of the application of too large attenuation spectral gains becoming too perceptible.
  • This last criterion proves extremely subjective, however, and across a reasonably large control group of users it is difficult to find a compromise setting that meets unanimous approval.
  • the "OM-LSA" model provides for setting a lower bound G min for the attenuation gain (expressed on a logarithmic scale, this attenuation gain therefore corresponds in the remainder of this document to a negative value) applied to the zones identified as noise, so as to prevent excessive denoising and thus limit the appearance of the defects mentioned above.
  • This solution is however not optimal: it certainly helps to eliminate the undesirable effects of excessive noise reduction, but at the same time it limits the performance of denoising.
  • the problem addressed by the invention is to overcome this limitation by making the noise reduction system more effective through the application of a spectral gain (typically according to an OM-LSA model), while respecting the constraints mentioned above, namely reducing the noise effectively without altering the natural character of the speech (in the presence of speech) or of the noise (in the presence of noise).
  • the undesirable effects of the algorithmic processing must be made imperceptible by the remote speaker, while at the same time attenuating the noise significantly.
  • the basic idea of the invention is to modulate the calculation of the spectral gain G OMLSA - calculated in the frequency domain for each bin - by a global indicator observed at the level of the time frame, and no longer at the level of a single frequency bin.
  • This modulation will be performed by a direct transformation of the lower limit G min of the attenuation gain - a bound which is a scalar commonly referred to as the "denoising hardness" - into a time function whose value will be determined according to a temporal descriptor (or "global variable") reflected by the state of the various estimators of the algorithm.
  • the temporal modulation applied to this logarithmic attenuation gain G min may correspond to either an increment or a decrement: a decrement will be associated with a greater denoising hardness (a logarithmic gain larger in absolute value); conversely, an increment of this negative logarithmic gain will be associated with a smaller absolute value, hence a lower denoising hardness.
  • a frame-scale observation can often correct certain defects of the algorithm, particularly in very noisy passages where it can sometimes falsely detect a noise frequency as a speech frequency: thus, if a noise-only frame is detected (at the level of the frame), it becomes possible to denoise more aggressively without introducing musical noise, thanks to a more homogeneous denoising.
  • the global variable is a signal-to-noise ratio of the current time frame evaluated in the time domain.
  • the global variable is an average probability of speech, evaluated at the level of the current time frame.
  • the global variable is a Boolean voice activity detection signal for the current time frame, evaluated in the time domain by analysis of the time frame and / or by means of an external detector.
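The three global-variable candidates listed above can be sketched as frame-level estimators; the computations below are illustrative assumptions (the patent does not prescribe these exact estimators, and the 0.5 threshold is invented for the example):

```python
import numpy as np

def global_descriptors(frame, noise_frame_est, p, p_threshold=0.5):
    """Illustrative frame-level estimators for the three global variables:
      - SNR_y(k): signal-to-noise ratio of the frame, in the time domain;
      - P_speech(k): mean of the per-bin speech probabilities p(k, l);
      - VAD(k): a Boolean flag obtained by thresholding P_speech(k)."""
    snr_db = 10 * np.log10(np.mean(frame ** 2) / np.mean(noise_frame_est ** 2))
    p_speech = float(np.mean(p))       # P_speech(k) = (1/N) * sum_l p(k, l)
    vad = p_speech > p_threshold       # crude Boolean voice-activity flag
    return snr_db, p_speech, vad
```

Any one of the three returned values can then drive the modulation of the hardness for the frame.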
  • Figure 1 schematically illustrates, in the form of functional blocks, the manner in which an OM-LSA type denoising processing according to the state of the art is carried out.
  • the digitized signal y ( n ) = x ( n ) + d ( n ), comprising a speech component x ( n ) and a noise component d ( n ) ( n being the sample index), is cut (block 10) into segments or time frames y ( k ) ( k being the frame index) of fixed length, overlapping or not, usually frames of 256 samples for a signal sampled at 8 kHz (narrowband telephone standard).
  • Each time frame of index k is then transposed into the frequency domain by a fast Fourier transform FFT (block 12): the resulting signal or spectrum Y ( k , l ), also discrete, is then described by a set of frequency bands or "bins" (where l is the bin index), for example 128 positive-frequency bins.
  • the spectral gain G OMLSA ( k , l ) is calculated (block 16) as a function, on the one hand, of a probability of speech presence p ( k , l ), which is a frequency probability estimated (block 18) for each bin, and, on the other hand, of a parameter G min , a scalar minimum gain value commonly referred to as the "denoising hardness".
  • This parameter G min sets a lower limit to the attenuation gain applied to the zones identified as noise, in order to prevent the phenomena of musical noise and robotic voice from becoming too marked due to the application of excessively strong and/or heterogeneous spectral attenuation gains.
  • The OM-LSA ( Optimally-Modified Log-Spectral Amplitude ) algorithm improves the computation of the LSA gain by weighting it by the conditional probability p ( k , l ) of speech presence, or SPP ( Speech Presence Probability ), for the computation of the final gain: the noise reduction applied is all the greater (that is to say, the applied gain is all the lower) as the probability of speech presence is low.
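In the linear domain this weighting is commonly written G_OMLSA = G_LSA^p · G_min^(1−p); a minimal sketch of that formula follows (the −14 dB default floor is an illustrative value, not the patent's parameterization):

```python
import numpy as np

def omlsa_gain(g_lsa, p, g_min=10 ** (-14 / 20)):
    """Weight the LSA gain by the speech presence probability p(k, l):
        G_OMLSA = G_LSA**p * G_min**(1 - p)   (linear gains).
    With p -> 0 the gain tends to the floor G_min (strong attenuation);
    with p -> 1 it tends to the unmodified LSA gain."""
    g_lsa = np.asarray(g_lsa, dtype=float)
    p = np.asarray(p, dtype=float)
    return g_lsa ** p * g_min ** (1.0 - p)
```

This makes explicit why the floor G_min entirely governs the attenuation applied to bins classified as noise (p close to 0).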
  • the method described is not intended to identify precisely on which frequency components of which frames speech is absent, but rather to give a confidence index between 0 and 1, a value of 1 indicating that speech is absent for sure (according to the algorithm), while a value of 0 declares the opposite.
  • this index is likened to the probability of absence of speech a priori, that is to say the probability that speech is absent on a given frequency component of the frame considered.
  • This is of course a non-rigorous assimilation, in the sense that even if the presence of speech is probabilistic a priori, the signal picked up by the microphone presents at each instant only one of two distinct states: at the moment considered, it either contains speech or it does not. In practice, however, this assimilation gives good results, which justifies its use.
  • the resulting signal X̂ ( k , l ) = G OMLSA ( k , l ) · Y ( k , l ), that is to say the useful signal Y ( k , l ) to which the frequency mask G OMLSA ( k , l ) has been applied, is then subjected to an inverse Fourier transform iFFT (block 20), to go back from the frequency domain to the time domain.
  • the resulting time frames are then collected (block 22) to give a digitized denoised signal x ( n ).
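The inverse transform and frame collection (blocks 20 and 22) can be sketched as an iFFT followed by overlap-add; the 50% overlap and the zeroed Nyquist bin are assumptions made to pair with a 128-bin analysis stage:

```python
import numpy as np

def spectra_to_signal(X, frame_len=256, hop=128):
    """Inverse stage of the pipeline: each denoised spectrum X̂(k, l) is
    brought back to the time domain by inverse FFT (block 20) and the
    resulting frames are collected by overlap-add (block 22) to rebuild
    the denoised signal x̂(n).  The Nyquist bin, dropped when keeping only
    128 positive-frequency bins, is restored here as zero."""
    n_frames = X.shape[0]
    out = np.zeros(hop * (n_frames - 1) + frame_len)
    for k in range(n_frames):
        spec = np.concatenate([X[k], [0.0]])           # 128 bins + Nyquist
        out[k * hop:k * hop + frame_len] += np.fft.irfft(spec, frame_len)
    return out
```

With a Hann analysis window at 50% overlap, the overlap-add of unmodified frames reconstructs the input signal up to edge effects.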
  • the scalar value G min of the minimum gain representative of the denoising hardness was chosen more or less empirically, so that the degradation of the voice remains barely audible while ensuring an acceptable attenuation of the noise.
  • the scalar value G min is transformed (block 24) into a time function G min ( k ) whose value will be determined according to a global variable (also called a "temporal descriptor"), that is, a variable considered globally at the level of the frame and no longer of the frequency bin.
  • This global variable can be reflected by the state of one or more different estimators already calculated by the algorithm, which will be chosen according to the case according to their relevance.
  • estimators can be: i) a signal-to-noise ratio, ii) an average probability of presence of speech and / or iii) a voice activity detection.
  • the denoising hardness G min thus becomes a time function G min ( k ) defined by the estimators, themselves temporal, making it possible to describe known situations for which it is desired to modulate the value of G min in order to influence the noise reduction by dynamically modifying the denoising/signal-degradation compromise.
  • the starting point of this first implementation is the observation that a speech signal picked up in a quiet environment has little or no need to be denoised, and that an energetic denoising applied to such a signal would quickly lead to audible artifacts, without improving listening comfort in terms of residual noise.
  • an excessively noisy signal can quickly become unintelligible or cause progressive listening fatigue; in such a case the benefit of a large denoising will be indisputable, even at the cost of an audible (however reasonable and controlled) degradation of speech.
  • the noise reduction will be all the more beneficial for the understanding of the useful signal as the untreated signal is noisy.
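The reasoning of this first implementation (little denoising on quiet signals, harder denoising on noisy ones) can be sketched as a simple SNR-to-G_min(k) mapping; every numeric value below is an illustrative assumption, chosen only to match the 8 dB / 17 dB orders of magnitude quoted at the end of the description:

```python
def g_min_of_snr(snr_db, g_min_nominal_db=-14.0,
                 snr_lo=0.0, snr_hi=20.0,
                 delta_lo_db=-3.0, delta_hi_db=6.0):
    """Sketch of a G_min(k) law: the noisier the frame (low SNR), the harder
    the denoising.  At snr_lo the floor is decremented (harder denoising,
    ~17 dB depth with these defaults); at snr_hi it is incremented (softer
    denoising, ~8 dB depth).  All breakpoints are invented for the example."""
    t = min(max((snr_db - snr_lo) / (snr_hi - snr_lo), 0.0), 1.0)
    delta_db = delta_lo_db + t * (delta_hi_db - delta_lo_db)
    return g_min_nominal_db + delta_db   # negative dB value: the new floor
```

A piecewise-linear law of this kind interpolates smoothly between "aggressive" and "gentle" settings instead of switching abruptly.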
  • Another relevant criterion for modulating the hardness of the reduction may be the presence of speech for the time frame considered.
  • a voice activity detector or VAD (block 30) is used to perform the same type of hardness modulation as in the previous example.
  • Such a "perfect" detector delivers a binary signal (absence vs. presence of speech), and is distinguished from systems delivering only a speech presence probability varying between 0 and 100%, continuously or in successive steps, which can introduce significant false detections in noisy environments.
  • the voice activity detector 30 may be implemented in different ways, of which three examples of implementation will be given below.
  • the detection is performed from the signal y ( k ) itself, intrinsically to the signal picked up by the microphone; an analysis of the more or less harmonic nature of this signal makes it possible to determine the presence of voice activity, because a signal with strong harmonicity can be considered, with a small margin of error, to be a voice signal, therefore corresponding to a presence of speech.
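This first detector variant can be sketched with a normalized-autocorrelation harmonicity score; the pitch range and the 0.4 threshold are illustrative assumptions, not the patent's criterion:

```python
import numpy as np

def harmonicity_vad(frame, fs=8000, threshold=0.4):
    """Declare voice activity when the frame is strongly harmonic.
    Harmonicity is scored by the largest normalized autocorrelation peak
    in the usual speech pitch range (50-400 Hz)."""
    frame = np.asarray(frame, dtype=float)
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:                      # silent frame: no activity
        return False
    ac = ac / ac[0]                     # normalize so that lag 0 == 1
    lo, hi = fs // 400, fs // 50        # lags spanning 400 Hz .. 50 Hz
    return bool(ac[lo:hi].max() > threshold)
```

A periodic (voiced) frame exhibits a high autocorrelation peak at the pitch lag, whereas broadband noise does not.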
  • the voice activity detector 30 operates in response to the signal produced by a camera, installed for example in the passenger compartment of a motor vehicle and oriented so that its field of view encompasses in all circumstances the head of the vehicle's driver, considered to be the near speaker.
  • the signal delivered by the camera is analyzed to determine, from the movement of the mouth and lips, whether the speaker is speaking or not, as described inter alia in EP 2 530 672 A1 (Parrot SA), which can be referred to for further explanation.
  • the advantage of this image analysis technique is to have complementary information completely independent of the acoustic noise environment.
  • a third example of a sensor that can be used for voice activity detection is a physiological sensor capable of detecting certain vocal vibrations of the speaker that are not or only slightly corrupted by the surrounding noise.
  • Such a sensor may notably consist of an accelerometer or a piezoelectric sensor applied against the speaker's cheek or temple. It may in particular be incorporated in the earpad of a combined microphone/headset assembly, as described in EP 2 518 724 A1 (Parrot SA), which can be referred to for more details.
  • a vibration propagates from the vocal cords to the pharynx and to the bucco-nasal cavity, where it is modulated, amplified and articulated.
  • the mouth, the soft palate, the pharynx, the sinuses and the nasal fossae then serve as a sounding board for this voiced sound and, their wall being elastic, they vibrate in turn and these vibrations are transmitted by internal bone conduction and are perceptible at the cheek and temple.
  • the spectral gain G OMLSA - calculated in the frequency domain for each bin - can be modulated indirectly, by weighting the frequency probability of speech presence p ( k , l ) by a global temporal indicator observed at the level of the frame (and no longer of a single particular frequency bin).
  • In the presence of noise alone, each per-bin probability of speech should be zero, and the local frequency probability can be weighted by a global datum; this global datum makes it possible to infer the actual case encountered at the frame scale (speech phase vs. noise-only phase), which the frequency-domain data alone do not allow to establish. In the presence of noise alone, one can then fall back on a uniform denoising, avoiding any musicality of the noise, which keeps its original "grain".
  • the probability of speech presence, initially a frequency quantity, will be weighted by a global probability of speech presence at the scale of the frame: one will then strive to denoise the whole frame in a homogeneous way in the absence of speech (denoise uniformly when speech is absent).
  • the evaluation of the global datum p glob ( k ) is schematized in Figure 2 by block 32, which receives as input the data P threshold (a parameterizable threshold value) and P speech ( k ) (a value itself calculated by block 28, as described above), and outputs the value p glob ( k ), which is applied to the input of block 24.
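The weighting route just described (blocks 28 and 32) can be sketched as follows; the hard 0/1 decision for p_glob(k) is an illustrative choice, the simplest one that makes a noise-only frame denoise uniformly:

```python
import numpy as np

def weight_by_global_probability(p, p_threshold=0.5):
    """Sketch of blocks 28/32: derive the frame-global probability
    p_glob(k) from the mean per-bin probability P_speech(k) and a
    threshold P_threshold, then weight the per-bin probabilities p(k, l)
    by it.  A frame classified as noise-only gets every bin forced to
    p = 0, so it is denoised uniformly and cannot become musical."""
    p = np.asarray(p, dtype=float)
    p_speech = p.mean()                               # P_speech(k), block 28
    p_glob = 1.0 if p_speech > p_threshold else 0.0   # block 32
    return p * p_glob
```

A softer variant could use p_glob as a continuous weight instead of a binary gate; the binary form maximizes the homogeneity of the noise-only case.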
  • a global data item calculated at the level of the frame is thus used to refine the calculation of the frequency denoising gain, as a function of the case encountered (absence/presence of speech).
  • the global data makes it possible to estimate the actual situation encountered at the level of the frame (speech phase vs. noise-only phase), which the frequency data alone would not allow to establish. And in the presence of noise alone, one can fall back on a uniform denoising, an ideal solution because the perceived residual noise will never be musical.
  • the invention is based on the finding that the denoising/signal-degradation compromise relies on a spectral gain calculation (a function of a scalar minimum gain parameter and of a probability of speech presence) whose model is suboptimal, and it proposes a formulation involving a temporal modulation of these elements of the spectral gain calculation, which become functions of relevant temporal descriptors of the noisy speech signal.
  • the invention is based on the exploitation of a global datum in order to process each frequency band in a more relevant and adapted manner, the denoising hardness being made variable as a function of the presence of speech in a frame (the denoising is no longer held back when the risk of an audible drawback is low).
  • each frequency band is treated independently, and for a given frequency no a priori knowledge of the other bands is integrated.
  • a broader analysis, which observes the entire frame to calculate a global indicator characteristic of the frame, is a useful and effective means of refining the processing at the frequency-band scale.
  • the denoising gain is generally adjusted to a compromise value, typically of the order of 14 dB.
  • the implementation of the invention makes it possible to adjust this gain dynamically to a value varying between 8 dB (in the presence of speech) and 17 dB (in the presence of noise alone).
  • the noise reduction is thus much more energetic, and makes the noise virtually imperceptible (and in any case non-musical) in the absence of speech, in most commonly encountered situations. And even in the presence of speech, the denoising does not change the tone of the voice, whose rendering remains natural.

Claims (8)

  1. A method for denoising an audio signal by applying an algorithm with a variable spectral gain as a function of a probability of speech presence, comprising the following successive steps:
    a) generation (10) of successive time frames (y(k)) of the digitized noisy audio signal (y(n));
    b) application of a Fourier transform (12) to the frames generated in step a), so as to produce, for each time frame of the signal, a signal spectrum (Y(k,l)) with a plurality of predetermined frequency bands;
    c) in the frequency domain:
    c1) estimation (18), for each frequency band of each current time frame, of a probability of speech presence (p(k,l));
    c3) calculation (16) of a spectral gain (GOMLSA(k,l)) specific to each frequency band of each current time frame, as a function of: i) an estimate of the noise energy in each frequency band, ii) the probability of speech presence estimated in step c1), and iii) a scalar minimum gain value (Gmin) representing a denoising hardness parameter;
    c4) selective noise reduction (14) by applying the gain calculated in step c3) to each frequency band;
    d) application of an inverse Fourier transform (20) to the signal spectrum (X̂(k,l)) composed of the frequency bands produced in step c4), so as to deliver, for each spectrum, a time frame of the denoised signal; and
    e) reconstruction (22) of a denoised audio signal from the time frames delivered in step d),
    the method being characterized in that:
    - the scalar minimum gain value (Gmin) is a value (Gmin(k)) that is dynamically modulatable at each successive time frame (y(k)); and
    - the method further comprises, before step c3) of calculating the spectral gain, a step of:
    c2) calculation (24), for the current time frame (y(k)), of the modulatable value (Gmin(k)) as a function of a global variable (SNRy(k); Pspeech(k); VAD(k)) observed over all the frequency bands of the current time frame; and
    - the calculation of step c2) comprises applying, for the current time frame, an increment/decrement (ΔGmin(k); Δ1Gmin, Δ2Gmin; ΔGmin) to a nominal parameter value (Gmin) of the minimum gain.
  2. The method of claim 1, wherein the global variable is a signal-to-noise ratio (SNRy(k)) of the current time frame, evaluated (26) in the time domain.
  3. The method of claim 2, wherein the scalar minimum gain value is calculated in step c2) by applying the relation: Gmin(k) = Gmin + ΔGmin(SNRy(k))
    where k is the index of the current time frame,
    Gmin(k) is the minimum gain to be applied to the current time frame,
    Gmin is the nominal parameter value of the minimum gain,
    ΔGmin(k) is the increment/decrement applied to Gmin, and
    SNRy(k) is the signal-to-noise ratio of the current time frame.
  4. The method of claim 1, wherein the global variable is an average probability of speech (Pspeech(k)), evaluated (28) over the current time frame.
  5. The method of claim 4, wherein the scalar minimum gain value is calculated in step c2) by applying the relation: Gmin(k) = Gmin + (1 − Pspeech(k))·Δ1Gmin + Pspeech(k)·Δ2Gmin
    where k is the index of the current time frame,
    Gmin(k) is the minimum gain to be applied to the current time frame,
    Gmin is the nominal parameter value of the minimum gain,
    Pspeech(k) is the average probability of speech evaluated over the current time frame,
    Δ1Gmin is the increment/decrement applied to Gmin in the noise phase, and
    Δ2Gmin is the increment/decrement applied to Gmin in the speech phase.
  6. The method of claim 4, wherein the average probability of speech over the current time frame is evaluated by applying the relation: Pspeech(k) = (1/N) Σl=1..N p(k,l)
    where l is the index of the frequency band,
    N is the number of frequency bands in the spectrum, and
    p(k,l) is the probability of speech presence of the frequency band of index l of the current time frame.
  7. The method of claim 1, wherein the global variable is a Boolean voice activity detection signal (VAD(k)) for the current time frame, evaluated (30) in the time domain by analysis of the time frame and/or by means of an external detector.
  8. The method of claim 7, wherein the scalar minimum gain value is calculated in step c2) by applying the relation: Gmin(k) = Gmin + VAD(k)·ΔGmin
    where k is the index of the current time frame,
    Gmin(k) is the minimum gain to be applied to the current time frame,
    Gmin is the nominal parameter value of the minimum gain,
    VAD(k) is the value of the Boolean voice activity detection signal for the current time frame, and
    ΔGmin is the increment/decrement applied to Gmin.
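The three claimed modulation laws (claims 3, 5 and 8) can be sketched directly; this is a hedged reading of the formulas, in which ΔGmin(SNRy(k)) is taken as a caller-supplied function of the frame SNR, and all gains are logarithmic (negative dB) values:

```python
def g_min_from_snr(g_min, delta_of_snr, snr):
    """Claim 3: G_min(k) = G_min + dG_min(SNR_y(k)), with dG_min a
    caller-supplied mapping from frame SNR to an increment/decrement."""
    return g_min + delta_of_snr(snr)

def g_min_from_p_speech(g_min, delta1, delta2, p_speech):
    """Claim 5: interpolate between the noise-phase delta (d1G_min) and the
    speech-phase delta (d2G_min) by the mean speech probability P_speech(k)."""
    return g_min + (1.0 - p_speech) * delta1 + p_speech * delta2

def g_min_from_vad(g_min, delta, vad):
    """Claim 8: G_min(k) = G_min + VAD(k) * dG_min, VAD(k) being 0 or 1."""
    return g_min + (1 if vad else 0) * delta
```

With a nominal floor of −14 dB and deltas of −3/+6 dB, these laws reproduce the 17 dB (noise-only) to 8 dB (speech) depth range quoted in the description.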
EP14155968.2A 2013-02-28 2014-02-20 Method for denoising an audio signal by a variable spectral gain algorithm with dynamically modulatable hardness Active EP2772916B1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FR1351760A FR3002679B1 (fr) 2013-02-28 2013-02-28 Method for denoising an audio signal by a variable spectral gain algorithm with dynamically modulatable hardness

Publications (2)

Publication Number Publication Date
EP2772916A1 (de) 2014-09-03
EP2772916B1 (de) 2015-12-02

Family

ID=48521235

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14155968.2A Active EP2772916B1 (de) 2013-02-28 2014-02-20 Method for denoising an audio signal by a variable spectral gain algorithm with dynamically modulatable hardness

Country Status (4)

Country Link
US (1) US20140244245A1 (de)
EP (1) EP2772916B1 (de)
CN (1) CN104021798B (de)
FR (1) FR3002679B1 (de)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10141003B2 (en) * 2014-06-09 2018-11-27 Dolby Laboratories Licensing Corporation Noise level estimation
US9330684B1 (en) * 2015-03-27 2016-05-03 Continental Automotive Systems, Inc. Real-time wind buffet noise detection
US20160379661A1 (en) * 2015-06-26 2016-12-29 Intel IP Corporation Noise reduction for electronic devices
CN113098570A (zh) * 2015-10-20 2021-07-09 Panasonic Intellectual Property Corporation of America Communication device and communication method
FR3044197A1 (fr) 2015-11-19 2017-05-26 Parrot Audio headset with active noise control, anti-occlusion control and cancellation of the passive attenuation, as a function of the presence or absence of voice activity of the headset user
US11270198B2 (en) * 2017-07-31 2022-03-08 Syntiant Microcontroller interface for audio signal processing
CN111477237B (zh) * 2019-01-04 2022-01-07 Beijing Jingdong Shangke Information Technology Co., Ltd. Audio noise reduction method, apparatus and electronic device
WO2021003334A1 (en) * 2019-07-03 2021-01-07 The Board Of Trustees Of The University Of Illinois Separating space-time signals with moving and asynchronous arrays
US11557307B2 (en) * 2019-10-20 2023-01-17 Listen AS User voice control system

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2001241475A1 (en) * 2000-02-11 2001-08-20 Comsat Corporation Background noise reduction in sinusoidal based speech coding systems
US20030179888A1 (en) * 2002-03-05 2003-09-25 Burnett Gregory C. Voice activity detection (VAD) devices and methods for use with noise suppression systems
US6862567B1 (en) * 2000-08-30 2005-03-01 Mindspeed Technologies, Inc. Noise suppression in the frequency domain by adjusting gain according to voicing parameters
US7454010B1 (en) * 2004-11-03 2008-11-18 Acoustic Technologies, Inc. Noise reduction and comfort noise gain control using bark band weiner filter and linear attenuation
GB2426166B (en) * 2005-05-09 2007-10-17 Toshiba Res Europ Ltd Voice activity detection apparatus and method
JP4670483B2 (ja) * 2005-05-31 2011-04-13 NEC Corporation Noise suppression method and apparatus
CN100419854C (zh) * 2005-11-23 2008-09-17 Vimicro Corporation Speech gain factor estimation device and method
US7555075B2 (en) * 2006-04-07 2009-06-30 Freescale Semiconductor, Inc. Adjustable noise suppression system
KR100821177B1 (ko) * 2006-09-29 2008-04-14 Electronics and Telecommunications Research Institute Method for estimating the a priori speech absence probability based on a statistical model
US8081691B2 (en) * 2008-01-14 2011-12-20 Qualcomm Incorporated Detection of interferers using divergence of signal quality estimates
CN101478296B (zh) * 2009-01-05 2011-12-21 Huawei Device Co., Ltd. Gain control method and device in a multi-channel system
CN101510426B (zh) * 2009-03-23 2013-03-27 Vimicro Corporation Noise cancellation method and system
US8249275B1 (en) * 2009-06-26 2012-08-21 Cirrus Logic, Inc. Modulated gain audio control and zipper noise suppression techniques using modulated gain
US8571231B2 (en) * 2009-10-01 2013-10-29 Qualcomm Incorporated Suppressing noise in an audio signal
US20110188671A1 (en) * 2009-10-15 2011-08-04 Georgia Tech Research Corporation Adaptive gain control based on signal-to-noise ratio for noise suppression
JP2012058358A (ja) * 2010-09-07 2012-03-22 Sony Corp Noise suppression device, noise suppression method and program
KR101726737B1 (ko) * 2010-12-14 2017-04-13 Samsung Electronics Co., Ltd. Multi-channel sound source separation apparatus and method
FR2976111B1 (fr) * 2011-06-01 2013-07-05 Parrot Audio equipment comprising means for denoising a speech signal by fractional-delay filtering, in particular for a "hands-free" telephony system
US20120316875A1 (en) * 2011-06-10 2012-12-13 Red Shift Company, Llc Hosted speech handling

Also Published As

Publication number Publication date
CN104021798A (zh) 2014-09-03
FR3002679A1 (fr) 2014-08-29
EP2772916A1 (de) 2014-09-03
FR3002679B1 (fr) 2016-07-22
US20140244245A1 (en) 2014-08-28
CN104021798B (zh) 2019-05-28

Similar Documents

Publication Publication Date Title
EP2772916B1 (de) Method for denoising an audio signal by means of a variable spectral gain algorithm with dynamically modulatable hardness
EP2293594B1 (de) Method for filtering lateral non-stationary noise for a multi-microphone audio device
CA2436318C (fr) Noise reduction method and device
EP2680262B1 (de) Method for denoising an audio signal for a multi-microphone audio device operating in noisy environments
EP2309499B1 (de) Method for optimized filtering of non-stationary noise captured by a multi-microphone audio device, in particular a hands-free telephone system for motor vehicles
EP1789956B1 (de) Method for processing a noisy sound signal and device for implementing the method
EP1830349B1 (de) Method for denoising an audio signal
EP1154405B1 (de) Method and device for speech recognition in an environment with a variable noise level
EP2530673B1 (de) Audio device with suppression of noise in a speech signal using a fractional-delay filter
EP2057835B1 (de) Method for suppressing residual acoustic echoes after echo cancellation in a hands-free device
EP2538409B1 (de) Noise reduction method for a multi-microphone audio device, in particular for a hands-free telephone system
FR3012928A1 (fr) Modifiers based on an externally estimated SNR for internal MMSE calculations
FR3012929A1 (fr) Speech presence probability modifier improving the performance of log-MMSE-based noise suppression
CA2932449A1 (fr) Voice detection method
FR3012927A1 (fr) Accurate estimation of the signal-to-noise ratio by progression based on an MMSE speech presence probability
FR2786308A1 (fr) Method for voice recognition in a noisy acoustic signal and system implementing this method
EP0534837B1 (de) Method for processing speech in the presence of noise using a non-linear spectral subtraction method and hidden Markov models
EP3627510A1 (de) Filtering of a sound signal picked up by a voice recognition system
EP2515300B1 (de) Method and system for noise suppression
WO2017207286A1 (fr) Microphone/headset audio combination comprising multiple voice activity detection means with a supervised classifier
WO2022207994A1 (fr) Estimation of an optimized mask for processing acquired sound data
FR3113537A1 (fr) Method and electronic device for multichannel noise reduction in an audio signal comprising a speech part, and associated computer program product
EP4287648A1 (de) Electronic device and processing method, acoustic apparatus and associated computer program
BE1020218A3 (fr) Method for improving the temporal resolution of the information provided by a composite filter, and corresponding device
FR2878399A1 (fr) Two-channel denoising device and method implementing a coherence function associated with the use of psychoacoustic properties, and corresponding computer program

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20140220

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

R17P Request for examination filed (corrected)

Effective date: 20150209

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 25/18 20130101ALN20150603BHEP

Ipc: G10L 25/84 20130101ALN20150603BHEP

Ipc: G10L 21/0208 20130101AFI20150603BHEP

INTG Intention to grant announced

Effective date: 20150629

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: PARROT AUTOMOTIVE

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 763939

Country of ref document: AT

Kind code of ref document: T

Effective date: 20151215

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602014000497

Country of ref document: DE

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 3

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 763939

Country of ref document: AT

Kind code of ref document: T

Effective date: 20151202

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160302

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160303

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160229

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160404

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160402

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602014000497

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160220

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

26N No opposition filed

Effective date: 20160905

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160220

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 4

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170228

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170228

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20140220

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151202

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20190219

Year of fee payment: 6

REG Reference to a national code

Ref country code: NL

Ref legal event code: MM

Effective date: 20200301

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200301

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230119

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20230120

Year of fee payment: 10

Ref country code: GB

Payment date: 20230121

Year of fee payment: 10

Ref country code: DE

Payment date: 20230119

Year of fee payment: 10