EP2680262B1 - Method for denoising an audio signal for a multi-microphone audio device used in noisy environments - Google Patents

Method for denoising an audio signal for a multi-microphone audio device used in noisy environments Download PDF

Info

Publication number
EP2680262B1
EP2680262B1 EP13171948.6A
Authority
EP
European Patent Office
Prior art keywords
sensors
signal
denoising
speech
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP13171948.6A
Other languages
English (en)
French (fr)
Other versions
EP2680262A1 (de)
Inventor
Charles Fox
Guillaume Vitte
Maurice Charbit
Jacques Prado
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Parrot SA
Original Assignee
Parrot SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Parrot SA filed Critical Parrot SA
Publication of EP2680262A1 publication Critical patent/EP2680262A1/de
Application granted granted Critical
Publication of EP2680262B1 publication Critical patent/EP2680262B1/de
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 Circuits for transducers, loudspeakers or microphones
    • H04R 3/002 Damping circuit arrangements for transducers, e.g. motional feedback circuits
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 Noise filtering
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 Noise filtering
    • G10L 21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L 2021/02161 Number of inputs available containing the signal or the noise to be suppressed
    • G10L 2021/02166 Microphone arrays; Beamforming

Definitions

  • the invention relates to the treatment of speech in a noisy environment.
  • These devices include one or more sensitive microphones that pick up not only the user's voice but also the surrounding noise, the noise being a disturbing element that can in some cases make the speaker's words unintelligible. The same applies if speech recognition techniques are to be used, since it is very difficult to perform pattern recognition on words embedded in a high level of noise.
  • The significant distance between the microphone (placed on the dashboard or in a corner of the passenger compartment) and the speaker (whose position is constrained by the driving position) means that a relatively high noise level is picked up, which makes it difficult to extract the useful signal embedded in the noise.
  • Moreover, the very noisy environment typical of a motor vehicle has spectral characteristics that evolve unpredictably depending on driving conditions: driving over uneven or cobbled roads, car radio in operation, etc.
  • The headset can be used in a noisy environment (metro, busy street, train, etc.), so that the microphone picks up not only the speech of the headset wearer but also the surrounding noise.
  • The wearer is protected from this noise by the headset, especially if it is a closed-back model isolating the ear from the outside, and even more so if the headset is provided with "active noise control".
  • The distant speaker (the one at the other end of the communication channel), on the other hand, will suffer from the noise picked up by the microphone, which is superimposed on and interferes with the speech signal of the near speaker (the headset wearer).
  • Certain speech formants essential to understanding the voice are often embedded in noise components commonly encountered in everyday environments.
  • The invention more particularly relates to denoising techniques using an array of several microphones, judiciously combining the signals picked up simultaneously by these microphones so as to discriminate the useful speech components from the interfering noise components.
  • a conventional technique consists in placing and orienting one of the microphones so that it mainly captures the voice of the speaker, while the other is arranged to capture a greater noise component than the main microphone.
  • Comparison of the captured signals makes it possible to extract the voice from the ambient noise by spatial coherence analysis of the two signals, with relatively simple software means.
  • US 2008/0280653 A1 describes such a configuration, where one of the microphones (the one that mainly picks up the voice) is that of a wireless headset worn by the driver of the vehicle, while the other (the one that captures the noise) is that of the telephone device, placed remotely in the passenger compartment of the vehicle, for example hung on the dashboard.
  • This technique has the disadvantage of requiring two microphones that are far apart, its effectiveness being all the greater as the distance between them increases. It is therefore not applicable to a device in which the two microphones are close together, for example two microphones incorporated into the front panel of a car radio, or two microphones arranged on one of the ear cups of a headset.
  • Beamforming consists in creating, by software means, a directivity that improves the signal-to-noise ratio of the microphone array or "antenna".
  • US 2007/0165879 A1 describes such a technique, applied to a pair of non-directional microphones placed back to back.
  • An adaptive filtering of the captured signals makes it possible to derive at the output a signal in which the voice component has been reinforced.
  • EP 2 293 594 A1 and EP 2 309 499 A1 (Parrot) describe other techniques, also based on the assumption that the wanted signal and/or the spurious noises have a certain directivity, which combine the signals from the different microphones so as to improve the signal-to-noise ratio under these directivity conditions.
  • These denoising techniques are based on the assumption that speech generally has a higher spatial coherence than noise and that, moreover, the direction of incidence of the speech is generally well defined and can be assumed to be known (in the case of a motor vehicle, it is defined by the position of the driver, towards whom the microphones are turned).
  • These assumptions can, however, be defeated by the reverberation effect typical of a car passenger compartment, where numerous powerful reflections make it difficult to compute a direction of arrival. They can also be defeated by noises having a certain directivity, such as horn blasts, a passing scooter, a car overtaking, etc.
  • the directivity is all the more marked as the frequency is high, so that this criterion becomes less discriminating for the lower frequencies.
  • The invention provides a method of denoising a noisy acoustic signal for a multi-microphone audio device of the general type disclosed by the aforementioned article by McCowan and Sridharan, wherein the device comprises a sensor array formed of a plurality of microphone sensors arranged in a predetermined configuration and able to pick up the noisy signal, the sensors being grouped into two sub-networks, with a first sub-network of sensors able to pick up a high-frequency (HF) portion of the spectrum, and a second sub-network of sensors able to pick up a low-frequency (BF) portion of the spectrum distinct from the HF portion.
  • The first sub-network of sensors able to pick up the HF portion of the spectrum may in particular comprise a linear array of at least two sensors aligned perpendicular to the direction of the speech source, and the second sub-network of sensors able to pick up the BF portion of the spectrum may comprise a linear array of at least two sensors aligned parallel to the direction of the speech source.
  • the sensors of the first sub-array of sensors are advantageously unidirectional sensors, oriented in the direction of the speech source.
  • The denoising processing of the HF part of the spectrum in step b1) can be performed differentially for a lower band and an upper band of this HF part, with the selection of different sensors among the sensors of the first sub-network, the distance between the sensors selected for denoising the upper band being smaller than that between the sensors selected for denoising the lower band (an illustrative reading of this rule is sketched below).
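  • Purely as an illustration, one way to read this selection rule is as a spatial-aliasing constraint: the spacing d of the pair used for a given band should not exceed half the shortest wavelength processed in that band, i.e. d <= c / (2 * f_max). The sketch below applies that criterion; the spacings, the test frequencies and the half-wavelength rule itself are assumptions introduced here for illustration, not values or formulas taken from the patent.

    # Illustrative sketch (not from the patent): choose, for each HF sub-band,
    # the widest microphone pair whose spacing still satisfies d <= c / (2 * f_max),
    # so that closer pairs end up serving the higher bands.
    C_SOUND = 343.0  # speed of sound in m/s

    # hypothetical spacings (in metres) of candidate pairs of the first sub-network
    PAIRS = {("M3", "M4"): 0.10, ("M1", "M4"): 0.05}

    def pick_pair(f_max_hz: float) -> tuple:
        """Return the widest pair that still satisfies d <= c / (2 * f_max)."""
        d_max = C_SOUND / (2.0 * f_max_hz)
        admissible = {p: d for p, d in PAIRS.items() if d <= d_max}
        if not admissible:                     # fall back to the closest available pair
            return min(PAIRS, key=PAIRS.get)
        return max(admissible, key=admissible.get)

    print(pick_pair(1500.0))   # lower HF band  -> wider pair,  ('M3', 'M4')
    print(pick_pair(3000.0))   # upper HF band  -> closer pair, ('M1', 'M4')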
  • The step b13) of estimating the transfer function of the acoustic channels can in particular be implemented by an adaptive linear-prediction filter of the least-mean-squares (LMS) type, with modulation by the speech presence probability, in particular a modulation by varying the iteration step size of the adaptive LMS filter.
  • The prediction of the noise of one sensor from the other can be performed in the time domain, in particular by a speech-distortion-weighted multichannel Wiener filter, SDW-MWF, including an SDW-MWF filter estimated adaptively by a gradient-descent algorithm.
  • Each microphone thus captures a component of the useful signal (the speech signal) and a component of the surrounding noise, in all its forms (directive or diffuse, stationary or evolving unpredictably, etc.).
  • The array R is configured as two sub-networks R1 and R2 dedicated respectively to the capture and processing of the signals in the upper part (hereinafter "high frequency", HF) of the spectrum and in the lower part (hereinafter "low frequency", BF) of this same spectrum.
  • The microphone M1, which belongs to both sub-networks R1 and R2, is shared, which makes it possible to reduce the total number of microphones in the array. This pooling is advantageous, but it is not necessary.
  • The shared microphone may for example be the microphone M3, giving the whole array a "T"-shaped configuration.
  • the microphone M 2 of the BF network may be an omnidirectional microphone, since the directivity is much less marked in BF than in HF.
  • the illustrated configuration showing two subnets R 1 + R 2 comprising 3 + 2 microphones (a total of 4 microphones given the pooling of one of the microphones) is not limiting.
  • The minimum configuration is a 2 + 2 microphone configuration (a minimum of 3 microphones if one of them is shared). Conversely, it is possible to increase the number of microphones, with 4 + 2, 4 + 3, etc. configurations.
  • Increasing the number of microphones makes it possible, particularly at high frequencies, to select different microphone configurations depending on the parts of the HF spectrum being processed.
  • FIGS. 2a and 2b illustrate, respectively for an omnidirectional microphone and for a unidirectional microphone, characteristics giving, as a function of frequency, the value of the correlation function between two microphones, for several distance values d between these microphones.
  • The correlation function between two microphones separated by a distance d is a globally decreasing function of the distance between the microphones.
  • This correlation function is represented by the mean squared coherence, MSC, which varies between 1 (the two signals are perfectly coherent and differ only by a linear filter) and 0 (totally decorrelated signals).
  • Preferably, unidirectional microphones will be used because, as can be seen by comparing Figures 2a and 2b, the variation of the coherence function is much more abrupt in this case than with omnidirectional microphones (an illustrative sketch of the coherence criterion is given below).
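  • As a point of reference (an assumption for illustration, not a formula from the patent), the magnitude-squared coherence of an ideal diffuse noise field between two omnidirectional microphones spaced d apart is commonly modelled as MSC(f) = sinc^2(2 pi f d / c); the sketch below evaluates this model and also estimates the MSC from two recorded channels using scipy.

    # Minimal sketch: diffuse-field MSC model and an MSC estimate from data.
    import numpy as np
    from scipy.signal import coherence

    def diffuse_msc(f_hz, d_m, c=343.0):
        """MSC of an ideal diffuse field between two omni mics spaced d_m apart."""
        # unnormalised sinc(2*pi*f*d/c); note np.sinc(x) = sin(pi*x)/(pi*x)
        return np.sinc(2.0 * f_hz * d_m / c) ** 2

    print(diffuse_msc(np.array([100.0, 1000.0, 4000.0]), d_m=0.10))

    # MSC estimated from two synthetic channels sharing most of their content
    fs = 16000
    rng = np.random.default_rng(0)
    common = rng.standard_normal(fs)
    x1 = common + 0.5 * rng.standard_normal(fs)
    x2 = common + 0.5 * rng.standard_normal(fs)
    f, msc = coherence(x1, x2, fs=fs, nperseg=512)
    print(msc.mean())   # high average coherence, since most of the signal is shared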
  • Description of a preferred embodiment of the denoising processing
  • A high-pass filter HF 10 receives the signals from the microphones M1, M3 and M4 of the sub-network R1, used jointly. These signals are first subjected to a fast Fourier transform, FFT (block 12), then to frequency-domain processing by an algorithm (block 14) exploiting the predictability of the useful signal from one microphone to the other, in this example an MMSE-STSA (Minimum Mean-Squared Error Short-Time Spectral Amplitude) type estimator, which will be described in detail below.
  • A low-pass filter BF 16 receives as input the signals picked up by the microphones M1 and M2 of the sub-network R2. These signals are subjected to a denoising processing (block 18) operated in the time domain by an algorithm exploiting a prediction of the noise of one microphone from the other during the speaker's periods of silence, in this example a Speech Distortion Weighted Multichannel Wiener Filter (SDW-MWF), described in detail below.
  • the resulting denoised signal is then subjected to a fast Fourier transform FFT (block 20).
  • At this stage two resulting single-channel signals are available, one for the HF part coming from block 14, the other for the BF part coming from block 18 after transfer into the frequency domain by block 20.
  • an additional (single channel) processing of selective denoising (block 24) is performed on the corresponding reconstructed signal.
  • The signal resulting from this processing is finally subjected to an inverse fast Fourier transform, iFFT (block 26), to return to the time domain.
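  • Purely as a structural sketch of this two-branch organisation (split the spectrum, denoise each part with its own estimator, recombine), the code below uses an assumed 1 kHz crossover, a Butterworth split and trivial placeholder denoisers; it is not the patent's blocks 10 to 26, only their overall layout.

    # Structural sketch of the two-branch processing chain (assumptions: 1 kHz
    # crossover, 4th-order Butterworth split, placeholder denoisers).
    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 16000
    CROSSOVER_HZ = 1000.0  # hypothetical pivot frequency

    sos_hp = butter(4, CROSSOVER_HZ, btype="highpass", fs=FS, output="sos")
    sos_lp = butter(4, CROSSOVER_HZ, btype="lowpass", fs=FS, output="sos")

    def denoise_hf(mics_hf):
        """Placeholder for blocks 12-14 (FFT + MMSE-STSA): here a plain average."""
        return mics_hf.mean(axis=0)

    def denoise_bf(mics_bf):
        """Placeholder for block 18 (time-domain Wiener prediction): here a plain average."""
        return mics_bf.mean(axis=0)

    def process(mics):
        """mics: array of shape (n_mics, n_samples) -> one denoised channel."""
        hf = sosfilt(sos_hp, mics[[0, 2, 3]], axis=-1)   # sub-network R1: M1, M3, M4
        bf = sosfilt(sos_lp, mics[[0, 1]], axis=-1)      # sub-network R2: M1, M2
        return denoise_hf(hf) + denoise_bf(bf)           # recombination (block 22)

    out = process(np.random.randn(4, FS))
    print(out.shape)   # (16000,)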
  • This particular implementation is of course not limiting; other denoising techniques can be envisaged, provided that they are based on the predictability of the useful signal from one microphone to the other.
  • This HF denoising is not necessarily operated in the frequency domain; it can also be operated in the time domain, by equivalent means.
  • The proposed technique consists in searching, for each frequency, for an optimal linear "projector", that is to say an operator transforming a plurality of signals (those collected concurrently by the various microphones of the sub-network R1) into a single single-channel signal.
  • This projection is an "optimal" linear projection in that it seeks to minimize the residual noise component on the output single-channel signal while distorting the useful speech component as little as possible.
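  • In standard beamforming notation (a formulation assumed here for clarity, consistent with the description but not copied from it), this optimal projector is, at each frequency, the weight vector w minimizing the residual noise power under a distortionless constraint on the speech component:

        \min_w \; w^T R_n\, w \quad \text{subject to} \quad w^T H = 1, \qquad \text{whose solution is} \qquad w_{opt} = \frac{R_n^{-1} H}{H^T R_n^{-1} H},

    where R_n is the noise covariance matrix at the frequency considered and H the vector of acoustic transfer functions between the source and the microphones.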
  • When the transfer function H corresponds to a pure delay, one recognizes the formula of MVDR (Minimum Variance Distortionless Response) beamforming, also called Capon beamforming. It will be noted that the residual noise power after projection is 1 / (H^T R_n^{-1} H), R_n denoting the noise covariance matrix at the frequency considered.
  • The selective denoising processing applied to the single-channel signal resulting from the beamforming processing is advantageously the OM-LSA (Optimized Modified Log-Spectral Amplitude) type processing described above, operated by block 24 on the complete spectrum after synthesis at 22.
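  • For context, the gain applied by an OM-LSA post-filter typically takes the standard form found in the literature (recalled here as background; the exact expression used by the patent is not reproduced in this excerpt):

        G(k,l) = \left(G_{H_1}(k,l)\right)^{p(k,l)} \cdot G_{\min}^{\,1 - p(k,l)},

    where G_{H_1} is the log-spectral-amplitude gain under the speech-presence hypothesis, G_{\min} a minimum-gain floor, and p(k,l) the speech presence probability for frequency bin k and frame l.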
  • As regards the MVDR estimator (block 28), its implementation involves an estimation of the acoustic transfer functions Hi between the speech source and each of the microphones Mi (M1, M3 or M4).
  • This estimation is performed by a frequency-domain LMS estimator (block 30) receiving as input the signals from the different microphones and outputting the estimates of the various transfer functions Hi.
  • x i is the sensed signal
  • h i is the impulse response between the useful signal source (speaker speech signal) and the microphone M i
  • s is the useful signal produced by the source S
  • b i is the additive noise
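  • Written out, these notations correspond to the usual multi-microphone signal model (the convolutional form below is an assumption consistent with the terms listed above, not a formula quoted from the patent):

        x_i(t) = (h_i * s)(t) + b_i(t), \qquad \text{or, in the frequency domain,} \qquad X_i(f) = H_i(f)\, S(f) + B_i(f).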
  • The MMSE-STSA estimator factorizes into an MVDR beamforming (block 28) followed by a single-channel estimator (the OM-LSA algorithm of block 24).
  • In the frequency domain, the MVDR estimate of the useful signal is

        \hat{S}_{MVDR} = \frac{H^T \Gamma_{bb}^{-1} X}{H^T \Gamma_{bb}^{-1} H},

    where X is the vector of the signals picked up by the microphones and \Gamma_{bb} the spectral covariance matrix of the noise.
  • the adaptive MVDR beamforming thus exploits the coherence of the useful signal to estimate a transfer function H corresponding to the acoustic channel between the speaker and each of the microphones of the sub-network.
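  • A minimal per-bin sketch of this MVDR combination is given below, assuming that the channel vector H and the noise spectral covariance Gamma_bb have already been estimated (in the patent they come from blocks 30 and 32); the synthetic values are illustrative only, and the conjugate-transpose form usual for complex spectra is used.

    # Minimal MVDR sketch for one frequency bin:
    # S_hat = (H^H Gbb^{-1} X) / (H^H Gbb^{-1} H), with w^H H = 1 (distortionless).
    import numpy as np

    def mvdr_bin(X, H, Gamma_bb):
        """X: (n,) observed spectra, H: (n,) channel vector, Gamma_bb: (n, n) noise covariance."""
        Ginv_H = np.linalg.solve(Gamma_bb, H)      # Gbb^{-1} H without explicit inversion
        w = Ginv_H / (H.conj() @ Ginv_H)           # MVDR weights
        return w.conj() @ X                        # single-channel output for this bin

    rng = np.random.default_rng(1)
    n = 3
    H = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Gamma_bb = A @ A.conj().T + np.eye(n)          # Hermitian, positive definite
    X = H * (1.0 + 0.5j) + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    print(mvdr_bin(X, H, Gamma_bb))                # close to the "source" value 1 + 0.5j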
  • LMS algorithms (or NLMS, Normalized LMS, a normalized version of the LMS) are relatively simple algorithms that are undemanding in terms of computing resources.
  • H can only be identified to within a transfer function.
  • The LMS algorithm aims, in a known manner, to estimate by means of an adaptive algorithm a filter H (block 36) applied to the signal xi delivered by the microphone Mi, so as to estimate the voice transfer between the microphone Mi and the microphone M1 (taken as a reference).
  • the output of the filter 36 is subtracted at 38 from the signal x 1 picked up by the microphone M 1 , to give a prediction error signal allowing the iterative adaptation of the filter 36. It is thus possible to predict from the signal x i the speech component contained in the signal x 1 .
  • the signal x 1 is slightly delayed (block 40).
  • the error signal of the adaptive filter 36 is weighted at 42 by the probability of presence of speech SPP delivered at the output of the block 34, so as to adapt the filter only when the probability of presence of speech is high.
  • This weighting can in particular be performed by modifying the adaptation step of the algorithm as a function of the probability SPP (a schematic sketch follows).
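  • A schematic NLMS identification of the voice transfer between microphone Mi and the reference microphone M1, with an adaptation step scaled by the speech presence probability, could look like the sketch below; the filter length, base step size and the toy signals are assumptions, not the actual parameters of blocks 34 to 42.

    # Sketch of an NLMS filter predicting x_1 from x_i, adapted only when speech is likely.
    import numpy as np

    def nlms_spp(x_i, x_1, spp, n_taps=32, step0=0.5, eps=1e-8):
        """x_i: input mic, x_1: reference mic, spp: per-sample speech presence probability."""
        h = np.zeros(n_taps)
        buf = np.zeros(n_taps)
        errors = np.empty_like(x_1)
        for n in range(len(x_1)):
            buf = np.roll(buf, 1)
            buf[0] = x_i[n]
            e = x_1[n] - h @ buf                  # prediction error (subtraction at 38)
            step = step0 * spp[n]                 # SPP-modulated adaptation step (42)
            h += step * e * buf / (buf @ buf + eps)
            errors[n] = e
        return h, errors

    # toy usage: x_1 is a delayed, attenuated copy of x_i, speech assumed always present
    rng = np.random.default_rng(2)
    x_i = rng.standard_normal(4000)
    x_1 = 0.8 * np.concatenate([np.zeros(3), x_i[:-3]])
    h, _ = nlms_spp(x_i, x_1, spp=np.ones_like(x_i))
    print(np.round(h[:5], 2))   # the identified response concentrates around tap 3 (~0.8)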
  • This prediction of the noise present on one microphone is performed from the noise present on all the microphones considered of the second sub-network R2, during the speaker's periods of silence, when only noise is present.
  • The Wiener filter (block 44) provides a noise prediction which is subtracted at 46 from the picked-up, non-denoised signal, after applying a delay (block 48) to avoid causality problems.
  • The Wiener filter 44 is parameterized by a coefficient μ (represented at 50) which determines an adjustable weighting between, on the one hand, the distortion introduced by the processing on the denoised speech signal and, on the other hand, the residual noise level.
  • The Wiener filter used is advantageously a weighted Wiener filter (SDW-MWF), so as to take into account not only the energy of the noise to be eliminated by the filtering, but also the distortion introduced by this filtering, which should be minimized.
  • This filter is implemented adaptively by a gradient descent algorithm such as that set forth in the aforementioned article [6].
  • The diagram is the one illustrated in Figures 3 and 4.
  • The noise covariance matrix is estimated recursively as

        \hat{R}_b(t) = \lambda\, \hat{R}_b(t-1) + (1 - \lambda)\, x(t)\, x(t)^T   if no speech is present,
        \hat{R}_b(t) = \hat{R}_b(t-1)   otherwise,

    \lambda being a forgetting factor.
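  • Expressed as code, this gated recursive estimate could look like the following minimal sketch (the value of λ and the origin of the voice-activity flag are assumptions):

    # Recursive noise covariance estimate, frozen whenever speech is present.
    import numpy as np

    def update_Rb(Rb, x, speech_present, lam=0.98):
        """Rb: (n, n) current estimate, x: (n,) current multi-microphone sample or frame."""
        if speech_present:
            return Rb                                    # keep the previous estimate
        return lam * Rb + (1.0 - lam) * np.outer(x, x)   # exponential forgetting

    Rb = np.eye(2)
    Rb = update_Rb(Rb, np.array([0.3, -0.1]), speech_present=False)
    print(Rb)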
  • this parameter must correspond to a spatial and temporal reality, with a sufficient number of coefficients to predict the noise temporally (temporal coherence of the noise) and spatially (spatial transfer between the microphones).
  • The parameter μ is adjusted experimentally, increasing it until the distortion on the voice becomes perceptible to the ear.
  • The filter w is obtained by minimizing a criterion of the form

        J(w) = E\left[\left(b_k(t) - w^T b(t)\right)^2\right] + \frac{1}{\mu}\, E\left[\left(w^T s(t)\right)^2\right],

    i.e. a weighted sum of the noise prediction error on the reference microphone k and of the distortion introduced on the speech component, μ being the weighting coefficient mentioned above.
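  • A possible stochastic gradient-descent sketch for this weighted criterion is given below; the step size, the value of μ and the use of instantaneous frames in place of the expectations are assumptions made for illustration (the patent itself refers to the algorithm of the aforementioned article [6]).

    # Sketch of stochastic gradient descent on
    # J(w) = E[(b_k - w^T b)^2] + (1/mu) E[(w^T s)^2]   (k = reference microphone).
    # b_frames / s_frames stand for noise-only and speech-dominant observations;
    # in the patent the noise statistics are gathered during speaker silences.
    import numpy as np

    def sdw_mwf_sgd(b_frames, s_frames, k=0, mu=5.0, eta=0.05, n_epochs=20):
        """b_frames, s_frames: arrays of shape (n_frames, n_mics). Returns the weights w."""
        w = np.zeros(b_frames.shape[1])
        for _ in range(n_epochs):
            for b, s in zip(b_frames, s_frames):
                grad = -b * (b[k] - w @ b) + s * (w @ s) / mu   # instantaneous gradient / 2
                w -= eta * grad
        return w

    rng = np.random.default_rng(3)
    b = rng.standard_normal((500, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])   # correlated noise
    s = rng.standard_normal((500, 1)) * np.array([1.0, 0.9])                 # coherent speech
    w = sdw_mwf_sgd(b, s)
    print(np.round(w, 2))   # w^T b approximates the noise on mic 0 while keeping w^T s small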

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)

Claims (14)

  1. Method for denoising a noisy audio signal for a multi-microphone audio device used in a noisy environment,
    wherein the noisy audio signal comprises a useful component originating from a speech source and an interfering noise component,
    wherein the device comprises an array of sensors formed by a plurality of microphone sensors (M1 ... M4) arranged in a predetermined configuration and able to pick up the noisy signal,
    wherein the sensors are grouped into two sub-networks, with a first sub-network (R1) of sensors able to pick up a high-frequency part of the spectrum and a second sub-network (R2) of sensors able to pick up a low-frequency part of the spectrum distinct from the high-frequency part,
    this method comprising the following steps:
    a) splitting the spectrum of the noisy signal into the high-frequency part (HF) and the low-frequency part (BF) by filtering (10, 16) above and below a predetermined pivot frequency, respectively;
    b) denoising each of the two parts of the spectrum using an adaptive-algorithm estimator; and
    c) reconstructing the spectrum by combining (22) the signals delivered after the denoising of the two parts of the spectrum in steps b1) and b2),
    the method being characterized in that the denoising step b) is carried out by different processings for each of the two parts of the spectrum, with:
    b1) a denoising of the high-frequency part exploiting the predictable character of the useful signal of one sensor with respect to the other, among sensors of the first sub-network, by means of a first adaptive-algorithm estimator (14), and
    b2) a denoising of the low-frequency part by prediction of the noise of one sensor with respect to the other, among sensors of the second sub-network, by means of a second adaptive-algorithm estimator (18).
  2. Method according to claim 1, wherein the first sub-network of sensors (R1), able to pick up the high-frequency part of the spectrum, comprises a linear array of at least two sensors (M1, M3, M4) aligned perpendicular to the direction (Δ) of the speech source.
  3. Method according to claim 1, wherein the second sub-network of sensors (R2), able to pick up the low-frequency part of the spectrum, comprises a linear array of at least two sensors (M1, M2) aligned parallel to the direction (Δ) of the speech source.
  4. Method according to claim 2, wherein the sensors (M1, M3, M4) of the first sub-network of sensors (R1) are unidirectional sensors oriented towards the direction (Δ) of the speech source.
  5. Method according to claim 2, wherein the denoising processing of the high-frequency part of the spectrum in step b1) is carried out differentially for a lower band and an upper band of this high-frequency part, with a selection of different sensors among the sensors of the first sub-network (R1), the distance between the sensors (M1, M4) selected for denoising the upper band being smaller than that between the sensors (M3, M4) selected for denoising the lower band.
  6. Method according to claim 1, further comprising, after the step c) of reconstruction of the spectrum, the following step:
    d) selective noise reduction (24) by a gain processing of the optimized modified log-spectral amplitude, OM-LSA, type, on the basis of the reconstructed signal produced in step c) and of a speech presence probability.
  7. Method according to claim 1, wherein the step b1) of denoising the high-frequency part, exploiting the character of the useful signal that is predictable from one sensor to the other, is carried out in the frequency domain.
  8. Method according to claim 7, wherein the step b1) of denoising the high-frequency part, exploiting the character of the useful signal that is predictable from one sensor to the other, is carried out by:
    b11) estimating (34) a speech presence probability (SPP) in the picked-up noisy signal;
    b12) estimating (32) a spectral covariance matrix of the noises picked up by the sensors of the first sub-network, this estimate being modulated by the speech presence probability;
    b13) estimating (30) the transfer function of the acoustic channels between the speech source and at least some of the sensors of the first sub-network, this estimate being made with respect to a useful-signal reference formed by the signal picked up by one of the sensors of the first sub-network, and being further modulated by the speech presence probability; and
    b14) computing (28) an optimal linear projector yielding a single denoised combined signal from the signals picked up by at least some of the sensors of the first sub-network, from the spectral covariance matrix estimated in step b12) and from the transfer functions estimated in step b13).
  9. Method according to claim 8, wherein in step b14) the computation of an optimal linear projector (28) is carried out by a beamforming estimator of the minimum variance distortionless response, MVDR, type.
  10. Method according to claim 9, wherein the step b13) of estimating the transfer function of the acoustic channels (30) is carried out by an adaptive linear-prediction filter (36, 38, 40) of the least mean squares, LMS, type, with modulation (42) by the speech presence probability.
  11. Method according to claim 10, wherein the modulation by the speech presence probability is a modulation by variation of the iteration step of the adaptive LMS filter.
  12. Method according to claim 1, wherein, for the denoising of the low-frequency part in step b2), the prediction of the noise of one sensor with respect to the other is carried out in the time domain.
  13. Method according to claim 12, wherein the prediction of the noise from one sensor to the other is carried out by a filter (44, 46, 48) of the speech distortion weighted multichannel Wiener filter, SDW-MWF, type.
  14. Method according to claim 13, wherein the SDW-MWF filter is estimated adaptively by a gradient-descent algorithm.
EP13171948.6A 2012-06-26 2013-06-14 Method for denoising an audio signal for a multi-microphone audio device used in noisy environments Active EP2680262B1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FR1256049A FR2992459B1 (fr) 2012-06-26 2012-06-26 Procede de debruitage d'un signal acoustique pour un dispositif audio multi-microphone operant dans un milieu bruite.

Publications (2)

Publication Number Publication Date
EP2680262A1 EP2680262A1 (de) 2014-01-01
EP2680262B1 true EP2680262B1 (de) 2015-05-13

Family

ID=47227906

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13171948.6A Active EP2680262B1 (de) 2012-06-26 2013-06-14 Verfahren zur Geräuschdämpfung eines Audiosignals für eine Multimikrofon-Audiovorrichtung, die in lauten Umgebungen eingesetzt wird

Country Status (4)

Country Link
US (1) US9338547B2 (de)
EP (1) EP2680262B1 (de)
CN (1) CN103517185B (de)
FR (1) FR2992459B1 (de)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130294616A1 (en) * 2010-12-20 2013-11-07 Phonak Ag Method and system for speech enhancement in a room
ES2727786T3 (es) * 2012-05-31 2019-10-18 Univ Mississippi Sistemas y métodos para detectar señales acústicas transitorias
JP6349899B2 (ja) * 2014-04-14 2018-07-04 ヤマハ株式会社 放収音装置
US10149047B2 (en) * 2014-06-18 2018-12-04 Cirrus Logic Inc. Multi-aural MMSE analysis techniques for clarifying audio signals
EP3230981B1 (de) 2014-12-12 2020-05-06 Nuance Communications, Inc. System und verfahren zur sprachverbesserung unter verwendung eines kohärent-diffus-tonverhältnisses
WO2016179211A1 (en) * 2015-05-04 2016-11-10 Rensselaer Polytechnic Institute Coprime microphone array system
US9691238B2 (en) * 2015-07-29 2017-06-27 Immersion Corporation Crowd-based haptics
EP3171613A1 (de) * 2015-11-20 2017-05-24 Harman Becker Automotive Systems GmbH Tonverstärkung
DE102015016380B4 (de) * 2015-12-16 2023-10-05 e.solutions GmbH Technik zum Unterdrücken akustischer Störsignale
CN107045874B (zh) * 2016-02-05 2021-03-02 深圳市潮流网络技术有限公司 一种基于相关性的非线性语音增强方法
CN106289506B (zh) * 2016-09-06 2019-03-05 大连理工大学 一种使用pod分解法消除流场壁面麦克风阵列噪声信号的方法
US9906859B1 (en) * 2016-09-30 2018-02-27 Bose Corporation Noise estimation for dynamic sound adjustment
DE112017006486T5 (de) * 2016-12-23 2019-09-12 Synaptics Incorporated Online-enthallungsalgorithmus basierend auf gewichtetem vorhersagefehler für lärmbehaftete zeitvariante umgebungen
CN107910011B (zh) * 2017-12-28 2021-05-04 科大讯飞股份有限公司 一种语音降噪方法、装置、服务器及存储介质
CN108074585A (zh) * 2018-02-08 2018-05-25 河海大学常州校区 一种基于声源特征的语音异常检测方法
CN108449687B (zh) * 2018-03-13 2019-04-26 江苏华腾智能科技有限公司 一种多麦克风阵列降噪的会议系统
CN108564963B (zh) * 2018-04-23 2019-10-18 百度在线网络技术(北京)有限公司 用于增强语音的方法和装置
CN108831495B (zh) * 2018-06-04 2022-11-29 桂林电子科技大学 一种应用于噪声环境下语音识别的语音增强方法
US11900730B2 (en) * 2019-12-18 2024-02-13 Cirrus Logic Inc. Biometric identification
CN111028857B (zh) * 2019-12-27 2024-01-19 宁波蛙声科技有限公司 基于深度学习的多通道音视频会议降噪的方法及系统
TWI789577B (zh) * 2020-04-01 2023-01-11 同響科技股份有限公司 音訊資料重建方法及系統
CN114822571A (zh) * 2021-04-25 2022-07-29 美的集团(上海)有限公司 一种回声消除方法、装置、电子设备和存储介质
CN115223582B (zh) * 2021-12-16 2024-01-30 广州汽车集团股份有限公司 一种音频的噪声处理方法、系统、电子装置及介质
US11948547B2 (en) * 2021-12-17 2024-04-02 Hyundai Motor Company Information quantity-based reference sensor selection and active noise control using the same
CN115840120B (zh) * 2023-02-24 2023-04-28 山东科华电力技术有限公司 一种高压电缆局放异常监测及预警方法

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8280072B2 (en) * 2003-03-27 2012-10-02 Aliphcom, Inc. Microphone array with rear venting
US7617099B2 (en) * 2001-02-12 2009-11-10 FortMedia Inc. Noise suppression by two-channel tandem spectrum modification for speech signal in an automobile
DE602004015987D1 (de) * 2004-09-23 2008-10-02 Harman Becker Automotive Sys Mehrkanalige adaptive Sprachsignalverarbeitung mit Rauschunterdrückung
CN100571295C (zh) * 2005-08-02 2009-12-16 明基电通股份有限公司 一种可降低麦克风噪声的移动装置和方法
US8488803B2 (en) * 2007-05-25 2013-07-16 Aliphcom Wind suppression/replacement component for use with electronic systems
ATE551692T1 (de) * 2008-02-05 2012-04-15 Phonak Ag Verfahren zur verringerung von rauschen in einem eingangssignal eines hörgeräts sowie ein hörgerät
US8321214B2 (en) * 2008-06-02 2012-11-27 Qualcomm Incorporated Systems, methods, and apparatus for multichannel signal amplitude balancing
FR2945696B1 (fr) * 2009-05-14 2012-02-24 Parrot Procede de selection d'un microphone parmi deux microphones ou plus, pour un systeme de traitement de la parole tel qu'un dispositif telephonique "mains libres" operant dans un environnement bruite.
KR101782050B1 (ko) * 2010-09-17 2017-09-28 삼성전자주식회사 비등간격으로 배치된 마이크로폰을 이용한 음질 향상 장치 및 방법
FR2976710B1 (fr) * 2011-06-20 2013-07-05 Parrot Procede de debruitage pour equipement audio multi-microphones, notamment pour un systeme de telephonie "mains libres"

Also Published As

Publication number Publication date
US9338547B2 (en) 2016-05-10
FR2992459B1 (fr) 2014-08-15
FR2992459A1 (fr) 2013-12-27
CN103517185B (zh) 2018-09-21
CN103517185A (zh) 2014-01-15
EP2680262A1 (de) 2014-01-01
US20130343558A1 (en) 2013-12-26

Similar Documents

Publication Publication Date Title
EP2680262B1 (de) Verfahren zur Geräuschdämpfung eines Audiosignals für eine Multimikrofon-Audiovorrichtung, die in lauten Umgebungen eingesetzt wird
EP2538409B1 (de) Verfahren zur Geräuschdämpfung für Audio-Gerät mit mehreren Mikrofonen, insbesondere für eine telefonische Freisprechanlage
EP2530673B1 (de) Audiogerät mit Rauschunterdrückung in einem Sprachsignal unter Verwendung von einem Filter mit fraktionaler Verzögerung
EP2293594B1 (de) Verfahren zur Filterung von seitlichem nichtstationärem Rauschen für ein Multimikrofon-Audiogerät
EP2309499B1 (de) Verfahren zur optimierten Filterung nicht stationärer Geräusche, die von einem Audiogerät mit mehreren Mikrophonen eingefangen werden, insbesondere eine Freisprechtelefonanlage für Kraftfahrzeuge
EP2518724B1 (de) Kombinierte Audioeinheit bestehend aus Mikrofon und Kopfhörer, die Mittel zur Geräuschdämpfung eines nahen Wortsignals umfasst, insbesondere für eine telefonische Freisprechanlage
EP2122607B1 (de) Verfahren zur aktiven minderung von störgeräuschen
US7761291B2 (en) Method for processing audio-signals
EP2772916B1 (de) Verfahren zur Geräuschdämpfung eines Audiosignals mit Hilfe eines Algorithmus mit variabler Spektralverstärkung mit dynamisch modulierbarer Härte
US9467775B2 (en) Method and a system for noise suppressing an audio signal
EP0884926B1 (de) Verfahren und Vorrichtung zur optimierten Verarbeitung eines Störsignals während einer Tonaufnahme
EP3025342B1 (de) Verfahren zur unterdrückung des späten nachhalls eines akustischen signals
FR2906070A1 (fr) Reduction de bruit multi-reference pour des applications vocales en environnement automobile
Kim et al. Probabilistic spectral gain modification applied to beamformer-based noise reduction in a car environment
FR2906071A1 (fr) Reduction de bruit multibande avec une reference de bruit non acoustique
WO2017207286A1 (fr) Combine audio micro/casque comprenant des moyens de detection d'activite vocale multiples a classifieur supervise
Plucienkowski et al. Combined front-end signal processing for in-vehicle speech systems
WO2022207994A1 (fr) Estimation d'un masque optimise pour le traitement de donnees sonores acquises
Pathrose et al. Enhancement of speech through source separation for conferencing systems.
KR20190136841A (ko) 다중 마이크로폰을 가진 디지털 보청기
CN114708882A (zh) 一种快速双麦自适应一阶差分阵列算法及系统
Zhang et al. Speech enhancement based on a combined multi-channel array with constrained iterative and auditory masked processing
FR2828326A1 (fr) Procede et dispositif de reduction d'echo a la prise de son

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20130617

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20141217

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 727090

Country of ref document: AT

Kind code of ref document: T

Effective date: 20150615

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602013001738

Country of ref document: DE

Effective date: 20150625

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 3

REG Reference to a national code

Ref country code: NL

Ref legal event code: T3

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 727090

Country of ref document: AT

Kind code of ref document: T

Effective date: 20150513

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150914

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150813

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150813

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150814

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150913

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602013001738

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

Ref country code: RO

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150513

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

26N No opposition filed

Effective date: 20160216

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150614

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 4

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

REG Reference to a national code

Ref country code: DE

Ref legal event code: R081

Ref document number: 602013001738

Country of ref document: DE

Owner name: PARROT AUTOMOTIVE, FR

Free format text: FORMER OWNER: PARROT, PARIS, FR

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: NL

Ref legal event code: PD

Owner name: PARROT AUTOMOTIVE; FR

Free format text: DETAILS ASSIGNMENT: CHANGE OF OWNER(S), ASSIGNMENT; FORMER OWNER NAME: PARROT

Effective date: 20170125

REG Reference to a national code

Ref country code: FR

Ref legal event code: TP

Owner name: PARROT AUTOMOTIVE, FR

Effective date: 20170106

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20170223 AND 20170303

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160630

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20130614

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150614

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150513

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20190619

Year of fee payment: 7

REG Reference to a national code

Ref country code: NL

Ref legal event code: MM

Effective date: 20200701

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200701

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20230523

Year of fee payment: 11

Ref country code: FR

Payment date: 20230523

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230523

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240521

Year of fee payment: 12