EP2538409B1 - Method for noise reduction for an audio device with multiple microphones, in particular for a telephone hands-free system - Google Patents
Method for noise reduction for an audio device with multiple microphones, in particular for a telephone hands-free system
- Publication number
- EP2538409B1 (application EP12170874.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- signal
- sensors
- estimation
- operated
- speech
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/0204—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using subband decomposition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L2021/02082—Noise filtering the noise being echo, reverberation of the speech
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L2021/02161—Number of inputs available containing the signal or the noise to be suppressed
- G10L2021/02166—Microphone arrays; Beamforming
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L21/0232—Processing in the frequency domain
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/06—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being correlation coefficients
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/18—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2201/00—Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
- H04R2201/40—Details of arrangements for obtaining desired directional characteristic by combining a number of identical transducers covered by H04R1/40 but not provided for in any of its subgroups
- H04R2201/403—Linear arrays of transducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/13—Acoustic transducers and sound field adaptation in vehicles
Definitions
- the invention relates to the treatment of speech in a noisy environment.
- microphones sensitive not only to the voice of the user, but also picking up the surrounding noise and the echo due to the phenomenon of reverberation by the environment, typically the passenger compartment of the vehicle.
- the useful component (the speech signal of the near speaker) is thus embedded in a parasitic noise component (external noises and reverberation) which can often go so far as to make the words of the near speaker incomprehensible to the far speaker (the one at the other end of the voice-signal transmission path).
- Some of these devices provide for the use of multiple microphones and use the average of the picked-up signals, or other more complex operations, to obtain a signal with a lower level of interference.
- so-called beamforming techniques make it possible to create, by software means, a directivity which improves the signal / noise ratio.
- the performance of this technique is very limited when only two microphones are used (concretely, it is estimated that such a method gives good results only on condition of having an array of at least eight microphones). The performance is also strongly degraded when the environment is reverberant.
- the aim of the invention is to propose a solution for denoising the audio signals picked up by such a multichannel, multi-microphone system, in a very noisy and very reverberant environment, typically the passenger compartment of a car.
- the main difficulty with speech-processing methods for multichannel systems is that of estimating the parameters useful for the processing to be applied, because the estimators depend strongly on the ambient environment.
- the EP 2 293 594 A1 (Parrot SA) describes a method for the spatial detection and filtering of nonstationary and directional noises such as horn blasts, passing a scooter, overtaking by a car, etc.
- the proposed technique consists in combining the properties of temporal and frequency non-stationarity, on the one hand, and spatial directivity, on the other hand, to detect a type of noise that is usually difficult to discriminate from speech, in order to ensure an efficient filtering of this noise and, in addition, to deduce a probability of presence of speech which will further improve the attenuation of the noise.
- the EP 2 309 499 A1 (Parrot SA) describes a system with two microphones operating a spatial coherence analysis of the signal picked up in order to determine a direction of incidence.
- the system calculates two noise references according to different methods, one according to the spatial coherence of the signals picked up (which integrates non-stationary non-directional noise) and another according to the main direction of incidence of the signals (which especially integrates directive nonstationary noises).
- This denoising technique is based on the hypothesis that speech generally has a higher spatial coherence than noise and that, moreover, the direction of speech incidence is generally well defined and can be assumed to be known: in the case of a motor vehicle, it is defined by the position of the driver, towards which the microphones are turned.
- the denoised signal obtained at the output satisfactorily reproduces the amplitude of the initial speech signal, but not its phase, which can cause a distortion of the voice reproduced by the device.
- the problem addressed by the invention is to take into account a reverberant environment that does not make it possible to satisfactorily calculate a direction of arrival of the useful signal and, at the same time, to obtain a denoising which restores both the amplitude and the phase of the initial signal, thus not distorting the voice of the speaker when it is reproduced by the device.
- the method of the invention is a denoising method for a device comprising a network formed of a plurality of microphone sensors arranged in a predetermined configuration.
- the calculation of the optimal linear projector of step d) is carried out by a Capon beamforming processing with minimum variance distortionless response (MVDR).
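As an illustration, the Capon/MVDR projector named above can be sketched for one frequency bin as follows (a minimal sketch in Python/numpy; the function name and example values are illustrative, not taken from the patent):

```python
import numpy as np

def mvdr_projector(H, Rn):
    """Capon/MVDR weights for one frequency bin.

    H  : (n,) vector of estimated acoustic transfer functions (steering vector)
    Rn : (n, n) noise spectral covariance matrix
    Returns w such that w.conj() @ x is the projected single-channel signal:
    distortionless along H (w.conj() @ H == 1) with minimum noise variance.
    """
    Rn_inv_H = np.linalg.solve(Rn, H)          # Rn^{-1} H without an explicit inverse
    return Rn_inv_H / (H.conj() @ Rn_inv_H)    # normalize for the distortionless constraint
```

With Rn proportional to the identity this reduces to the matched filter H / (Hᴴ H); the distortionless constraint is what lets the projector restore both the amplitude and the phase of the useful signal.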
- step e) is effected by an optimized modified log-spectral amplitude (OM-LSA) gain processing.
- the estimation of the transfer function of step c) is performed by calculating an adaptive filter aimed at cancelling the difference between the signal collected by the sensor whose transfer function is to be estimated and the signal collected by the useful-signal reference sensor, with modulation by the probability of presence of speech.
- the adaptive filter can in particular be a least mean squares (LMS) linear prediction algorithm, and the speech presence probability modulation, a modulation by variation of the iteration step of the adaptive filter.
- the spectrum of the signal to be denoised is advantageously divided into a plurality of distinct spectrum parts, the sensors being grouped into a plurality of subnetworks each associated with one of the parts of the spectrum.
- the denoising process is then operated in a differentiated manner, for each part of the spectrum, on the signals collected by the sensors of the sub-network corresponding to the part of the spectrum considered.
- the spectrum of the signal to be denoised can be divided into a low frequency portion and a high frequency portion.
- the denoising processing steps are then performed only on the signals collected by the sensors of the array that are furthest apart.
- it is also possible, again with the spectrum of the signal to be denoised divided into a plurality of distinct spectrum parts, to estimate in step c) the transfer function of the acoustic channels in a differentiated manner, by applying different processings to each of the parts of the spectrum.
- the sensor array is a linear array of aligned sensors and the sensors are grouped into a plurality of sub-arrays each associated with one of the parts of the spectrum: for the low-frequency part, the denoising processing is performed only on the signals collected by the sensors of the array that are furthest apart, and the estimation of the transfer function is performed by calculating an adaptive filter; for the high-frequency part, the denoising processing is performed on the signals collected by all the sensors of the array, and the estimation of the transfer function is performed by a diagonalization processing.
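A sketch of this low/high split in Python (the crossover frequency and the sensor labels are assumptions for illustration; the patent does not fix these values here):

```python
def subarray_for_bin(f_hz, crossover_hz=1000.0):
    """Choose the sub-array and the channel-estimation method per frequency bin.

    Low band  (BF): only the two outermost sensors, LMS adaptive filter.
    High band (HF): all four sensors, diagonalization processing.
    The 1 kHz crossover is an assumption made for this sketch.
    """
    if f_hz < crossover_hz:
        return {"sensors": ["M1", "M4"], "method": "lms"}
    return {"sensors": ["M1", "M2", "M3", "M4"], "method": "diagonalization"}
```

Using the widely spaced pair at low frequencies keeps the noises decorrelated, while using the full, densely sampled array at high frequencies avoids spatial aliasing.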
- each sensor can be likened to a single microphone M 1 ... M n capturing a reverberated version of a speech signal emitted by a useful signal source S (the speech of a near speaker 10), to which signal a noise is added.
- the signal picked up can thus be written x i = h i * s + b i (* denoting convolution), where x i is the signal picked up by the sensor M i, h i is the impulse response between the useful signal source S and the sensor M i, s is the useful signal produced by the source S (speech signal of the near speaker 10), and b i is the additive noise.
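The capture model just described (each sensor sees a convolved, i.e. reverberated, copy of the source plus additive noise) can be illustrated with synthetic data; all signals below are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Capture model x_i = h_i * s + b_i for n sensors:
n_sensors, n_samples = 4, 1000
s = rng.standard_normal(n_samples)               # useful source signal s
h = rng.standard_normal((n_sensors, 8)) * 0.3    # short synthetic impulse responses h_i
h[:, 0] = 1.0                                    # direct path dominates
x = np.stack([np.convolve(h_i, s)[:n_samples] for h_i in h])  # reverberated copies
x += 0.1 * rng.standard_normal(x.shape)          # additive noise b_i on each sensor
```

Each row of `x` plays the role of one microphone signal x i in the bullets above.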
- the proposed technique consists, on the basis of the elements that have just been described, in searching, for each frequency, for an optimal linear projector.
- "projector" means an operator corresponding to a transformation of a plurality of signals, collected concurrently by a multichannel device, into a single-channel signal.
- This projection is an "optimal" linear projection in that the residual noise component (noise and reverberation) in the single-channel output signal is minimized and the useful speech component is distorted as little as possible.
- R n being the noise correlation matrix between the microphones, for each frequency, and H the acoustic channel considered.
- a first technique consists of using an LMS type algorithm in the frequency domain.
- one of the channels will be taken as the useful signal reference, for example the channel of the microphone M 1 , and the transfer functions H 2 ... H n will be calculated for the other channels.
- the reverberated (thus corrupted) version of the speech signal S picked up by the microphone M 1 is taken as the useful signal reference; the presence of the reverberation in the signal picked up is not a problem because at this stage one seeks to operate a denoising and not a de-reverberation.
- the LMS algorithm aims (in known manner) to estimate, by means of an adaptive algorithm, a filter H (block 14) corresponding to the signal x i delivered by the microphone M i , by estimating the transfer between the microphone M i and the microphone M 1 (taken as reference).
- the output of the filter 14 is subtracted at 16 from the signal x 1 picked up by the microphone M 1 to give a prediction error signal allowing iterative adaptation of the filter 14. It is thus possible to predict, from the signal x i , the (reverberated) speech component contained in the signal x 1 .
- the signal x 1 is slightly delayed (block 18).
- an element 20 is added for weighting the error signal of the adaptive filter 14 by the probability p of presence of speech delivered at the output of the block 22: the point is to adapt the filter only when the probability of presence of speech is high. This weighting can be made in particular by modifying the adaptation step as a function of the probability p .
- H i (k+1) = H i (k) + μ · X i (k) · (X 1 (k) − H i (k) · X i (k)) / (X i (k)^T X i (k))
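A normalized-LMS update of this kind, with the adaptation step additionally scaled by the speech presence probability p, can be sketched per frequency bin as follows (one complex tap per bin, and the step size and names are illustrative assumptions of this sketch):

```python
import numpy as np

def nlms_step(H_i, x_ref, x_i, p, mu=0.5, eps=1e-8):
    """One probability-modulated NLMS update for one frequency bin.

    H_i   : current complex estimate of the transfer between sensor i
            and the reference sensor (a single tap per bin -- a sketch)
    x_ref : reference-sensor spectrum value X_1(k)
    x_i   : sensor-i spectrum value X_i(k)
    p     : speech presence probability; scaling the step by p makes the
            filter adapt mainly when speech is present
    """
    err = x_ref - H_i * x_i                                     # prediction error
    H_i = H_i + mu * p * np.conj(x_i) * err / (abs(x_i) ** 2 + eps)
    return H_i, err
```

With p = 0 the estimate is frozen, which is the desired behaviour during noise-only frames.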
- Another possible technique for estimating the acoustic channel is to operate by matrix diagonalization.
- R n (k+1) = λ · R n (k) + (1 − λ) · X X^T , λ being a forgetting factor (fixed, since the entire signal is taken into account).
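The recursive estimate of the spectral covariance with a forgetting factor can be sketched as follows (a minimal version for one frequency bin; in the method this update would further be gated by the speech presence probability, which is not shown here):

```python
import numpy as np

def update_noise_cov(Rn, X, lam=0.98):
    """Recursive spectral covariance update for one frequency bin:
    Rn <- lam * Rn + (1 - lam) * X X^H, lam being the forgetting factor.

    X is the (n,) vector of sensor spectra at this bin; the Hermitian
    outer product is used since the spectra are complex.
    """
    return lam * Rn + (1 - lam) * np.outer(X, X.conj())
```

A value of lam close to 1 averages over many frames, matching the remark that the factor is fixed because the entire signal is taken into account.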
- MSC(f) = sinc²(2 f d / c) (with sinc the normalized cardinal sine), f being the frequency considered, d being the distance between the sensors, and c the speed of sound.
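The diffuse-field coherence model behind this formula can be sketched as follows (np.sinc is the normalized sinc, so np.sinc(2fd/c) equals sin(2πfd/c)/(2πfd/c); the speed-of-sound value is the usual 343 m/s assumption):

```python
import numpy as np

def diffuse_msc(f, d, c=343.0):
    """Magnitude-squared coherence of an ideal diffuse noise field between
    two sensors spaced d metres apart, at frequency f (Hz)."""
    return np.sinc(2 * f * d / c) ** 2
```

Evaluating it shows why spacing matters: at a given frequency, the wider the spacing d, the lower the coherence, i.e. the better the noises are decorrelated.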
- spacing the microphones apart, which makes it possible to decorrelate the noises, however has the disadvantage of amounting, in the spatial domain, to sampling at a lower rate, with the consequence of an aliasing of the high frequencies, which will be less well restored.
- the invention proposes to solve this difficulty by selecting different sensor configurations according to the frequencies processed.
- Figure 5 is a block diagram showing the different steps of processing the signals from a linear array of four microphones M 1 ... M 4 such as that illustrated in Figure 4.
- the processing that will be described is applied in the frequency domain, at each frequency bin, that is to say for each frequency band defined for the successive time frames of the signal collected by the microphones (the four microphones M 1 , M 2 , M 3 and M 4 for the high part of the spectrum HF, and the two microphones M 1 and M 4 for the low part of the spectrum BF).
- These signals correspond, in the frequency domain, to vectors X 1 ... X n ( X 1 , X 2 , X 3 and X 4 and X 1 , X 4 , respectively).
- a block 22 produces, from the signals collected by the microphones, a probability p of presence of speech. As indicated above, this estimation is carried out according to a technique that is itself known, for example that described in WO 2007/099222 A1 , which can be referred to for more details.
- Block 44 schematizes a selector of the acoustic channel estimation method, i.e. by diagonalization on the basis of the signals collected by the four microphones M 1 , M 2 , M 3 and M 4 (block 28 of Figure 5, for the high part of the spectrum HF), or by LMS adaptive filter on the basis of the signals collected by the two outermost microphones M 1 and M 4 (block 38 of Figure 5, for the low part of the spectrum BF).
- Block 46 corresponds to the estimate of the spectral noise matrix, designated R n , used for the calculation of the optimal linear projector, and also used for the diagonalization calculation of block 28 when the transfer function of the acoustic channel is estimated in this way.
- Block 48 corresponds to the calculation of the optimal linear projector.
- the projection calculated at 48 is an optimal linear projection, in that the residual noise component on the single channel signal output is minimized (noise and reverberation).
- the optimal linear projector has the particularity of realigning the phases of the different input signals, which makes it possible to obtain at the output a projected signal S pr which recovers the phase of the initial speech signal of the speaker (and also, of course, the amplitude of this signal).
- the final step (block 50) consists of selectively reducing the noise by applying a variable gain specific to each frequency band and each time frame to the projected signal S pr .
- This denoising is also modulated by the probability of speech presence p.
- the signal S HF / BF output by the denoising block 50 then undergoes an inverse fast Fourier transform iFFT (blocks 30, 40 of Figure 5) to obtain in the time domain the denoised speech signal S HF or S BF sought, giving, after reconstruction of the complete spectrum, the final denoised speech signal s.
- iFFT fast inverse Fourier transform
- LSA Log-Spectral Amplitude
- the "OM-LSA" (Optimally-Modified Log-Spectral Amplitude) algorithm improves the calculation of the LSA gain to be applied by weighting it by the conditional probability p of presence of speech.
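The OM-LSA weighting of the gain by the probability p can be sketched as follows; as a simplification (an assumption of this sketch, not the patent's formula), the spectral gain itself is taken as the Wiener gain ξ/(1+ξ), whereas the true LSA gain contains an additional exponential-integral factor:

```python
def om_lsa_like_gain(xi, p, g_min=0.1):
    """OM-LSA-style gain for one frequency bin: the spectral gain is
    weighted by the speech presence probability p, with a floor g_min
    applied when speech is unlikely:  G = G_spec**p * g_min**(1 - p).

    Assumption of this sketch: G_spec is the Wiener gain xi / (1 + xi),
    xi being the a-priori SNR (its estimation is not shown here).
    """
    g_spec = xi / (1.0 + xi)
    return g_spec ** p * g_min ** (1.0 - p)
```

With p = 0 the gain collapses to the floor g_min (strong attenuation of noise-only bins); with p = 1 the full spectral gain is applied.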
Claims (11)
- Method for denoising a noisy acoustic signal for a multi-microphone audio device operating in a noisy environment, in particular a "hands-free" telephone device,
wherein the noisy acoustic signal contains a useful component originating from a speech source (S) and a parasitic noise component,
wherein the device comprises a sensor array formed of a plurality of microphone sensors (M1 ... Mn) arranged in a predetermined configuration and able to pick up the noisy signal,
characterised in that it comprises the following processing steps in the frequency domain, for a plurality of frequency bands defined for successive time frames of the signal: a) estimation (22) of a speech presence probability (p) in the picked-up noisy signal; b) estimation (46) of a spectral covariance matrix (Rn) of the noise picked up by the sensors, this estimation being modulated by the speech presence probability (p); c) estimation of the transfer function (H1 ... Hn) of the acoustic channels between the speech source (S) and at least some of the sensors (M1 ... Mn), this estimation being performed with respect to a useful-signal reference consisting of the signal picked up by one of the sensors (M1) and further being modulated by the speech presence probability (p); d) calculation (48) of an optimal linear projector delivering a single denoised combined signal from the signals (X1 ... Xn) picked up by at least some of the sensors, from the spectral covariance matrix (Rn) estimated in step b) and from the transfer functions (H1 ... Hn) estimated in step c); and e) on the basis of the speech presence probability (p) and of the combined signal delivered by the projector calculated in step d), selective reduction of the noise (50) by applying a variable gain specific to each frequency band and each time frame.
- Method according to claim 1, wherein the calculation (48) of the optimal linear projector of step d) is performed by a Capon beamforming processing with minimum variance distortionless response MVDR.
- Method according to claim 1, wherein the selective noise reduction (50) of step e) is performed by an optimized modified log-spectral amplitude OM-LSA gain processing.
- Method according to claim 1, wherein the estimation of the transfer function of step c) is performed by calculation (38) of an adaptive filter (14) aimed at cancelling the difference between the signal (Xi) picked up by the sensor whose transfer function is to be determined and the signal (X1) picked up by the useful-signal reference sensor, with modulation by the speech presence probability (p).
- Method according to claim 4, wherein the adaptive filter (14) is a filter with a least mean squares LMS linear prediction algorithm.
- Method according to claim 4, wherein the modulation by the speech presence probability (p) is a modulation by variation of the iteration step of the adaptive filter (14).
- Method according to claim 1, wherein the estimation of the transfer function of step c) is performed by a diagonalization processing (28) comprising: c1) determining a spectral correlation matrix (Rx) of the signals picked up by the sensors of the array, with respect to the useful-signal reference sensor; c2) calculating the difference between, on the one hand, the matrix (Rx) determined in step c1) and, on the other hand, the spectral covariance matrix (Rn) of the noise calculated in step b) and modulated by the speech presence probability (p); and c3) diagonalizing the difference matrix calculated in step c2).
- Method according to claim 1, wherein: - the spectrum of the signal to be denoised is divided into a plurality of distinct spectrum parts (BF, HF), - the sensors are grouped into a plurality of sub-arrays (M1 ... M4; M1, M4) each associated with one of the parts of the spectrum, and - the denoising processing is performed in a differentiated manner, for each of the parts of the spectrum, on the signals picked up by the sensors of the sub-array corresponding to the part of the spectrum concerned.
- Method according to claim 8, wherein: - the sensor array is a linear array of aligned sensors (M1 ... M4), - the spectrum of the signal to be denoised is divided into a low-frequency part (BF) and a high-frequency part (HF), and - for the low-frequency part, the steps of the denoising processing are performed only on the signals picked up by the sensors of the array that are furthest apart (M1, M4).
- Method according to claim 1, wherein: - the spectrum of the signal to be denoised is divided into a plurality of distinct spectrum parts (BF, HF), and - step c) of estimating the transfer function of the acoustic channels is performed in a differentiated manner, by applying different processings (28, 38) to each of the parts of the spectrum.
- Method according to claim 10, wherein: - the sensor array is a linear array of aligned sensors (M1 ... M4), - the sensors are grouped into a plurality of sub-arrays (M1 ... M4; M1, M4) each associated with one of the parts of the spectrum, - for the low-frequency part (BF), the denoising processing is performed only on the signals picked up by the sensors (M1, M4) of the array that are furthest apart, and the estimation of the transfer function is performed by calculation of an adaptive filter (38), and - for the high-frequency part, the denoising processing is performed on the signals picked up by all the sensors of the array (M1 ... M4), and the estimation of the transfer function is performed by a diagonalization processing (28).
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1155377A FR2976710B1 (fr) | 2011-06-20 | 2011-06-20 | Denoising method for multi-microphone audio equipment, in particular for a "hands-free" telephony system |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2538409A1 EP2538409A1 (de) | 2012-12-26 |
EP2538409B1 true EP2538409B1 (de) | 2013-08-28 |
Family
ID=46168348
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP12170874.7A Active EP2538409B1 (de) | 2012-06-05 | Method for noise reduction for an audio device with multiple microphones, in particular for a telephone hands-free system
Country Status (4)
Country | Link |
---|---|
US (1) | US8504117B2 (de) |
EP (1) | EP2538409B1 (de) |
CN (1) | CN102855880B (de) |
FR (1) | FR2976710B1 (de) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9626982B2 (en) * | 2011-02-15 | 2017-04-18 | Voiceage Corporation | Device and method for quantizing the gains of the adaptive and fixed contributions of the excitation in a CELP codec |
FR2992459B1 (fr) * | 2012-06-26 | 2014-08-15 | Parrot | Method for denoising an acoustic signal for a multi-microphone audio device operating in a noisy environment. |
US10540992B2 (en) * | 2012-06-29 | 2020-01-21 | Richard S. Goldhor | Deflation and decomposition of data signals using reference signals |
US10473628B2 (en) * | 2012-06-29 | 2019-11-12 | Speech Technology & Applied Research Corporation | Signal source separation partially based on non-sensor information |
US10872619B2 (en) * | 2012-06-29 | 2020-12-22 | Speech Technology & Applied Research Corporation | Using images and residues of reference signals to deflate data signals |
EP2893532B1 (de) * | 2012-09-03 | 2021-03-24 | Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung e.V. | Apparatus and method for informed multichannel speech presence probability estimation |
US9257132B2 (en) * | 2013-07-16 | 2016-02-09 | Texas Instruments Incorporated | Dominant speech extraction in the presence of diffused and directional noise sources |
CN105594131B (zh) * | 2013-11-29 | 2018-02-06 | Huawei Technologies Co., Ltd. | Method and apparatus for reducing self-interference signals in a communication system |
US9544687B2 (en) * | 2014-01-09 | 2017-01-10 | Qualcomm Technologies International, Ltd. | Audio distortion compensation method and acoustic channel estimation method for use with same |
DE112014006281T5 (de) * | 2014-01-28 | 2016-10-20 | Mitsubishi Electric Corporation | Sound collecting device, correction method for the input signal of the sound collecting device, and mobile device information system |
TR201815883T4 (tr) * | 2014-03-17 | 2018-11-21 | Anheuser Busch Inbev Sa | Noise suppression. |
CN105681972B (zh) * | 2016-01-14 | 2018-05-01 | Nanjing University of Information Science and Technology | Robust frequency-invariant beamforming method based on linearly constrained minimum variance with diagonal loading |
US10657983B2 (en) | 2016-06-15 | 2020-05-19 | Intel Corporation | Automatic gain control for speech recognition |
GB2556058A (en) * | 2016-11-16 | 2018-05-23 | Nokia Technologies Oy | Distributed audio capture and mixing controlling |
US10930298B2 (en) * | 2016-12-23 | 2021-02-23 | Synaptics Incorporated | Multiple input multiple output (MIMO) audio signal processing for speech de-reverberation |
CN110731088B (zh) * | 2017-06-12 | 2022-04-19 | Yamaha Corporation | Signal processing device, teleconferencing device, and signal processing method |
US11270720B2 (en) * | 2019-12-30 | 2022-03-08 | Texas Instruments Incorporated | Background noise estimation and voice activity detection system |
CN114813129B (zh) * | 2022-04-30 | 2024-03-26 | Beijing University of Chemical Technology | Rolling bearing acoustic signal fault diagnosis method based on WPE and EMD |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7103541B2 (en) * | 2002-06-27 | 2006-09-05 | Microsoft Corporation | Microphone array signal enhancement using mixture models |
US6798380B2 (en) * | 2003-02-05 | 2004-09-28 | University Of Florida Research Foundation, Inc. | Robust capon beamforming |
WO2004084187A1 (ja) * | 2003-03-17 | 2004-09-30 | Nagoya Industrial Science Research Institute | 対象音検出方法、信号入力遅延時間検出方法及び音信号処理装置 |
CN101189656A (zh) * | 2003-11-24 | 2008-05-28 | Koninklijke Philips Electronics N.V. | Adaptive beamformer robust against uncorrelated noise |
FR2898209B1 (fr) * | 2006-03-01 | 2008-12-12 | Parrot Sa | Method for denoising an audio signal |
GB2437559B (en) * | 2006-04-26 | 2010-12-22 | Zarlink Semiconductor Inc | Low complexity noise reduction method |
US7945442B2 (en) * | 2006-12-15 | 2011-05-17 | Fortemedia, Inc. | Internet communication device and method for controlling noise thereof |
US9142221B2 (en) * | 2008-04-07 | 2015-09-22 | Cambridge Silicon Radio Limited | Noise reduction |
US9224395B2 (en) * | 2008-07-02 | 2015-12-29 | Franklin S. Felber | Voice detection for automatic volume controls and voice sensors |
US8380497B2 (en) * | 2008-10-15 | 2013-02-19 | Qualcomm Incorporated | Methods and apparatus for noise estimation |
FR2948484B1 (fr) * | 2009-07-23 | 2011-07-29 | Parrot | Method for filtering non-stationary lateral noises for a multi-microphone audio device, in particular a "hands-free" telephone device for a motor vehicle |
FR2950461B1 (fr) * | 2009-09-22 | 2011-10-21 | Parrot | Method for optimized filtering of non-stationary noises picked up by a multi-microphone audio device, in particular a "hands-free" telephone device for a motor vehicle |
CN101916567B (zh) * | 2009-11-23 | 2012-02-01 | AAC Acoustic Technologies (Shenzhen) Co., Ltd. | Speech enhancement method applied to a dual-microphone system |
CN101894563B (zh) * | 2010-07-15 | 2013-03-20 | AAC Acoustic Technologies (Shenzhen) Co., Ltd. | Speech enhancement method |
- 2011
  - 2011-06-20 FR FR1155377A patent/FR2976710B1/fr not_active Expired - Fee Related
- 2012
  - 2012-06-05 EP EP12170874.7A patent/EP2538409B1/de active Active
  - 2012-06-05 US US13/489,214 patent/US8504117B2/en active Active
  - 2012-06-19 CN CN201210202063.6A patent/CN102855880B/zh active Active
Also Published As
Publication number | Publication date |
---|---|
FR2976710B1 (fr) | 2013-07-05 |
EP2538409A1 (de) | 2012-12-26 |
CN102855880B (zh) | 2016-09-28 |
US20120322511A1 (en) | 2012-12-20 |
FR2976710A1 (fr) | 2012-12-21 |
US8504117B2 (en) | 2013-08-06 |
CN102855880A (zh) | 2013-01-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2538409B1 (de) | Noise attenuation method for a multi-microphone audio device, in particular for a hands-free telephone system | |
EP2680262B1 (de) | Method for noise attenuation of an audio signal for a multi-microphone audio device operated in noisy environments | |
EP2309499B1 (de) | Method for optimized filtering of non-stationary noise picked up by a multi-microphone audio device, in particular a hands-free telephone system for motor vehicles | |
EP2293594B1 (de) | Method for filtering lateral non-stationary noise for a multi-microphone audio device | |
EP1830349B1 (de) | Method for noise attenuation of an audio signal | |
EP2430825B1 (de) | Method for selecting one of two or more microphones for a speech processing system, such as a hands-free telephone device, operated in a noisy environment | |
EP2772916B1 (de) | Method for noise attenuation of an audio signal using a variable-spectral-gain algorithm with dynamically modulatable hardness | |
FR2909773A1 (fr) | Multichannel passive radar processing method for an FM signal of opportunity | |
FR2831717A1 (fr) | Interference cancellation method and system for a multi-sensor antenna | |
WO2008125774A2 (fr) | Method for active reduction of a sound nuisance | |
FR2975193A1 (fr) | Method and system for locating interference affecting a satellite radionavigation signal | |
EP0998166A1 (de) | Audio signal processing arrangement, receiver, and method for filtering and reproducing a useful signal in the presence of ambient noise | |
EP0692883B1 (de) | Blind equalization method and its application to speech recognition | |
EP0884926B1 (de) | Method and device for optimized processing of an interfering signal during sound pickup | |
FR2906070A1 (fr) | Multi-reference noise reduction for speech applications in an automotive environment | |
FR2808391A1 (fr) | Reception system for a multi-sensor antenna | |
EP3025342A1 (de) | Method for suppressing the late reverberation of an acoustic signal | |
FR2906071A1 (fr) | Multiband noise reduction with a non-acoustic noise reference | |
FR3116348A1 (fr) | Improved localization of an acoustic source | |
EP1155497B1 (de) | System and method for processing antenna signals | |
FR3113537A1 (fr) | Method and electronic device for multichannel noise reduction in an audio signal comprising a speech part, and associated computer program product | |
FR3121542A1 (fr) | Estimation of an optimized mask for the processing of acquired sound data | |
WO2018220327A1 (fr) | Method for spatial and temporal suppression of multipath interference for a receiver of frequency-modulated radio signals | |
FR2878399A1 (fr) | Two-channel denoising device and method implementing a coherence function combined with the use of psychoacoustic properties, and corresponding computer program | |
FR2724028A1 (fr) | Method for blind estimation of differential delays between two signals |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20120605 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 602012000246 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: G10L0021020000 Ipc: G10L0021020800 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G10L 21/0208 20130101AFI20130327BHEP Ipc: H04R 3/00 20060101ALI20130327BHEP |
|
INTG | Intention to grant announced |
Effective date: 20130423 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 629726 Country of ref document: AT Kind code of ref document: T Effective date: 20130915 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: FRENCH |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602012000246 Country of ref document: DE Effective date: 20131024 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: T3 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 629726 Country of ref document: AT Kind code of ref document: T Effective date: 20130828 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131230 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130918 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131228 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131128 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131129 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602012000246 Country of ref document: DE |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20140530 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602012000246 Country of ref document: DE Effective date: 20140530 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140605 Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20140605 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 4 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R081 Ref document number: 602012000246 Country of ref document: DE Owner name: PARROT AUTOMOTIVE, FR Free format text: FORMER OWNER: PARROT, PARIS, FR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 732E Free format text: REGISTERED BETWEEN 20151029 AND 20151104 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: TP Owner name: PARROT AUTOMOTIVE, FR Effective date: 20151201 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: PD Owner name: PARROT AUTOMOTIVE; FR Free format text: DETAILS ASSIGNMENT: VERANDERING VAN EIGENAAR(S), OVERDRACHT; FORMER OWNER NAME: PARROT Effective date: 20151102 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20150630 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20150630 Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 5 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140630 Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20120605 Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 6 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 7 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130828 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: CZ Payment date: 20190611 Year of fee payment: 8 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MM Effective date: 20200701 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200701 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20230523 Year of fee payment: 12 Ref country code: FR Payment date: 20230523 Year of fee payment: 12 Ref country code: DE Payment date: 20230523 Year of fee payment: 12 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20230523 Year of fee payment: 12 |