EP1091349A2 - Verfahren und Vorrichtung zur Geräuschunterdrückung bei der Sprachübertragung - Google Patents
Method and device for noise suppression in speech transmission
- Publication number
- EP1091349A2 (application EP00250301A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- layer
- spectrum
- noise
- filter
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
Definitions
- The invention relates to a method and a device for noise suppression in speech transmission by means of a multi-layered, self-organizing, feedback neural network.
- The Wiener-Kolmogorov filter is derived from optimal filter theory (S.V. Vaseghi, Advanced Signal Processing and Digital Noise Reduction, John Wiley and Teubner, 1996). This approach is based on minimizing the mean square error between the actual and the expected speech signal. The filter concept requires a considerable amount of computing effort; moreover, like most known methods, it theoretically presupposes a stationary interference signal.
- The Kalman filter is based on a similar filter principle (E. Wan and A. Nelson, Removal of Noise from Speech Using the Dual Extended Kalman Filter Algorithm, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'98), Seattle, 1998).
- A disadvantage of this filter concept is the long training time required to determine the filter parameters.
- LPC requires the complex calculation of correlation matrices in order to compute filter coefficients by a linear prediction method, as in T. Arai, H. Hermansky, M. Pavel and C. Avendano, Intelligibility of Speech with Filtered Time Trajectories of LPC Cepstrum, The Journal of the Acoustical Society of America, Vol. 100, No. 4, Pt. 2, p. 2756, 1996.
- The object of the present invention is to create a method that, with little computational effort, recognizes a speech signal by its temporal and spectral properties and frees it from noise.
- This object is achieved in that a minima detection layer, a reaction layer, a diffusion layer and an integration layer determine a filter function F(f,T) for noise filtering.
- A network designed in this way recognizes a speech signal by its temporal and spectral properties and frees it from noise. Compared with known methods, the required computational effort is low.
- The method is characterized by a particularly short adaptation time within which the system adjusts to the type of noise.
- The signal delay during processing is so short that the filter can be operated in real time for telecommunications.
- FIG. 1 schematically shows an exemplary overall system for speech filtering. It consists of a sampling unit 10, which samples and discretizes the noisy speech signal in time t and thus generates samples x(t) that are combined at time T into frames of n samples each.
- The Fourier transform of each frame yields the spectrum A(f,T) at time T, which is fed to a filter unit 11. With the help of a neural network, as shown in Figure 2, a filter function F(f,T) is calculated, by which the spectrum A(f,T) of the signal is multiplied in order to generate the noise-free spectrum B(f,T). The signal filtered in this way is then passed to a synthesis unit 12, which synthesizes the noise-free speech signal y(t) from the filtered spectrum B(f,T) by means of an inverse Fourier transformation.
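The processing chain of sampling unit 10, filter unit 11 and synthesis unit 12 can be sketched as follows. This is a minimal, non-overlapping-frame sketch in Python/NumPy; `denoise` and `filter_fn` are hypothetical names, and `filter_fn` merely stands in for the neural network of Figure 2:

```python
import numpy as np

def denoise(x, n=256, filter_fn=None):
    """Frame-wise spectral filtering: B(f,T) = F(f,T) * A(f,T).

    `denoise` and `filter_fn` are hypothetical names; `filter_fn` maps a
    magnitude spectrum |A(f,T)| to a gain F(f,T) and merely stands in for
    the neural network of Figure 2 (default: pass-through gain of 1).
    """
    if filter_fn is None:
        filter_fn = lambda mag: np.ones_like(mag)
    y = np.zeros(len(x), dtype=float)
    for start in range(0, len(x) - n + 1, n):    # non-overlapping frames of n samples
        frame = x[start:start + n]
        A = np.fft.rfft(frame)                   # spectrum A(f,T) of the frame
        F = filter_fn(np.abs(A))                 # filter function F(f,T)
        B = F * A                                # filtered spectrum B(f,T)
        y[start:start + n] = np.fft.irfft(B, n)  # synthesis by inverse Fourier transform
    return y
```

With the default pass-through gain the analysis/synthesis chain reproduces the input exactly, which makes the framing and transform steps easy to verify.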
- FIG. 2 shows the neural network containing a minima detection layer, a reaction layer, a diffusion layer and an integration layer, which is the particular subject of the invention and to which the spectrum A(f,T) of the input signal is supplied, from which the filter function F(f,T) is calculated.
- Each mode of the spectrum, distinguished by its frequency f, corresponds to a single neuron per layer of the network, except in the integration layer.
- The individual layers are specified in more detail in the following figures.
- Figure 3 shows a neuron of the minima detection layer, which determines M(f,T).
- M(f,T) is, for the mode with frequency f, the minimum of the amplitude A(f,T) averaged over m frames, taken within an interval of time T corresponding to the length of l frames.
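The minima detection described above, an m-frame average followed by a minimum over l frames, can be sketched as follows; `minima_detection` is a hypothetical helper name:

```python
import numpy as np

def minima_detection(A_hist, m, l):
    """M(f,T): per mode, the minimum over the last l frames of the m-frame
    moving average of the amplitudes A(f,T).  `A_hist` is a (frames, modes)
    array of past amplitude spectra, newest frame last (hypothetical helper)."""
    kernel = np.ones(m) / m
    # m-frame moving average along the time axis, one column per mode f
    avg = np.apply_along_axis(lambda a: np.convolve(a, kernel, mode="valid"),
                              0, A_hist)
    # minimum of the averaged amplitudes over the last l frames
    return avg[-l:].min(axis=0)
```

Because a short speech burst raises the average only in a few windows, the minimum over l frames continues to track the noise floor.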
- Figure 4 shows a neuron of the reaction layer, which determines the relative spectrum R(f,T) from A(f,T) and M(f,T) with the help of a reaction function r[S(T-1)] of the integral signal S(T-1), shown in detail in Figure 6, and a freely selectable parameter K, which sets the degree of noise suppression.
- R(f,T) has a value between zero and one.
- The reaction layer distinguishes speech from noise on the basis of the temporal behavior of the signal.
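The text does not give the exact formula for R(f,T); the following sketch only illustrates the stated ingredients, a reaction function r bounded to [r1, r2], the control parameter K, the noise-floor estimate M(f,T), and an output limited to [0, 1]. Every functional form below is an assumption:

```python
import numpy as np

def reaction_layer(A, M, S_prev, K, r1=0.1, r2=1.0):
    """Sketch of R(f,T) in [0, 1] from A(f,T), M(f,T), the fed-back integral
    signal S(T-1) and the control parameter K.  The exact formulas are not
    given in the text; every functional form here is an assumption."""
    # reaction function r: monotone in S(T-1), bounded to [r1, r2]
    r = r1 + (r2 - r1) / (1.0 + np.exp(-S_prev))
    # modes well above the (scaled) noise floor M pass; others are suppressed
    excess = np.maximum(A - r * K * M, 0.0)
    R = 1.0 - np.exp(-excess / np.maximum(M, 1e-12))
    return np.clip(R, 0.0, 1.0)
```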
- FIG. 5 shows neurons of the diffusion layer, in which a local coupling between the modes, corresponding to a diffusion, is established.
- The diffusion constant D determines the strength of the resulting smoothing over the frequencies f at a fixed time T.
- The diffusion layer determines from the relative signal R(f,T) the actual filter function F(f,T), by which the spectrum A(f,T) is multiplied in order to eliminate noise.
- Here speech is distinguished from noise on the basis of spectral properties.
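The local mode coupling can be illustrated by a discrete diffusion step over the frequency axis; the explicit Euler scheme and boundary handling below are assumptions, not taken from the text:

```python
import numpy as np

def diffusion_layer(R, D, steps=1):
    """F(f,T) from R(f,T) by local coupling of neighboring modes: one explicit
    diffusion step F_k += D * (F_{k-1} - 2 F_k + F_{k+1}) per iteration.
    Discretization and boundary handling are assumptions (stable for D <= 0.5)."""
    F = np.asarray(R, dtype=float).copy()
    for _ in range(steps):
        lap = np.empty_like(F)
        lap[1:-1] = F[:-2] - 2.0 * F[1:-1] + F[2:]
        lap[0] = F[1] - F[0]      # zero-flux boundary at the lowest mode
        lap[-1] = F[-2] - F[-1]   # zero-flux boundary at the highest mode
        F += D * lap              # D sets the smoothing strength
    return F
```

A single isolated mode is spread onto its neighbors, while a spectrally flat input passes unchanged.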
- Figure 6 shows, in the chosen embodiment of the invention, the only neuron of the integration layer, which integrates the filter function F(f,T) over the frequencies f at fixed time T and feeds the integral signal S(T) thus obtained back into the reaction layer, as Figure 2 shows.
- This global coupling ensures that the filtering is strong when the noise level is high, while noise-free speech is transmitted unadulterated.
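The global feedback loop, integration of F(f,T) over f into S(T), fed back into the reaction of the next frame, might be sketched like this; the gain rule is a simplified placeholder, since the text does not specify it:

```python
import numpy as np

def filter_frames(A_frames, K, r1=0.1, r2=1.0):
    """Global feedback loop across frames: the gains F(f,T) of each frame are
    summed to the integral signal S(T), which parameterizes the reaction of
    the next frame (time T+1).  The gain rule is a simplified placeholder;
    the text does not specify it."""
    S = 0.0                                    # S(T-1); no history before the first frame
    gains = []
    for A in A_frames:                         # A: magnitude spectrum of one frame
        r = r1 + (r2 - r1) / (1.0 + np.exp(-S))         # bounded reaction function r[S(T-1)]
        F = np.clip(A / (A + r * K + 1e-12), 0.0, 1.0)  # placeholder gain in [0, 1]
        S = float(F.sum())                     # integration layer: S(T) = sum of F over f
        gains.append(F)
    return gains
```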
- FIG. 7 shows an example of the filter properties of the invention for various settings of the control parameter K.
- The figure shows the attenuation of amplitude-modulated white noise as a function of the modulation frequency. For modulation frequencies between 0.6 Hz and 6 Hz the attenuation is less than 3 dB. This interval corresponds to the typical modulation of human speech.
- A speech signal affected by arbitrary background noise is sampled and digitized in a sampling unit 10, as shown in FIG. 1.
- Samples x(t) are obtained in time t.
- n samples x(t) are combined into a frame, from which a spectrum A(f,T) is calculated at time T using the Fourier transform.
- The modes of the spectrum differ in their frequency f.
- From the spectrum A(f,T) a filter function F(f,T) is generated and multiplied by the spectrum. This yields the filtered spectrum B(f,T), from which the noise-free speech signal y(t) is generated in a synthesis unit by inverse Fourier transformation. After digital-to-analog conversion, this signal can be made audible through a loudspeaker.
- The filter function F(f,T) is calculated by a neural network containing a minima detection layer, a reaction layer, a diffusion layer and an integration layer, as Figure 2 shows.
- The spectrum A(f,T) generated by the sampling unit 10 is first fed to the minima detection layer, as shown in Figure 3.
- A single neuron of this layer processes, independently of the other neurons of the minima detection layer, a single mode characterized by the frequency f. For this mode the neuron averages the amplitudes A(f,T) in time T over m frames. From these averaged amplitudes the neuron then determines, over a period in T corresponding to the length of l frames, the minimum for its mode. In this way the neurons of the minima detection layer generate the signal M(f,T), which is then fed to the reaction layer.
- Each neuron of the reaction layer, as FIG. 4 shows, processes a single mode of frequency f, independently of the other neurons in this layer.
- All neurons are also fed an externally adjustable parameter K, whose magnitude determines the degree of noise suppression of the entire filter, as well as the integral signal S(T-1) from the previous frame (time T-1), which has been computed in the integration layer, as shown in FIG. 6.
- This signal is the argument of a non-linear reaction function r, with whose help the neurons of the reaction layer calculate the relative spectrum R(f,T) at time T.
- The value range of the reaction function is restricted to an interval [r1, r2].
- The value range of the resulting relative spectrum R(f,T) is limited to the interval [0, 1].
- In the reaction layer, the temporal behavior of the speech signal is evaluated in order to distinguish the useful signal from the interference signal.
- Spectral properties of the speech signal are evaluated in the diffusion layer, as shown in FIG. 5, whose neurons perform a local mode coupling in the manner of a diffusion in the frequency domain.
- In the integration layer, the filter function F(f,T) is integrated over the modes, yielding the integral signal S(T). This integral signal is fed back into the reaction layer.
- This global coupling means that the strength of the signal manipulation in the filter depends on the interference level. Speech signals with a low noise level pass the filter practically unaffected, while at a high noise level a strong filter effect takes hold. This distinguishes the invention from classical bandpass filters, whose influence on the signal depends only on selected, fixed, predetermined parameters.
- The subject of the invention therefore has no frequency response in the conventional sense.
- When measuring with a tunable sinusoidal test signal, the modulation speed alone would already change the filter properties and thus influence the measurement.
- A suitable method for analyzing the properties of the filter therefore uses an amplitude-modulated noise signal whose attenuation is measured as a function of the modulation frequency.
- This "modulation course" is shown in FIG. 7 for different values of the control parameter K.
Description
- Figure 1
- the overall system for speech filtering;
- Figure 2
- a neural network containing a minima detection layer, a reaction layer, a diffusion layer and an integration layer;
- Figure 3
- a neuron of the minima detection layer, which determines M(f,T);
- Figure 4
- a neuron of the reaction layer, which determines the relative spectrum R(f,T) from A(f,T) and M(f,T) with the help of a reaction function r[S(T-1)] of the integral signal S(T-1) and a freely selectable parameter K, which determines the degree of noise suppression;
- Figure 5
- neurons of the diffusion layer, in which a local coupling between the modes, corresponding to the diffusion, is established;
- Figure 6
- a neuron of the shown embodiment of the integration layer;
- Figure 7
- an example of the filter properties of the invention at different settings of the control parameter K.
- 10
- sampling unit, which samples the speech signal x(t), digitizes it, splits it into frames and determines the spectrum A(f,T) by Fourier transformation
- 11
- filter unit, which calculates a filter function F(f,T) from the spectrum A(f,T) and uses it to generate the noise-free spectrum B(f,T)
- 12
- synthesis unit, which generates the noise-free speech signal y(t) from the filtered spectrum B(f,T)
- A(f,T)
- signal spectrum, i.e. the amplitude of the mode of frequency f at time T
- B(f,T)
- spectral amplitude of the mode of frequency f at time T after filtering
- D
- diffusion constant, which determines the strength of the smoothing in the diffusion layer
- F(f,T)
- filter function that generates B(f,T) from A(f,T): B(f,T) = F(f,T)·A(f,T) for all f at time T
- f
- frequency by which the modes of a spectrum differ
- K
- parameter for setting the strength of the noise suppression
- l
- number of frames from which M(f,T) is obtained as the minimum of the averaged A(f,T)
- m
- number of frames over which averaging is performed when determining M(f,T)
- n
- number of samples per frame
- M(f,T)
- minimum of the amplitude A(f,T), averaged over m frames, within l frames
- R(f,T)
- relative spectrum generated by the reaction layer
- r[S(T)]
- reaction function of the neurons in the reaction layer
- r1, r2
- bounds of the value range of the reaction function, r1 < r(S(T)) < r2
- S(T)
- integral signal corresponding to the integral of F(f,T) over f at time T
- t
- time in which the speech signal is sampled
- T
- time in which the time signal is processed into frames and these into spectra
- x(t)
- samples of the noisy speech signal
- y(t)
- samples of the noise-free speech signal
Claims (20)
- Method for noise suppression in speech transmission by means of a multi-layered, self-organizing, feedback neural network, characterized in that a minima detection layer, a reaction layer, a diffusion layer and an integration layer determine a filter function F(f,T) for noise filtering.
- Method according to claim 1, characterized in that the spectrum B(f,T), freed from interference noise by means of the filter function F(f,T), is converted by an inverse Fourier transformation into a noise-free speech signal y(t).
- Method according to claims 1 and 2, characterized in that the signal delay during processing of the signal is so short that the filter remains operational in real time for telecommunications, all neurons being fed an externally adjustable parameter K whose magnitude determines the degree of noise suppression of the entire filter.
- Method according to claims 1 to 3, characterized in that the neuron of the integration layer integrates the filter function F(f,T) over the frequencies f at fixed time T, and the integral signal S(T) thus obtained is fed back into the reaction layer.
- Method according to claims 1 to 4, characterized in that the spectrum A(f,T) generated by a sampling unit (10) is fed to the minima detection layer.
- Method according to claims 1 to 5, characterized in that in a filter unit (11) a filter function F(f,T) is generated from the spectrum A(f,T) and multiplied by the spectrum.
- Method according to claims 1 to 6, characterized by a frame from which a Fourier transformation determines the spectrum A(f,T) at time T, which is fed to a filter unit (11) that calculates, with the help of a neural network, a filter function F(f,T) by which the spectrum A(f,T) of the signal is multiplied in order to generate a noise-free spectrum B(f,T).
- Method according to claims 1 to 7, characterized in that a filtered signal is passed to a synthesis unit (12), which synthesizes a noise-free speech signal y(t) from the filtered spectrum B(f,T) by means of an inverse Fourier transformation.
- Method according to claims 1 to 8, characterized in that a single neuron of a layer processes, independently of the other neurons of the minima detection layer, a single mode characterized by the frequency f.
- Method according to claims 1 to 9, characterized in that the spectral properties of the speech signal are evaluated in the diffusion layer, whose neurons perform a local mode coupling in the manner of a diffusion in frequency space.
- Method according to claims 1 to 10, characterized in that all modes of the filter function F(f,T) at time T are multiplied by the corresponding amplitudes A(f,T).
- Method according to claims 1 to 11, characterized in that integration is performed over the modes of the filter function F(f,T) in the integration layer, so that the integral signal S(T) results.
- Method according to claims 1 to 12, characterized in that speech signals with a low noise level pass the filter practically unaffected, while with speech signals having a high noise level a strong filter effect takes hold.
- Device for noise suppression in speech transmission, in particular for a method according to claims 1 to 13, characterized in that a neural network containing a minima detection layer, a reaction layer, a diffusion layer and an integration layer is provided.
- Device according to claim 13, characterized in that the modes of the spectrum, which differ by the frequency f, correspond to a single neuron per layer of the network, with the exception of the integration layer.
- Device according to claims 13 to 15, characterized in that a neuron of the minima detection layer determines the function M(f,T), M(f,T) being, in the mode with frequency f, the minimum of the amplitude A(f,T) averaged over m frames within a time interval corresponding to the length of l frames.
- Device according to claims 13 to 16, characterized by a neuron of the reaction layer which determines the relative spectrum R(f,T) from A(f,T) and M(f,T) with the help of a reaction function r[S(T-1)] of the integral signal S(T-1) and a freely selectable parameter K, which determines the degree of noise suppression, the relative spectrum R(f,T) having a value range between zero and one.
- Device according to claims 13 to 17, characterized in that an integral signal S(T-1) from the previous frame (time T-1), calculated in the integration layer, is available to the neurons.
- Device according to claims 13 to 18, characterized in that the value range of the reaction function is restricted to an interval [r1, r2], the value range of the resulting relative spectrum R(f,T) being limited to the interval [0, 1].
- Device according to claims 13 to 19, characterized in that for modulation frequencies between 0.6 Hz and 6 Hz the attenuation is less than 3 dB for all shown values of the control parameter K.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE19948308 | 1999-10-06 | ||
DE19948308A DE19948308C2 (de) | 1999-10-06 | 1999-10-06 | Verfahren und Vorrichtung zur Geräuschunterdrückung bei der Sprachübertragung |
Publications (3)
Publication Number | Publication Date |
---|---|
EP1091349A2 true EP1091349A2 (de) | 2001-04-11 |
EP1091349A3 EP1091349A3 (de) | 2002-01-02 |
EP1091349B1 EP1091349B1 (de) | 2005-02-09 |
Family
ID=7924812
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP00250301A Expired - Lifetime EP1091349B1 (de) | 1999-10-06 | 2000-09-08 | Verfahren und Vorrichtung zur Geräuschunterdrückung bei der Sprachübertragung |
Country Status (6)
Country | Link |
---|---|
US (1) | US6820053B1 (de) |
EP (1) | EP1091349B1 (de) |
AT (1) | ATE289110T1 (de) |
CA (1) | CA2319995C (de) |
DE (2) | DE19948308C2 (de) |
TW (1) | TW482993B (de) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1585112A1 (de) * | 2004-03-30 | 2005-10-12 | Dialog Semiconductor GmbH | Geräuschunterdrückung ohne Signalverzögerung |
EP2151822A1 (de) * | 2008-08-05 | 2010-02-10 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und Verfahren zur Verarbeitung eines Audiosignals zur Sprachverstärkung unter Anwendung einer Merkmalsextraktion |
CN104036784A (zh) * | 2014-06-06 | 2014-09-10 | 华为技术有限公司 | 一种回声消除方法及装置 |
CN114944154A (zh) * | 2022-07-26 | 2022-08-26 | 深圳市长丰影像器材有限公司 | 音频调整方法、装置、设备及存储介质 |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8606851B2 (en) | 1995-06-06 | 2013-12-10 | Wayport, Inc. | Method and apparatus for geographic-based communications service |
US5835061A (en) | 1995-06-06 | 1998-11-10 | Wayport, Inc. | Method and apparatus for geographic-based communications service |
DE102004031638A1 (de) * | 2004-06-30 | 2006-01-26 | Abb Patent Gmbh | Verfahren zum Betrieb einer magnetisch induktiven Durchflussmesseinrichtung |
DE102005039621A1 (de) | 2005-08-19 | 2007-03-01 | Micronas Gmbh | Verfahren und Vorrichtung zur adaptiven Reduktion von Rausch- und Hintergrundsignalen in einem sprachverarbeitenden System |
GB0703275D0 (en) * | 2007-02-20 | 2007-03-28 | Skype Ltd | Method of estimating noise levels in a communication system |
DE102007033484A1 (de) | 2007-07-18 | 2009-01-22 | Ruwisch, Dietmar, Dr. | Hörgerät |
US20120245927A1 (en) * | 2011-03-21 | 2012-09-27 | On Semiconductor Trading Ltd. | System and method for monaural audio processing based preserving speech information |
US8239196B1 (en) * | 2011-07-28 | 2012-08-07 | Google Inc. | System and method for multi-channel multi-feature speech/noise classification for noise suppression |
EP2590165B1 (de) | 2011-11-07 | 2015-04-29 | Dietmar Ruwisch | Verfahren und Vorrichtung zur Erzeugung eines rauschreduzierten Audiosignals |
US9258653B2 (en) | 2012-03-21 | 2016-02-09 | Semiconductor Components Industries, Llc | Method and system for parameter based adaptation of clock speeds to listening devices and audio applications |
KR101626438B1 (ko) | 2012-11-20 | 2016-06-01 | 유니파이 게엠베하 운트 코. 카게 | 오디오 데이터 프로세싱을 위한 방법, 디바이스, 및 시스템 |
US9330677B2 (en) | 2013-01-07 | 2016-05-03 | Dietmar Ruwisch | Method and apparatus for generating a noise reduced audio signal using a microphone array |
US10561361B2 (en) * | 2013-10-20 | 2020-02-18 | Massachusetts Institute Of Technology | Using correlation structure of speech dynamics to detect neurological changes |
US20160111107A1 (en) * | 2014-10-21 | 2016-04-21 | Mitsubishi Electric Research Laboratories, Inc. | Method for Enhancing Noisy Speech using Features from an Automatic Speech Recognition System |
EP3301675B1 (de) * | 2016-09-28 | 2019-08-21 | Panasonic Intellectual Property Corporation of America | Parametervorhersagevorrichtung parametervorhersageverfahren zur verarbeitung akustischer signale |
WO2018204917A1 (en) | 2017-05-05 | 2018-11-08 | Ball Aerospace & Technologies Corp. | Spectral sensing and allocation using deep machine learning |
CN109427340A (zh) * | 2017-08-22 | 2019-03-05 | 杭州海康威视数字技术股份有限公司 | 一种语音增强方法、装置及电子设备 |
US10283140B1 (en) * | 2018-01-12 | 2019-05-07 | Alibaba Group Holding Limited | Enhancing audio signals using sub-band deep neural networks |
US11182672B1 (en) | 2018-10-09 | 2021-11-23 | Ball Aerospace & Technologies Corp. | Optimized focal-plane electronics using vector-enhanced deep learning |
US10879946B1 (en) * | 2018-10-30 | 2020-12-29 | Ball Aerospace & Technologies Corp. | Weak signal processing systems and methods |
US10761182B2 (en) | 2018-12-03 | 2020-09-01 | Ball Aerospace & Technologies Corp. | Star tracker for multiple-mode detection and tracking of dim targets |
US11851217B1 (en) | 2019-01-23 | 2023-12-26 | Ball Aerospace & Technologies Corp. | Star tracker using vector-based deep learning for enhanced performance |
US11412124B1 (en) | 2019-03-01 | 2022-08-09 | Ball Aerospace & Technologies Corp. | Microsequencer for reconfigurable focal plane control |
EP3726529A1 (de) * | 2019-04-16 | 2020-10-21 | Fraunhofer Gesellschaft zur Förderung der Angewand | Verfahren und vorrichtung zur bestimmung eines tiefenfilters |
US11488024B1 (en) | 2019-05-29 | 2022-11-01 | Ball Aerospace & Technologies Corp. | Methods and systems for implementing deep reinforcement module networks for autonomous systems control |
US11303348B1 (en) | 2019-05-29 | 2022-04-12 | Ball Aerospace & Technologies Corp. | Systems and methods for enhancing communication network performance using vector based deep learning |
EP3764359B1 (de) | 2019-07-10 | 2024-08-28 | Analog Devices International Unlimited Company | Signalverarbeitungsverfahren und systeme für mehrfokusstrahlformung |
EP3764360B1 (de) | 2019-07-10 | 2024-05-01 | Analog Devices International Unlimited Company | Signalverarbeitungsverfahren und -systeme zur strahlformung mit verbessertem signal/rauschen-verhältnis |
EP3764660B1 (de) | 2019-07-10 | 2023-08-30 | Analog Devices International Unlimited Company | Signalverarbeitungsverfahren und systeme für adaptive strahlenformung |
EP3764664A1 (de) | 2019-07-10 | 2021-01-13 | Analog Devices International Unlimited Company | Signalverarbeitungsverfahren und systeme zur strahlformung mit mikrofontoleranzkompensation |
EP3764358B1 (de) | 2019-07-10 | 2024-05-22 | Analog Devices International Unlimited Company | Signalverarbeitungsverfahren und -systeme zur strahlformung mit windblasschutz |
US11828598B1 (en) | 2019-08-28 | 2023-11-28 | Ball Aerospace & Technologies Corp. | Systems and methods for the efficient detection and tracking of objects from a moving platform |
IT201900024454A1 (it) * | 2019-12-18 | 2021-06-18 | Storti Gianampellio | Apparecchio audio con basso consumo per ambienti rumorosi |
US20240112690A1 (en) * | 2022-09-26 | 2024-04-04 | Cerence Operating Company | Switchable Noise Reduction Profiles |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5878389A (en) * | 1995-06-28 | 1999-03-02 | Oregon Graduate Institute Of Science & Technology | Method and system for generating an estimated clean speech signal from a noisy speech signal |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3610831A (en) * | 1969-05-26 | 1971-10-05 | Listening Inc | Speech recognition apparatus |
US5822742A (en) * | 1989-05-17 | 1998-10-13 | The United States Of America As Represented By The Secretary Of Health & Human Services | Dynamically stable associative learning neural network system |
US5581662A (en) * | 1989-12-29 | 1996-12-03 | Ricoh Company, Ltd. | Signal processing apparatus including plural aggregates |
JPH0566795A (ja) * | 1991-09-06 | 1993-03-19 | Gijutsu Kenkyu Kumiai Iryo Fukushi Kiki Kenkyusho | 雑音抑圧装置とその調整装置 |
US5377302A (en) * | 1992-09-01 | 1994-12-27 | Monowave Corporation L.P. | System for recognizing speech |
DE4309985A1 (de) * | 1993-03-29 | 1994-10-06 | Sel Alcatel Ag | Geräuschreduktion zur Spracherkennung |
IT1270919B (it) * | 1993-05-05 | 1997-05-16 | Cselt Centro Studi Lab Telecom | Sistema per il riconoscimento di parole isolate indipendente dal parlatore mediante reti neurali |
US5649065A (en) * | 1993-05-28 | 1997-07-15 | Maryland Technology Corporation | Optimal filtering by neural networks with range extenders and/or reducers |
DE69428119T2 (de) * | 1993-07-07 | 2002-03-21 | Picturetel Corp., Peabody | Verringerung des hintergrundrauschens zur sprachverbesserung |
US5960391A (en) * | 1995-12-13 | 1999-09-28 | Denso Corporation | Signal extraction system, system and method for speech restoration, learning method for neural network model, constructing method of neural network model, and signal processing system |
US5717833A (en) * | 1996-07-05 | 1998-02-10 | National Semiconductor Corporation | System and method for designing fixed weight analog neural networks |
1999
- 1999-10-06 DE DE19948308A patent/DE19948308C2/de not_active Expired - Fee Related
2000
- 2000-09-08 AT AT00250301T patent/ATE289110T1/de not_active IP Right Cessation
- 2000-09-08 EP EP00250301A patent/EP1091349B1/de not_active Expired - Lifetime
- 2000-09-08 DE DE50009461T patent/DE50009461D1/de not_active Expired - Lifetime
- 2000-09-20 CA CA002319995A patent/CA2319995C/en not_active Expired - Fee Related
- 2000-10-05 TW TW089120732A patent/TW482993B/zh not_active IP Right Cessation
- 2000-10-06 US US09/680,981 patent/US6820053B1/en not_active Expired - Lifetime
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5878389A (en) * | 1995-06-28 | 1999-03-02 | Oregon Graduate Institute Of Science & Technology | Method and system for generating an estimated clean speech signal from a noisy speech signal |
Non-Patent Citations (2)
Title |
---|
ENMAJI A ET AL: "CONCEPTION OF SPEECH FILTERS BASED ON A NEURAL NETWORK" PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING (ICSLP), BANFF, OCT. 12-16, 1992, EDMONTON, UNIVERSITY OF ALBERTA, CA, Vol. 2, 12 October 1992 (1992-10-12), pages 1387-1390, XP000871657 * |
KNECHT W G ET AL: "NEURAL NETWORK FILTERS FOR SPEECH ENHANCEMENT" IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, IEEE INC. NEW YORK, US, Vol. 3, No. 6, 1 November 1995 (1995-11-01), pages 433-438, XP000730628 ISSN: 1063-6676 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1585112A1 (de) * | 2004-03-30 | 2005-10-12 | Dialog Semiconductor GmbH | Noise suppression without signal delay |
EP2151822A1 (de) * | 2008-08-05 | 2010-02-10 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for processing an audio signal for speech enhancement using a feature extraction |
WO2010015371A1 (en) * | 2008-08-05 | 2010-02-11 | Fraunhofer-Gesellschaft Zur Förderung Der Angewandten Forschung E . V . | Apparatus and method for processing an audio signal for speech enhancement using a feature extraction |
CN102124518A (zh) * | 2008-08-05 | 2011-07-13 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method and apparatus for processing an audio signal for speech enhancement using feature extraction |
AU2009278263B2 (en) * | 2008-08-05 | 2012-09-27 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E . V . | Apparatus and method for processing an audio signal for speech enhancement using a feature extraction |
CN102124518B (zh) * | 2008-08-05 | 2013-11-06 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method and apparatus for processing an audio signal for speech enhancement using feature extraction |
US9064498B2 (en) | 2008-08-05 | 2015-06-23 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for processing an audio signal for speech enhancement using a feature extraction |
CN104036784A (zh) * | 2014-06-06 | 2014-09-10 | Huawei Technologies Co., Ltd. | Echo cancellation method and device |
CN114944154A (zh) * | 2022-07-26 | 2022-08-26 | Shenzhen Changfeng Imaging Equipment Co., Ltd. | Audio adjustment method, apparatus, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
EP1091349A3 (de) | 2002-01-02 |
ATE289110T1 (de) | 2005-02-15 |
TW482993B (en) | 2002-04-11 |
EP1091349B1 (de) | 2005-02-09 |
DE19948308C2 (de) | 2002-05-08 |
DE19948308A1 (de) | 2001-04-19 |
CA2319995C (en) | 2005-04-26 |
CA2319995A1 (en) | 2001-04-06 |
DE50009461D1 (de) | 2005-03-17 |
US6820053B1 (en) | 2004-11-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE19948308C2 (de) | Method and device for noise suppression in speech transmission | |
DE602004004242T2 (de) | System and method for enhancing an audio signal | |
DE112009000805B4 (de) | Noise reduction | |
DE60009206T2 (de) | Noise suppression by means of spectral subtraction | |
DE69124005T2 (de) | Speech signal processing device | |
DE69131739T2 (de) | Speech signal processing device for determining a speech signal in a noisy speech signal | |
DE60027438T2 (de) | Enhancement of a noisy acoustic signal | |
DE69428119T2 (de) | Reduction of background noise for speech enhancement | |
DE60310725T2 (de) | Method and device for processing subband signals using adaptive filters | |
DE60316704T2 (de) | Multi-channel speech recognition in adverse environments | |
DE60108401T2 (de) | System for improving speech quality | |
DE69509555T2 (de) | Method for modifying a speech signal by means of fundamental frequency manipulation | |
DE602005001048T2 (de) | Bandwidth extension of a narrowband speech signal | |
DE4126902A1 (de) | Speech interval detection unit | |
EP0668007B1 (de) | Mobile radio device with hands-free facility | |
DE69634841T2 (de) | Method and device for echo compensation | |
DE69616724T2 (de) | Method and system for speech recognition | |
EP0747880B1 (de) | Speech recognition system | |
EP0642290A2 (de) | Mobile radio device with a speech processing facility | |
EP1143416A2 (de) | Noise suppression in the time domain | |
DE112017007005B4 (de) | Acoustic signal processing device, acoustic signal processing method and hands-free communication device | |
DE69130687T2 (de) | Speech signal processing device for extracting a speech signal from a noisy speech signal | |
DE69525396T2 (de) | Method for blind equalization and its application to speech recognition | |
EP3065417B1 (de) | Method for suppressing an interfering noise in an acoustic system | |
DE3733983A1 (de) | Method for attenuating interference noise in sound signals transmitted by hearing aids | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
PUAL | Search report despatched |
Free format text: ORIGINAL CODE: 0009013 |
|
AK | Designated contracting states |
Kind code of ref document: A3 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
17P | Request for examination filed |
Effective date: 20011206 |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: RUWISCH, DIETMAR, DR. |
|
AKX | Designation fees paid | ||
REG | Reference to a national code |
Ref country code: DE Ref legal event code: 8566 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: RUWISCH, DIETMAR, DR. |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: RUWISCH, DIETMAR, DR. |
|
17Q | First examination report despatched |
Effective date: 20040622 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED. Effective date: 20050209 |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20050209 |
Ref country code: IE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20050209 |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20050209 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: GERMAN |
|
REF | Corresponds to: |
Ref document number: 50009461 Country of ref document: DE Date of ref document: 20050317 Kind code of ref document: P |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: NV Representative=s name: KELLER & PARTNER PATENTANWAELTE AG |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20050509 |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20050509 |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20050509 |
|
GBT | Gb: translation of ep patent filed (gb section 77(6)(a)/1977) |
Effective date: 20050425 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20050520 |
|
NLV1 | Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act | ||
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20050908 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FD4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20050930 |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20050930 |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20050930 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20051110 |
|
ET | Fr: translation filed | ||
BERE | Be: lapsed |
Owner name: RUWISCH, DIETMAR, DR. Effective date: 20050930 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20050709 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: CH Payment date: 20090923 Year of fee payment: 10 |
Ref country code: AT Payment date: 20090928 Year of fee payment: 10 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20100930 |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20100930 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20100908 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20110927 Year of fee payment: 12 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: ST Effective date: 20130531 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20121001 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20190925 Year of fee payment: 20 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R082 Ref document number: 50009461 Country of ref document: DE Representative's name: BETTEN & RESCH PATENT- UND RECHTSANWAELTE PART, DE |
Ref country code: DE Ref legal event code: R081 Ref document number: 50009461 Country of ref document: DE Owner name: RUWISCH PATENT GMBH, DE Free format text: FORMER OWNER: RUWISCH, DIETMAR, DR., 12557 BERLIN, DE |
Ref country code: DE Ref legal event code: R081 Ref document number: 50009461 Country of ref document: DE Owner name: ANALOG DEVICES INTERNATIONAL UNLIMITED COMPANY, IE Free format text: FORMER OWNER: RUWISCH, DIETMAR, DR., 12557 BERLIN, DE |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 732E Free format text: REGISTERED BETWEEN 20200213 AND 20200219 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20191030 Year of fee payment: 20 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R071 Ref document number: 50009461 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: PE20 Expiry date: 20200907 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION Effective date: 20200907 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R081 Ref document number: 50009461 Country of ref document: DE Owner name: ANALOG DEVICES INTERNATIONAL UNLIMITED COMPANY, IE Free format text: FORMER OWNER: RUWISCH PATENT GMBH, 12459 BERLIN, DE |
Ref country code: DE Ref legal event code: R082 Ref document number: 50009461 Country of ref document: DE Representative's name: BETTEN & RESCH PATENT- UND RECHTSANWAELTE PART, DE |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 732E Free format text: REGISTERED BETWEEN 20201210 AND 20201216 |