EP2362681B1 - Method and device for phase-dependent processing of sound signals - Google Patents


Info

Publication number
EP2362681B1
Authority
EP
European Patent Office
Prior art keywords
calibration
phase difference
signals
sound
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP11152903.8A
Other languages
German (de)
French (fr)
Other versions
EP2362681A1 (en)
Inventor
Dietmar Ruwisch
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of EP2362681A1
Application granted
Publication of EP2362681B1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 Circuits for transducers, loudspeakers or microphones
    • H04R 3/005 Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 Noise filtering
    • G10L 21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L 2021/02161 Number of inputs available containing the signal or the noise to be suppressed
    • G10L 2021/02166 Microphone arrays; Beamforming
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R 2499/10 General applications
    • H04R 2499/11 Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R 2499/10 General applications
    • H04R 2499/13 Acoustic transducers and sound field adaptation in vehicles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/15 Aspects of sound capture and related signal processing for recording or reproduction

Definitions

  • the present invention relates to a method and an apparatus for processing sound signals of at least one sound source.
  • the invention is in the field of digital processing of sound signals recorded with a microphone array.
  • the invention relates to a method and an apparatus for the phase-dependent or phase-sensitive processing of sound signals recorded with a microphone array.
  • a microphone array is used when two or more spaced microphones are used to record sound signals (multi-microphone technique). This makes it possible to achieve directional sensitivity in digital signal processing.
  • first of all, the classic "shift and add" and "filter and add" methods should be mentioned, in which one microphone signal is shifted in time or filtered relative to the second before the signals thus manipulated are added together. In this way it is possible to achieve sound cancellation ("destructive interference") for signals arriving from a particular direction. Since the underlying wave geometry is formally identical to the generation of directivity in radio applications using multiple antennas, this is also referred to as "beam forming", the "beam" of the radio waves being replaced by the direction of attenuation in the multi-microphone technique.
  • the term "beam forming" has established itself as a generic name for microphone array applications, although a "beam" in the literal sense is not actually involved. Misleadingly, the term is used not only for the classical two- or multi-microphone technique just described, but also for more advanced, non-linear array techniques for which the analogy with antenna technology no longer applies.
  • in many applications, the classical method misses the actually desired goal. It often helps little to attenuate sound signals that arrive from a certain direction. Rather, it is desirable to forward and further process, as far as possible, only those signals originating from one (or more) particular signal source(s), such as those from a desired speaker.
  • the angle and the width of the "directional cone" for the desired signals can be controlled by means of parameters.
  • the described method calculates a signal-dependent filter function, wherein the spectral filter coefficients are calculated using a predetermined filter function whose argument is the angle of incidence of a spectral signal component.
  • the angle of incidence is determined from the phase angle existing between the two microphone signal components by means of trigonometric functions or their inverse functions; this calculation is also spectrally resolved, i.e. carried out separately for each representable frequency.
  • the angle and width of the directional cone as well as the maximum attenuation are parameters of the filter function.
  • the disclosed method suffers from several disadvantages.
  • the results achievable with the method meet the desired goal of separating the sound signals of a particular sound source only in the free field and in the near field.
  • very low tolerances of the components used, and in particular of the microphones, are required because disturbances in the phases of the microphone signals have a negative effect on the effectiveness of the method.
  • the required tight component tolerances can be realized at least partially by means of suitable production technologies. However, this often involves higher production costs. Near field / free field restrictions are more difficult to circumvent.
  • in confined spaces, however, the phase effects are significant, because reflections of the sound waves at smooth surfaces in particular, such as front or side windows, cause the sound waves to propagate along different sound paths 12 and, in the vicinity of the microphones, disturb the phase relationship between the signals of the two microphones so strongly that the result of the signal processing according to the above-mentioned method is unsatisfactory.
  • FIG. 2 compares the directions of incidence in the free field ( Fig. 2a ) and with reflections ( Fig. 2b ). In the free field, all spectral components of the sound signal 15 f1 , 15 f2 , ..., 15 fn come from the direction of the sound source (not shown in FIG. 2 ). According to Fig. 2b , the spectral components of the sound signal 16 f1 , 16 f2 , ..., 16 fn reach the microphones 10, 11 with quite different apparent angles of incidence ϑ f1 , ϑ f2 , ..., ϑ fn owing to the frequency-dependent reflections, although the sound signal was generated by a single sound source 13.
  • a further disadvantage of the known method is that the angle of incidence, as a spatial angle, must first be calculated for each frequency f from the phase angle present between the two microphone signal components using trigonometric functions or their inverse functions. This calculation is computationally expensive, and the arc cosine (arccos) function required for it, among others, is defined only in the range [-1, 1], so that a corresponding correction function may additionally be necessary.
  • the method according to the invention for the phase-dependent processing of sound signals of at least one sound source basically comprises the steps of arranging at least two microphones 10, 11 at a respectively predetermined distance d from each other, detecting sound signals with both microphones and generating associated microphone signals, and processing the microphone signals.
  • in a calibration mode, the following steps are carried out: determining at least one calibration position of a sound source, separately acquiring the sound signals for the calibration position with both microphones and generating calibration microphone signals assigned to the respective microphone for the calibration position, determining the frequency spectra of the assigned calibration microphone signals, and calculating the phase differences ϕ0(f) of the assigned calibration microphone signals.
  • since a separate phase difference value is determined for each frequency f, ϕ0(f) is also referred to below as the phase difference vector or frequency-dependent phase difference vector.
  • during an operating mode, the following steps are then performed: acquiring the current sound signals with both microphones and generating associated current microphone signals, determining the current frequency spectra of the associated current microphone signals, calculating a current phase difference vector ϕ(f) between the assigned current microphone signals from their frequency spectra, selecting at least one of the determined calibration positions, calculating a spectral filter function F as a function of the current phase difference vector ϕ(f) and the respective calibration-position-specific phase difference vector ϕ0(f) of the selected calibration position, generating a respective signal spectrum S of a signal to be output by multiplicatively combining at least one of the two frequency spectra of the current microphone signals with the spectral filter function F of the respective selected calibration position, the filter function being chosen such that spectral components of sound signals are attenuated the less, the smaller the difference between the current and the calibration-position-specific phase difference is for the corresponding frequency, and obtaining the signal to be output for the respective selected calibration position by inversely transforming the generated signal spectrum.
  • the method according to the invention and the device according to the invention thus provide a calibration procedure according to which, for at least one position of the expected useful signal source as a so-called calibration position, sound signals generated during the calibration mode, for example by playing a test signal, are recorded by the microphones together with their phase effects and disturbances. From the recorded microphone signals, the phase difference vector ϕ0(f) between these microphone signals is then calculated from their frequency spectra for the calibration position. In the subsequent signal processing in the operating mode, this phase difference vector ϕ0(f) is then used to calibrate the filter function for generating the signal spectrum of the signal to be output, whereby phase disturbances and effects in the sound signals can be compensated.
  • in this way, a signal spectrum of the output signal is generated which contains essentially only signals from the selected calibration position.
  • the filter function is chosen such that spectral components of sound signals whose phase difference corresponds to that of the calibration microphone signals, and thus to the presumed useful signals, are not attenuated or are attenuated less than spectral components of sound signals whose phase difference differs from the calibration-position-specific phase difference. Furthermore, the filter function is chosen such that spectral components of sound signals are attenuated the more, the larger the difference between the current and the calibration-position-specific phase difference is for the corresponding frequency.
  • if the calibration procedure is applied not only model-specifically but, according to one embodiment, is carried out for each individual device, such as for each microphone array device in its operating environment, then not only the model-typical or environment-related phase effects and disturbances, but also those caused by component tolerances and by the operating environment of the specific device, can be compensated in operation.
  • this embodiment is therefore suitable for compensating component tolerances of the microphones, such as their phase response and sensitivity, in a simple and reliable way.
  • effects that are not caused by changing the spatial position of the useful signal source itself, but by changes in the environment of the useful signal source, for example by opening a side window of a vehicle, can be taken into account.
  • the calibration position is defined as a position in a state space which includes, for example, the state of the room as an additional dimension.
  • the method according to the invention is then configured as an adaptive method in which the calibration-position-specific phase difference vector ϕ0(f) is calculated or updated not only from microphone signals detected once during the calibration mode, but from the microphone signals of the actual useful signals during operation.
  • the method or the device initially operates in the operating mode.
  • at a later time, the method or the device switches to the calibration mode and calculates the calibration-position-specific phase difference vector ϕ0(f); for example, a user speaks test signals, which are detected by the microphones in order to generate associated calibration microphone signals from them. The calibration-position-specific phase difference vector ϕ0(f) is then calculated from the assigned calibration microphone signals. Subsequently, the device switches back to the operating mode, in which the spectral filter functions F are calculated for each current phase difference vector as a function of the previously determined respective calibration-position-specific phase difference vector.
  • the invention thus allows, in particular, a phase-dependent and at the same time frequency-dependent processing of sound signals without it being necessary to determine the angle of incidence of the sound signals, in that at least one spectral component of the current sound signal is attenuated as a function of the difference between its phase difference and a calibration-position-specific phase difference for the corresponding frequency.
  • a basic idea of the invention is to determine, in a calibration procedure for desired sound signals, phase-dependent calibration data which take the application-related phase effects into account, and then to use these calibration data in the signal processing to compensate for phase disturbances and effects.
  • for this purpose, the method provides an arrangement of at least two microphones 10, 11 at a predetermined distance d from each other.
  • in order to avoid ambiguity of phase differences, this distance is to be chosen smaller than half the wavelength of the highest occurring frequency, i.e. smaller than the quotient of the speed of sound and the sampling rate of the microphone signals.
  • a value for the microphone distance d which is well suited in practice for speech processing is, for example, 1 cm.
  • the calibration data are generated by the sequence of steps shown in the flowchart of FIG. 3.
  • in step 310, a test signal, such as white noise, is played back from the calibration position, i.e. the position of the expected useful signal source, and the corresponding calibration microphone signals are recorded with the microphones 10 and 11 by separately detecting the sound signals with the two microphones and generating the assigned calibration microphone signals for this calibration position.
  • the phase difference vectors ϕ(f, T) determined at successive times T are then averaged over time, resulting in a calibration-position-specific phase difference vector ϕ0(f) which contains the calibration data.
  • here arccos denotes the arc cosine, the inverse function of the cosine.
  • in step 410, the current sound signal is recorded with the two microphones 10 and 11.
  • in step 420, the Fourier transforms M1 (f, T) and M2 (f, T) of the microphone signals 1 and 2 at time T are again calculated, together with their real and imaginary parts Re1, Im1, Re2, Im2.
  • the value n is referred to below as the width parameter, since it defines the adjustable width of the directional cone. It should be noted that the larger the width parameter n is chosen, the smaller the beam width.
  • the above definition of the filter function F (f, T) is to be understood as an example; other mapping functions with similar characteristics fulfil the same purpose.
  • the soft transition chosen here between the extreme values of the filter function (zero and one) has a favorable effect on the quality of the output signal, in particular with regard to unwanted artifacts of the signal processing.
  • the determination of the angle is dispensed with and instead, during the calibration procedure, only the calibration-position-specific phase difference vector ϕ0(f) is determined, which already contains the calibration information.
  • the calculation of the angle vector ϑ0(f) in step 350, and thus the possibly necessary correction of the range of values of the argument for the arccos calculation, are therefore not required when determining the calibration data.
  • for this purpose, the method comprises the steps illustrated in FIG. 5. First, the current sound signal is again detected with the two microphones 10 and 11 in step 510.
  • in the ideal case, i.e. when the phase difference vector currently measured in the operating mode equals the calibration-position-specific phase difference vector, the filter function is equal to one, so that the filter function applied to the signal spectrum S does not attenuate the signal to be output. With increasing deviation of the current phase difference vector from the calibration-position-specific one, the filter function goes to zero, resulting in a corresponding attenuation of the output signal.
  • if phase difference vectors have been determined in the calibration mode for, for example, several different calibration positions, the filter function can be determined for one of these calibration positions and thus for a desired position of the useful signal.
  • the method initially operates in the operating mode, and the calibration-position-specific phase difference vector ϕ0(f) is set to ϕ0(f) = 0 for all frequencies f. This corresponds to a so-called "broadside" geometry without calibration. If the device for processing sound signals is now to be calibrated, the device is switched to the calibration mode. Assuming that a corresponding useful signal is generated, for example by only the desired user speaking, the calibration-position-specific phase difference vector ϕ0(f) is calculated. In this case, the user speaks, for example, predetermined test sentences, which are detected by the microphones and from which associated calibration microphone signals are generated.
  • the system or device enters the calibration mode by an external command, in which it determines ϕ0(f).
  • the user speaks test sounds, eg "sch sch sch", until the system has collected sufficient calibration data, which can optionally be displayed, for example, by an LED.
  • the system then changes to the operating mode in which the calibration data is used.
  • the system then switches to the operating mode and the spectral filter function F is calculated for each current phase difference vector as a function of the previously determined respective calibration position-specific phase difference vector.
  • in this way, the device, such as a mobile phone, can perform the calibration procedure with the voice of the actual user in the user's preferred environment and arrangement, i.e. taking into account how the user holds the mobile phone in relation to his or her mouth, or the like.
  • the width parameter n is chosen smaller in the initially assumed operating mode, i.e. in the uncalibrated operating state in which the device is in a default setting, than in the operating mode with the previously calculated respective calibration-position-specific phase difference vector.
  • an initially smaller width parameter means a wider beam, so that at first sound signals from a larger range of directions tend to be attenuated less. Only once the calibration has taken place is the width parameter chosen larger, because the filter function is then able, taking into account the (phase) disturbances occurring in the near field, to attenuate the sound signals arriving at the microphones according to a narrower directional cone.
  • the beam width defined by the parameter n in the mapping function is, for example, chosen smaller in operation with calibration data than in the uncalibrated case.
  • according to a further embodiment, the calibration position is additionally varied in the calibration mode over a spatial and/or state region in which the user is expected in the operating mode. The calibration-position-specific phase difference vector ϕ0(f) is then calculated for these varied calibration positions.
  • other effects, which are caused for example by an opened side window of a motor vehicle, can then also be taken into account during calibration, since not only the position of the user, for example the seat position of the driver of the motor vehicle, but also the state of the surroundings, i.e. for example whether the side window is open or closed, is taken into account.
  • an adaptive method which evaluates the actual useful signals during operation instead of calibration signals.
  • expediently, an adaptive post-calibration is carried out only in a situation in which, apart from the useful signal, no other interfering signals are picked up by the microphones, which can be recognized, for example, by the relative constancy of the phase difference vectors ϕ(f, T) at successive times T.
  • the method is designed as an adaptive method.
  • the calibration-position-specific phase difference vector ϕ0(f) is initially either set to ϕ0(f) = 0 for all frequencies f or, for example, stored values of the calibration-position-specific phase difference vector ϕ0(f) from previous calibration or operating modes are used.
  • the calibration-position-specific phase difference vector ϕ0(f) is then updated by the adaptive method in that the current sound signals of a sound source in the operating mode are interpreted as sound signals of the selected calibration position and used for the calibration (a schematic sketch of such an update is given after this list).
  • in this way, an update of the calibration data that goes unnoticed by the user is used, the update always taking place when it can be assumed that the current sound signals are noise-affected useful signals in the sense of the respective application or the current configuration of the device, so that the calibration-position-specific phase difference vector ϕ0(f) is then determined from these sound signals.
  • switching between the calibration and operating modes, as might otherwise be prescribed by the device, can thus be omitted. Rather, the calibration takes place "subliminally" during operation whenever the signal quality allows it.
  • a criterion for the signal quality can be, for example, the signal-to-noise ratio of the microphone signals.
  • the effect on the signal to be output of a window lowered during operation can, however, still be compensated only inadequately or not at all in this manner, since the boundary condition of freedom from interfering noise during the detection of the sound signals for determining the calibration data can hardly be fulfilled in this case.
  • for this purpose, a constantly updated, spectrally resolved noise estimate is performed, the estimated noise being subtracted from the microphone spectra before the adaptation process and before the actual compensation of the phase effects is carried out.
  • the method therefore further provides that, in the operating mode, interference signals are first removed from the microphone signals of the current sound signals with the aid of a tracking, phase-sensitive noise model before the calibration-position-specific phase difference vector ϕ0(f) is updated.
  • according to a further embodiment, the step of specifying at least one calibration position further comprises arranging a test signal source in or near the calibration position, emitting a test signal by means of the test signal source, acquiring the test signal with the two microphones, and generating the associated calibration microphone signals solely from the test signal.
  • the phase angle ϕ0 is spectrally resolved, i.e. frequency-dependent, and the corresponding vector ϕ0(f) is determined during the calibration procedure on the basis of the recorded test signals, whereas the width-determining parameter n is scalar, i.e. the same for all frequencies.
  • n = −1 / log2( 1 − ( c·Δϕ1/2(f) / (2π·f·d) )² )
  • Δϕ1/2(f) is a parameter vector which is initially specified for each frequency f.
  • during the calibration, the source of the test signals, for example a so-called artificial mouth, is no longer positioned only at the location of the expected useful signal source, but is varied over a spatial range in which a variation of the position of the useful signal source is to be expected during normal operation.
  • this is intended to cover the fluctuation range which is caused by natural head movements, variable seat adjustments and different body sizes of a driver.
  • from these measurements, the arithmetic mean values μ(f) and the standard deviations σ(f) are calculated for each frequency f over the calculated calibration-position-specific phase difference vectors ϕ0(f).
  • the mean values μ(f) are arithmetic means of previously time-averaged quantities; μ(f) is now used instead of ϕ0(f).
  • the previously scalar parameter n is now also made frequency-dependent and determined by the calibration procedure.
  • n(f) = −1 / log2( 1 − ( c·σ(f) / (2π·f·d) )² )
  • the method according to the invention and the device according to the invention can expediently be implemented by means of, or in the form of, a signal processing system, for example with a digital signal processor (DSP system), or as a software component of a computer program running, for example, on a PC or DSP system or any other hardware platform.
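The items on adaptive post-calibration above describe updating ϕ0(f) from the useful signals themselves whenever the signal quality allows it. The following is a minimal, hedged sketch of such a gated update; the gating thresholds, the smoothing factor ALPHA and all function and variable names are assumptions for illustration and are not taken from the patent.

```python
import numpy as np

ALPHA = 0.05          # assumed smoothing factor for the adaptive update
PHASE_TOL = 0.2       # assumed tolerance (radians) for "relative constancy"
SNR_MIN_DB = 15.0     # assumed minimum SNR for treating a frame as useful signal

def maybe_update_phi0(phi0, phi_current, phi_previous, snr_db):
    """Adaptive calibration: update the calibration-position-specific phase
    difference vector phi0(f) from the current operating-mode phase difference
    vector only if (a) successive phase difference vectors are nearly constant
    and (b) the microphone signals are sufficiently free of interfering noise.
    Otherwise phi0(f) is returned unchanged."""
    wrapped = np.angle(np.exp(1j * (phi_current - phi_previous)))  # wrap to (-pi, pi]
    stable = np.max(np.abs(wrapped)) < PHASE_TOL
    clean = snr_db > SNR_MIN_DB
    if stable and clean:
        # blend on the unit circle to avoid 2*pi wrapping artefacts
        blended = (1 - ALPHA) * np.exp(1j * phi0) + ALPHA * np.exp(1j * phi_current)
        return np.angle(blended)
    return phi0
```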


Description

The present invention relates to a method and an apparatus for processing sound signals of at least one sound source. The invention is in the field of digital processing of sound signals recorded with a microphone array. In particular, the invention relates to a method and an apparatus for the phase-dependent or phase-sensitive processing of sound signals recorded with a microphone array.

A microphone array is spoken of when two or more spaced microphones are used to record sound signals (multi-microphone technique). This makes it possible to achieve directional sensitivity in digital signal processing. First of all, the classic "shift and add" and "filter and add" methods should be mentioned, in which one microphone signal is shifted in time or filtered relative to the second before the signals thus manipulated are added together. In this way it is possible to achieve sound cancellation ("destructive interference") for signals arriving from a particular direction. Since the underlying wave geometry is formally identical to the generation of directivity in radio applications using multiple antennas, this is also referred to as "beam forming", the "beam" of the radio waves being replaced by the direction of attenuation in the multi-microphone technique. The term "beam forming" has established itself as a generic name for microphone array applications, although a "beam" in the literal sense is not actually involved. Misleadingly, the term is used not only for the classical two- or multi-microphone technique just described, but also for more advanced, non-linear array techniques for which the analogy with antenna technology no longer applies.
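To make the "shift and add" idea above concrete, the following is a minimal sketch of a two-microphone delay-and-sum combination; it is an illustration only, and the sampling rate, the integer-sample delay and all variable names are assumptions rather than values from the patent.

```python
import numpy as np

def delay_and_sum(mic1, mic2, delay_samples):
    """Classic 'shift and add': delay one microphone signal by an integer number
    of samples and add it to the other.  Signals arriving with exactly this
    inter-microphone delay add constructively; subtracting instead of adding
    would cancel them (destructive interference)."""
    shifted = np.roll(mic2, delay_samples)   # crude integer-sample time shift
    if delay_samples > 0:
        shifted[:delay_samples] = 0.0        # discard samples wrapped around by roll
    return 0.5 * (mic1 + shifted)

# Illustrative use: steer towards a source whose wavefront reaches microphone 1
# two samples later than microphone 2 (all numbers chosen arbitrarily).
fs = 16000                                   # assumed sampling rate in Hz
t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 440 * t)
mic2_sig = source
mic1_sig = np.roll(source, 2)                # 2-sample propagation delay at mic 1
output = delay_and_sum(mic1_sig, mic2_sig, delay_samples=2)
```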

In many applications, the classical method misses the actually desired goal. It often helps little to attenuate sound signals that arrive from a certain direction. Rather, it is desirable to forward and further process, as far as possible, only those signals originating from one (or more) particular signal source(s), such as those from a desired speaker.

From EP 1595427 B1 a method for separating sound signals is known. According to the method described therein, the angle and the width of the "directional cone" for the desired signals (actually not a cone but a hyperboloid of revolution) as well as the attenuation for unwanted signals outside the directional cone can be controlled by means of parameters. The described method calculates a signal-dependent filter function, the spectral filter coefficients being calculated using a predetermined filter function whose argument is the angle of incidence of a spectral signal component. The angle of incidence is determined from the phase angle existing between the two microphone signal components by means of trigonometric functions or their inverse functions; this calculation is also spectrally resolved, i.e. carried out separately for each representable frequency. The angle and width of the directional cone as well as the maximum attenuation are parameters of the filter function.

The method disclosed in EP 1595427 B1 suffers from several disadvantages. The results achievable with it meet the desired goal of separating the sound signals of a particular sound source only in the free field and in the near field. In addition, very low tolerances of the components used, and in particular of the microphones, are required, because disturbances in the phases of the microphone signals have a negative effect on the effectiveness of the method. The required tight component tolerances can be realized at least partially by means of suitable production technologies; however, this often entails higher production costs. Near-field/free-field restrictions are more difficult to circumvent. One speaks of a free field when the sound wave arrives at the microphones 10, 11 unhindered, i.e. without having been reflected, attenuated or otherwise modified on the signal path 12 from the sound source 13, as shown in Figure 1a. In the near field, in contrast to the far field, where the sound signal arrives as a plane wave, the curvature of the wavefront is still clearly apparent. Although this is actually an undesirable deviation from the plane-wave-based geometrical considerations of the method, there is normally a great similarity to the free field in one essential respect: since the signal or sound source 13 is so close, the phase disturbances caused by reflections and the like are normally rather small compared with the useful signal. Figure 1b shows the use of the microphones 10, 11 and the sound source 13 in a confined space 14, such as a vehicle interior. When used in confined spaces, however, the phase effects are significant, because reflections of the sound waves at smooth surfaces in particular, such as front or side windows, cause the sound waves to propagate along different sound paths 12 and, in the vicinity of the microphones, disturb the phase relationship between the signals of the two microphones so strongly that the result of the signal processing according to the above-mentioned method is unsatisfactory.

The phase disturbances due to the reflections, as shown in Figure 1b, cause the spectral components of the sound signal of a signal source 13 to appear to reach the microphones 10, 11 from different directions. Figure 2 compares the directions of incidence in the free field (Fig. 2a) and with reflections (Fig. 2b). In the free field, all spectral components of the sound signal 15f1, 15f2, ..., 15fn come from the direction of the sound source (not shown in Figure 2). According to Figure 2b, the spectral components of the sound signal 16f1, 16f2, ..., 16fn reach the microphones 10, 11 with quite different apparent angles of incidence ϑf1, ϑf2, ..., ϑfn owing to the frequency-dependent reflections, although the sound signal was generated by the single sound source 13. Processing the sound signals in confined spaces while taking into account only sound signals from a certain angle of incidence therefore leads to unsatisfactory results, since certain spectral components of the sound signal are then not processed at all or only insufficiently, which in particular results in losses of signal quality.

A further disadvantage of the known method is that the angle of incidence, as a spatial angle, must first be calculated for each frequency f from the phase angle present between the two microphone signal components using trigonometric functions or their inverse functions. This calculation is computationally expensive, and the arc cosine (arccos) function required for it, among others, is defined only in the range [-1, 1], so that a corresponding correction function may additionally be necessary.
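For a plane wave in the free field, the phase difference between the two microphone signals at frequency f is ϕ(f) = 2π·f·d·cos(ϑ)/c, so the prior-art method has to invert this relation for every frequency. The sketch below illustrates that angle computation, including the clipping of the arccos argument to [-1, 1] mentioned above; it is an illustration of the described calculation, not code from EP 1595427 B1, and the constants and names are assumptions.

```python
import numpy as np

C_SOUND = 343.0      # approximate speed of sound in m/s
D_MIC = 0.01         # assumed microphone spacing of 1 cm

def incidence_angle(phase_diff, freq, d=D_MIC, c=C_SOUND):
    """Prior-art style computation: recover the apparent angle of incidence of one
    spectral component from its inter-microphone phase difference.  For a plane
    wave, phase_diff = 2*pi*freq*d*cos(theta)/c, hence
    cos(theta) = c*phase_diff / (2*pi*freq*d).  Reflections and component
    tolerances can push this value outside [-1, 1], so it is clipped before
    arccos is applied (the 'correction function' mentioned above)."""
    cos_theta = c * phase_diff / (2.0 * np.pi * freq * d)
    cos_theta = np.clip(cos_theta, -1.0, 1.0)
    return np.arccos(cos_theta)              # apparent angle of incidence in radians
```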

It is therefore an object of the present invention to propose a method and a device for processing sound signals which avoid the disadvantages of the prior art as far as possible and which, in particular, make it possible to compensate for phase disturbances or phase effects with which the signals are afflicted. It is a further object of the invention to propose a method and a device for the phase-dependent processing of sound signals which make it possible to compensate for systematic errors in the microphone signals, for example due to component tolerances, and/or to enable calibration of individual components, such as the microphones, or of the entire device.

According to the invention, a method according to claim 1 and a device according to claim 10 are proposed for this purpose. Furthermore, the invention provides a computer program according to claim 12. Advantageous developments of the invention are defined in the respective subclaims.

The method according to the invention for the phase-dependent processing of sound signals of at least one sound source basically comprises the steps of arranging at least two microphones 10, 11 at a respectively predetermined distance d from each other, detecting sound signals with both microphones and generating associated microphone signals, and processing the microphone signals. In a calibration mode, the following steps are carried out: determining at least one calibration position of a sound source, separately detecting the sound signals for the calibration position with both microphones and generating calibration microphone signals assigned to the respective microphone for the calibration position, determining the frequency spectra of the assigned calibration microphone signals, and calculating the phase differences ϕ0(f) of the assigned calibration microphone signals. Since a separate phase difference value is determined for each frequency f, ϕ0(f) is also referred to below as the phase difference vector or frequency-dependent phase difference vector. During an operating mode, the following steps are then performed: detecting the current sound signals with both microphones and generating associated current microphone signals, determining the current frequency spectra of the associated current microphone signals, calculating a current phase difference vector ϕ(f) between the assigned current microphone signals from their frequency spectra, selecting at least one of the determined calibration positions, calculating a spectral filter function F as a function of the current phase difference vector ϕ(f) and the respective calibration-position-specific phase difference vector ϕ0(f) of the selected calibration position, generating in each case a signal spectrum S of a signal to be output by multiplicatively combining at least one of the two frequency spectra of the current microphone signals with the spectral filter function F of the respective selected calibration position, the filter function being chosen such that spectral components of sound signals are attenuated the less, the smaller the difference between the current and the calibration-position-specific phase difference is for the corresponding frequency, and obtaining the signal to be output for the respective selected calibration position by inversely transforming the generated signal spectrum.
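The following is a minimal, hedged sketch of the two modes described above in a short-time Fourier framework. It is not the patented implementation: the frame length, the window, the raised-cosine shape of the filter function and the value of the width parameter n are assumptions; the text above only requires a filter that attenuates a spectral component the less, the smaller |ϕ(f) − ϕ0(f)| is for that frequency.

```python
import numpy as np

FRAME = 512                      # assumed FFT frame length (samples)

def phase_difference(frame1, frame2):
    """Per-frequency phase difference between two time-aligned microphone frames
    of FRAME samples each, together with their spectra."""
    spec1 = np.fft.rfft(frame1 * np.hanning(FRAME))
    spec2 = np.fft.rfft(frame2 * np.hanning(FRAME))
    return np.angle(spec1 * np.conj(spec2)), spec1, spec2

def calibrate(frames_mic1, frames_mic2):
    """Calibration mode: average the per-frequency phase differences of the
    calibration microphone signals over time to obtain phi0(f)."""
    diffs = [phase_difference(f1, f2)[0] for f1, f2 in zip(frames_mic1, frames_mic2)]
    # averaging complex phasors avoids 2*pi wrapping problems
    return np.angle(np.mean(np.exp(1j * np.array(diffs)), axis=0))

def process_frame(frame1, frame2, phi0, n=4):
    """Operating mode: attenuate spectral components whose current phase
    difference deviates from the calibration-position-specific phi0(f).
    The raised-cosine mapping is one possible choice with the required
    monotone behaviour; n is the width parameter (larger n, narrower cone)."""
    phi, spec1, _ = phase_difference(frame1, frame2)
    F = ((1.0 + np.cos(phi - phi0)) / 2.0) ** n   # 1 at phase equality, towards 0 otherwise
    return np.fft.irfft(spec1 * F, FRAME)         # time-domain signal to be output
```

A complete implementation would additionally use overlap-add synthesis and could apply the same spectral filter to both microphone spectra or to their sum.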

In this way, the method according to the invention and the device according to the invention provide a calibration procedure according to which, for at least one position of the expected useful signal source as a so-called calibration position, sound signals generated during the calibration mode, for example by playing a test signal, are recorded by the microphones together with their phase effects and disturbances. From the recorded microphone signals, the phase difference vector ϕ0(f) between these microphone signals is then calculated from their frequency spectra for the calibration position. In the subsequent signal processing in the operating mode, this phase difference vector ϕ0(f) is then used to calibrate the filter function for generating the signal spectrum of the signal to be output, whereby phase disturbances and effects in the sound signals can be compensated. By subsequently applying the filter function calibrated in this way to at least one of the current microphone signals, by multiplicatively combining the spectrum of the current microphone signal with the filter function, a signal spectrum of the signal to be output is generated which contains essentially only signals from the selected calibration position. The filter function is chosen such that spectral components of sound signals whose phase difference corresponds to that of the calibration microphone signals, and thus to the presumed useful signals, are not attenuated or are attenuated less than spectral components of sound signals whose phase difference differs from the calibration-position-specific phase difference. Furthermore, the filter function is chosen such that spectral components of sound signals are attenuated the more, the larger the difference between the current and the calibration-position-specific phase difference is for the corresponding frequency.

If the calibration procedure is applied not only model-specifically but, according to one embodiment, is carried out for each individual device, such as for each microphone array device in its operating environment, then not only the model-typical or environment-related phase effects and disturbances, but also those caused by component tolerances and by the operating environment of the specific device, can be compensated in operation. This embodiment is therefore suitable for compensating component tolerances of the microphones, such as their phase response and sensitivity, in a simple and reliable way. Effects that are caused not by a change in the spatial position of the useful signal source itself, but by changes in the surroundings of the useful signal source, for example by opening a side window of a vehicle, can also be taken into account. For this purpose the calibration position is defined as a position in a state space which includes, for example, the state of the room as an additional dimension. If such changes or fluctuations of the calibration position occur during operation, they cannot in principle be handled by a one-off calibration. For this purpose, the method according to the invention is then configured as an adaptive method in which the calibration-position-specific phase difference vector ϕ0(f) is calculated or updated not only from microphone signals detected once during the calibration mode, but from the microphone signals of the actual useful signals during operation.

According to a development of the invention, the method or the device initially operates in the operating mode. The calibration-position-specific phase difference vector ϕ0(f) is set to ϕ0(f) = 0 for all frequencies f. Only at a later time does the method or the device switch to the calibration mode and calculate the calibration-position-specific phase difference vector ϕ0(f); for example, a user speaks test signals, which are detected by the microphones in order to generate associated calibration microphone signals from them. The calibration-position-specific phase difference vector ϕ0(f) is then calculated from the assigned calibration microphone signals. Subsequently, the device switches back to the operating mode, in which the spectral filter functions F are calculated for each current phase difference vector as a function of the previously determined respective calibration-position-specific phase difference vector.

In this way, operation without prior calibration is initially possible under default settings. As soon as the calibration mode is activated, a calibration can be achieved not only, for example, with respect to component tolerances, but also with respect to the current operating environment, the specific conditions of use and the user.

In other words, the invention in particular allows a phase-dependent and at the same time frequency-dependent processing of sound signals without it being necessary to determine the angle of incidence of the sound signals, in that at least one spectral component of the current sound signal is attenuated as a function of the difference between its phase difference and a calibration-position-specific phase difference for the corresponding frequency.

Brief description of the figures:

  • Figure 1 schematically shows the propagation of sound signals of a sound source in the free field (a) and with reflections in the near field (b).
  • Figure 2 schematically shows the apparent directions of incidence of sound signals of a sound source in the free field (a) and with reflections in the near field (b).
  • Figure 3 shows a flowchart for determining the calibration data in calibration mode according to an embodiment of the invention.
  • Figure 4 shows a flowchart for the angle-dependent determination of the filter function according to an embodiment of the invention.
  • Figure 5 shows a flowchart for the phase-angle-dependent determination of the filter function according to an embodiment of the invention.

A basic idea of the invention is to determine, in a calibration procedure for desired sound signals, phase-dependent calibration data which take the application-related phase effects into account, and then to use these calibration data during signal processing to compensate for phase disturbances and phase effects.

For this purpose the method provides an arrangement of at least two microphones 10, 11 at a predetermined distance d from each other. To avoid ambiguity of the phase differences, this distance is to be chosen smaller than half the wavelength of the highest occurring frequency, i.e. smaller than the quotient of the speed of sound and the sampling rate of the microphone signals. A value for the microphone distance d that works well in practice for speech processing is, for example, 1 cm. With each microphone, the sound signals generated by a sound source arranged in a calibration position are then captured separately. Each microphone generates, from the sound signals captured with it, the calibration microphone signals associated with this microphone. A calibration-position-specific phase difference vector ϕ0(f) is then calculated from the determined frequency spectra of the associated calibration microphone signals. The phase differences determined in this way between the associated calibration microphone signals from their frequency spectra then serve in operating mode as calibration data for compensating the corresponding phase disturbances and effects.
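As a quick plausibility check of this spacing rule, the following sketch evaluates the bound d < c / (sampling rate); the speed of sound of 343 m/s and the sampling rate of 16 kHz are assumed example values, not figures taken from the description.

    # Maximum microphone spacing that keeps the phase differences unambiguous:
    # d must stay below c / fs, i.e. half the wavelength of the highest frequency fs/2.
    c = 343.0       # assumed speed of sound in m/s
    fs = 16000.0    # assumed sampling rate in Hz
    d_max = c / fs  # upper bound on the microphone distance in metres
    print(f"d must be smaller than {d_max * 100:.1f} cm")  # about 2.1 cm, so d = 1 cm is safe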

According to one embodiment, the calibration data are generated by the sequence of steps listed in the flowchart shown in Figure 3. First, in step 310, a test signal, such as white noise, is played back from the calibration position as the position of the expected useful signal source, and the corresponding calibration microphone signals are recorded with the microphones 10 and 11 by capturing the sound signals separately with the two microphones and generating the associated calibration microphone signals for this calibration position. Subsequently, the Fourier transforms M1(f,T) and M2(f,T) of the calibration microphone signals at time T and the real and imaginary parts Re1, Im1, Re2, Im2 of the Fourier transforms M1(f,T) and M2(f,T) are calculated in step 320, in order to then calculate from them, in step 330, the frequency-dependent phases ϕ(f,T) at time T between the calibration microphone signals according to the formula: ϕ(f,T) = arctan((Re1*Im2 - Im1*Re2) / (Re1*Re2 + Im1*Im2)).

In a next step 340, the phase vectors ϕ(f,T) determined at successive times T are then averaged over time T, which yields a calibration-position-specific phase difference vector ϕ0(f) that contains the calibration data.
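A minimal numpy sketch of this calibration sequence (steps 310 to 340) could look as follows; the frame length, the Hann window, the use of a real FFT and the function name are implementation assumptions that the description does not prescribe.

    import numpy as np

    def calibration_phase_vector(x1, x2, frame_len=512):
        """Estimate the calibration-position-specific phase difference vector
        phi0(f) from two calibration microphone signals x1 and x2 (numpy arrays)."""
        win = np.hanning(frame_len)
        num_frames = len(x1) // frame_len
        phi_sum = np.zeros(frame_len // 2 + 1)
        for t in range(num_frames):
            seg = slice(t * frame_len, (t + 1) * frame_len)
            M1 = np.fft.rfft(win * x1[seg])   # spectrum of microphone 1 at time T
            M2 = np.fft.rfft(win * x2[seg])   # spectrum of microphone 2 at time T
            re1, im1 = M1.real, M1.imag
            re2, im2 = M2.real, M2.imag
            # phi(f,T) = arctan((Re1*Im2 - Im1*Re2) / (Re1*Re2 + Im1*Im2));
            # arctan2 is used here for numerical robustness
            phi_sum += np.arctan2(re1 * im2 - im1 * re2, re1 * re2 + im1 * im2)
        return phi_sum / max(num_frames, 1)   # temporal average over T gives phi0(f)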

For an angle-dependent filter determination, as described below with reference to Figure 4, a calibration angle vector ϑ0(f) = arccos(ϕ0(f)c / (2πfd)) is optionally calculated in step 350, after correcting the argument to the permitted value range [-1...1]. In an angle-dependent filter determination according to Figure 4, in contrast to the phase-angle-dependent filter determination, the inverse function of the cosine (arccos) is required in order to determine a geometric or spatial angle from the phase angle (which is why it is sometimes also called a spatial-angle-dependent filter determination).
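A small helper for this optional step 350 might look as follows; the microphone distance of 1 cm and the speed of sound of 343 m/s are assumed values, and the guard against division by zero at f = 0 is an added implementation detail.

    import numpy as np

    def phase_to_angle(phi0, freqs, d=0.01, c=343.0):
        """Convert phi0(f) into the calibration angle vector
        theta0(f) = arccos(phi0(f)*c / (2*pi*f*d)), with the argument
        corrected to the permitted range [-1, 1]."""
        arg = phi0 * c / (2.0 * np.pi * np.maximum(freqs, 1e-9) * d)
        return np.arccos(np.clip(arg, -1.0, 1.0))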

In an angle-dependent filter determination for generating an output signal s(t) in operating mode according to Figure 4, the current sound signal is first recorded with the two microphones 10 and 11 in step 410. In step 420, the Fourier transforms M1(f,T) and M2(f,T) of the microphone signals 1 and 2 at time T as well as their real and imaginary parts Re1, Im1, Re2, Im2 are again calculated. Subsequently, in step 430, the frequency-dependent phases at time T, ϕ(f,T) = arctan((Re1*Im2 - Im1*Re2) / (Re1*Re2 + Im1*Im2)), and from them, in step 440, an angle vector ϑ(f) = arccos(ϕ(f)c / (2πfd)) are calculated for all frequencies f, including the corresponding correction of the argument to the permitted value range [-1...1]. In step 450, the spectral filter function, which contains the attenuation values for each frequency f at time T and is defined as F(f,T) = Z(ϑ(f,T) - ϑ0(f)), is then calculated as a function of the calibration angle vector ϑ0(f), with a unimodal mapping function such as Z(ϑ) = ((1 + cosϑ)/2)^n with n > 0, the angle ϑ being defined such that -π ≤ ϑ ≤ π. The value n is referred to below as the width parameter, since it determines the adjustable width of the directional cone. Note that the larger the width parameter n is chosen, the smaller the width of the directional cone.
The filter function F(f,T) determined in this way, with a value range 0 ≤ F(f,T) ≤ 1, is then applied in step 460 to a spectrum of the microphone signals 1 or 2 in the form of a multiplication: S(f,T) = M1(f,T)*F(f,T). From the spectrum S(f,T) filtered in this way, the output signal s(t) is then generated in step 470 by inverse Fourier transformation. The above definition of the filter function F(f,T) is to be understood as an example; other mapping functions with similar characteristics serve the same purpose. The soft transition chosen here between the extreme values of the filter function (zero and one) has a favourable effect on the quality of the output signal, in particular with regard to unwanted signal-processing artefacts.
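The following sketch condenses steps 410 to 470 into a single-frame routine; windowing, overlap-add and the concrete values of d, c and the width parameter n are assumptions made for illustration only.

    import numpy as np

    def angle_filter_frame(x1_frame, x2_frame, theta0, freqs, d=0.01, c=343.0, n=4.0):
        """One operating-mode frame of the angle-dependent filtering (Figure 4)."""
        M1 = np.fft.rfft(x1_frame)
        M2 = np.fft.rfft(x2_frame)
        # current phase differences phi(f,T)
        phi = np.arctan2(M1.real * M2.imag - M1.imag * M2.real,
                         M1.real * M2.real + M1.imag * M2.imag)
        # current angles theta(f,T), argument corrected to [-1, 1]
        arg = np.clip(phi * c / (2.0 * np.pi * np.maximum(freqs, 1e-9) * d), -1.0, 1.0)
        theta = np.arccos(arg)
        # F(f,T) = Z(theta - theta0) with Z(theta) = ((1 + cos(theta))/2)**n
        F = ((1.0 + np.cos(theta - theta0)) / 2.0) ** n
        S = M1 * F                              # S(f,T) = M1(f,T) * F(f,T)
        return np.fft.irfft(S, len(x1_frame))   # output frame of s(t)

In a real-time system this routine would be embedded in an overlap-add framework; that plumbing is omitted here for brevity.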

According to a further development of the invention, the determination of the angle is dispensed with; instead, only the calibration-position-specific phase difference vector ϕ0(f), which already contains the calibration information, is determined during the calibration procedure. In this embodiment, the calculation of the angle vector ϑ0(f) in step 350, and thus the correction of the value range of the argument that may be necessary for the arccos calculation, are therefore omitted when determining the calibration data. During operating mode, the method then comprises the steps shown in Figure 5. First, the current sound signal is again captured with the two microphones 10 and 11 in step 510. From the microphone signals 1 and 2 generated from it, the current frequency spectra are determined in step 520 by calculating the Fourier transforms M1(f,T) and M2(f,T) at time T as well as their real and imaginary parts Re1, Im1, Re2, Im2. Subsequently, in step 530, the current phase difference vector is calculated from these frequency spectra according to ϕ(f,T) = arctan((Re1*Im2 - Im1*Re2) / (Re1*Re2 + Im1*Im2)). The spectral filter function is then calculated in step 540 with respect to the calibration-position-specific phase difference vector ϕ0(f) according to the formula F(ϕ(f,T)) = (1 - ((ϕ(f,T) - ϕ0(f))c / (2πfd))^2)^n with n > 0, where c is the speed of sound, f the frequency of the sound signal components, T the time base of the spectrum generation, d the predetermined distance between the two microphones, and n the width parameter of the directional cone. Looking at the formula, which as before is to be understood as an example, it becomes clear that in the ideal case, i.e. when the phase difference vector currently measured in operating mode equals the calibration-position-specific phase difference vector, the filter function equals one, so that the filter function applied to the signal spectrum S does not attenuate the signal to be output. With increasing deviation of the current phase difference vector from the calibration-position-specific one, the filter function tends towards zero, which leads to a corresponding attenuation of the signal to be output.

If several phase difference vectors were determined in calibration mode, for example for different calibration positions, it is possible to determine the filter function for one of these calibration positions and thus for a desired position of the useful signal.

In step 550, the signal spectrum S of the calibrated signal is then generated by applying the filter function F(f,T) to one of the microphone spectra M1 or M2 in the form of a multiplication according to the formula (here for microphone spectrum M1): S(f,T) = M1(f,T)*F(f,T). From this, in step 560, the signal s(t) to be output is determined by inverse Fourier transformation of S(f,T).
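Analogously to the angle-based variant, steps 510 to 560 can be sketched as a single-frame routine; the clipping of the filter to the range [0, 1] is an added assumption, since the formula itself can become negative for large deviations, and d, c and n are again illustrative values.

    import numpy as np

    def phase_filter_frame(x1_frame, x2_frame, phi0, freqs, d=0.01, c=343.0, n=4.0):
        """One operating-mode frame of the phase-difference-based filtering (Figure 5)."""
        M1 = np.fft.rfft(x1_frame)
        M2 = np.fft.rfft(x2_frame)
        phi = np.arctan2(M1.real * M2.imag - M1.imag * M2.real,
                         M1.real * M2.real + M1.imag * M2.imag)       # phi(f,T)
        # F(phi(f,T)) = (1 - ((phi - phi0)*c / (2*pi*f*d))^2)^n with n > 0
        dev = (phi - phi0) * c / (2.0 * np.pi * np.maximum(freqs, 1e-9) * d)
        F = np.clip(1.0 - dev ** 2, 0.0, 1.0) ** n   # clipping to [0, 1] is an assumption
        S = M1 * F                                   # S(f,T) = M1(f,T) * F(f,T)
        return np.fft.irfft(S, len(x1_frame))        # s(t) by inverse Fourier transform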

According to a further development of the invention, the method initially operates in operating mode and the calibration-position-specific phase difference vector ϕ0(f) is set to zero for all frequencies f. This corresponds to a so-called "broadside" geometry without calibration. If the device for processing sound signals is now to be calibrated, the device is switched to calibration mode. Assuming that a corresponding useful signal is then generated, for example in that only the desired user speaks, the calibration-position-specific phase difference vector ϕ0(f) is calculated. For this, the user speaks, for example, predetermined test sentences, which are captured by the microphones and from which the associated calibration microphone signals are generated. For instance, the system or the device enters calibration mode upon an external command, in which it determines ϕ0(f). To this end, the user speaks test sounds, e.g. "sch sch sch", until the system has collected sufficient calibration data, which can optionally be indicated, for example, by an LED. The system then changes to operating mode, in which the calibration data are used.

The system then switches to operating mode and the spectral filter function F is calculated for each current phase difference vector as a function of the previously determined calibration-position-specific phase difference vector. It is thus possible, for example, to first deliver the device, such as a mobile phone, in a basic setting and then to carry out the calibration procedure with the voice of the actual user in the user's preferred environment and arrangement of use, i.e. taking into account how the user holds the mobile phone relative to his or her mouth, and so on.

According to a further development of the invention, in the operating mode that uses the previously calculated calibration-position-specific phase difference vector, the width parameter n is chosen larger than in the uncalibrated operating state, in which the device is in a basic setting. An initially smaller width parameter means a wider directional cone, so that at first sound signals from a larger directional cone tend to be attenuated less strongly. Only once the calibration has been performed is the width parameter chosen larger, because the filter function is then able to attenuate the sound signals arriving at the microphones correctly according to a smaller directional cone, also taking into account the (phase) disturbances occurring in the near field. The width of the directional cone, which is determined by the parameter n in the mapping function, is therefore chosen smaller in operation with calibration data than in the uncalibrated case. Through the calibration, the method knows the position of the signal source very precisely, so that a "sharper" beam forming, and hence a narrower directional cone, can be used than in the uncalibrated case, where the position of the source is at best approximately known.
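The effect of n on the cone width can be made concrete by computing the angle at which the mapping Z(ϑ) = ((1 + cosϑ)/2)^n has dropped to 1/2; the chosen values of n are purely illustrative.

    import numpy as np

    # Z(theta) = ((1 + cos(theta))/2)**n equals 1/2 at theta_half = arccos(2*2**(-1/n) - 1)
    for n in (1, 2, 4, 8, 16):
        theta_half = np.degrees(np.arccos(2.0 * 2.0 ** (-1.0 / n) - 1.0))
        print(f"n = {n:2d}: half-value cone angle about {theta_half:5.1f} degrees")

The printed angles shrink from about 90 degrees for n = 1 to below 25 degrees for n = 16, which illustrates why a larger n is only appropriate once the calibration has pinned down the source position.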

According to a further development of the invention, in calibration mode the calibration position is additionally varied over a spatial and/or state range in which the user is expected in operating mode. The calibration-position-specific phase difference vector ϕ0(f) is then calculated for these varied calibration positions. In this way, in addition to different spatial positions, other effects, for example those caused by an open side window of a vehicle, can also be taken into account during calibration, since not only the position of the user, for instance the seat position of the driver of the vehicle, but also the state of the surroundings, i.e. whether, for example, the side window is open or closed, is considered.

Fluctuations occurring during operation cannot, in principle, be handled by a single calibration. For this purpose, according to a further development of the invention, an adaptive method is used which evaluates the actual useful signals during operation instead of calibration signals. According to such an embodiment, an adaptive re-calibration is only carried out in situations in which no interfering noise signals other than the useful signal are picked up by the microphones, which can be recognised, for example, by the relative constancy of the phase difference vectors ϕ(f,T) at successive times T.

According to one embodiment, the method is designed as an adaptive method. The calibration-position-specific phase difference vector ϕ0(f) is initially either set to zero for all frequencies f, or stored values of ϕ0(f) for all frequencies from earlier calibration or operating modes are used. Alternatively, after an initial pass through calibration mode to calculate the current calibration-position-specific phase difference vector ϕ0(f), the device switches to operating mode. In further operation, the calibration-position-specific phase difference vector ϕ0(f) is then updated by the adaptive method by interpreting the current sound signals of a sound source in operating mode as sound signals of the selected calibration position and using them for the calibration. The calibration data are thus updated without the user noticing, the update always taking place whenever it can be assumed that the current sound signals are noise-free useful signals in the sense of the respective application or the current configuration of the device, so that the calibration-position-specific phase difference vector ϕ0(f) is then determined from these sound signals. Switching between calibration and operating mode, which might otherwise be predetermined by the device, can thus be omitted. Instead, the calibration takes place "in the background" during operation whenever the signal quality permits. A criterion for the signal quality can be, for example, the signal-to-noise ratio of the microphone signals.
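One conceivable form of such a background update is an exponentially smoothed blend of the stored vector with the currently observed phase differences, gated by a signal-quality criterion; the smoothing constant, the SNR threshold and the function name are assumptions, as the description only requires that the update happens when the current signal can be regarded as an undisturbed useful signal.

    import numpy as np

    def adapt_phi0(phi0, phi_current, snr_db, snr_threshold_db=15.0, alpha=0.98):
        """Update phi0(f) from the current phase differences phi_current(f,T),
        but only when the frame looks like a clean useful signal."""
        if snr_db < snr_threshold_db:    # gate: adapt only at sufficient signal quality
            return phi0
        # exponential smoothing towards the currently observed phase differences
        return alpha * phi0 + (1.0 - alpha) * phi_current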

However, the effect on the output signal of a window lowered during operation can still be compensated only insufficiently or not at all in this way, because the boundary condition of freedom from interfering noise during the capture of the sound signals used to determine the calibration data can hardly be met in this case. In order to make the adaptation robust against interfering noise, according to a further development of the invention a continuously updated, spectrally resolved noise estimate is therefore performed, the estimated interference signals being subtracted from the microphone spectra before the adaptation process, i.e. before the actual compensation of the phase effects is carried out. According to one embodiment, the method therefore further comprises first removing interference signals from the microphone signals of the current sound signals in operating mode with the aid of a concurrent, phase-sensitive noise model, before the calibration-position-specific phase difference vector ϕ0(f) is updated.
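The description leaves the concurrent, phase-sensitive noise model unspecified; the sketch below therefore uses a deliberately crude magnitude-domain stand-in (a recursive noise floor followed by spectral subtraction) solely to show where such a cleaning step would sit before the adaptation, not how the patent's noise model works.

    import numpy as np

    def denoise_for_adaptation(M, noise_psd, beta=0.95):
        """Subtract a crudely tracked noise floor from the magnitude of the
        spectrum M while keeping its phase, before phi0(f) is adapted."""
        power = np.abs(M) ** 2
        # fast-down, slow-up tracking of the noise floor (illustrative only)
        noise_psd = beta * np.minimum(noise_psd, power) + (1.0 - beta) * power
        clean_mag = np.sqrt(np.maximum(power - noise_psd, 0.0))
        return clean_mag * np.exp(1j * np.angle(M)), noise_psd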

According to a further development of the invention, the step of defining at least one calibration position further comprises arranging a test signal source at or near the calibration position, emitting a calibrated test signal from the sound signal source, capturing the test signal with the two microphones, and generating the associated calibration microphone signals from the test signal alone. So far it has been assumed that the phase angle ϕ0 is spectrally resolved, i.e. frequency-dependent, and that the corresponding vector ϕ0(f) is determined during the calibration procedure from the recorded test signals, whereas the width-determining parameter n is a scalar, i.e. the same for all frequencies. If one defines a half-value phase difference ϕ½(f) at which the filter function F(ϕ(f,T)) has dropped to the value 1/2, the width parameter n is related to ϕ½(f), for the above definition of the filter function F(ϕ(f,T)), as follows: n = -1 / log2(1 - (ϕ½(f)c / (2πfd))^2). Here, ϕ½(f) is a parameter vector that is initially specified for each frequency f.
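Written out as code, the relation between the half-value phase difference and the width parameter looks as follows; the factor c/(2πfd) is carried over from the filter formula above, and the clipping that keeps the log2 argument inside (0, 1) is an added safeguard.

    import numpy as np

    def width_parameter(phi_half, freqs, d=0.01, c=343.0):
        """n = -1 / log2(1 - (phi_half(f)*c / (2*pi*f*d))**2), per frequency bin."""
        x = phi_half * c / (2.0 * np.pi * np.maximum(freqs, 1e-9) * d)
        x = np.clip(np.abs(x), 1e-6, 1.0 - 1e-6)   # keep the log2 argument in (0, 1)
        return -1.0 / np.log2(1.0 - x ** 2)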

For an extended calibration procedure, the source of the test signals, for example a so-called artificial mouth, is no longer positioned only at the location of the expected useful signal source but is varied over a spatial range in which a variation of the position of the useful signal source is also to be expected during normal operation. In a vehicle application, for example, this is intended to cover the range of variation caused by natural head movements, variable seat adjustments and different body heights of a driver. For each measurement with different locations of the test signal source, a vector ϕ0(f) is then determined as described above. Subsequently, the arithmetic mean values µ(f) and the standard deviations σ(f) are calculated for each frequency f over the calibration-position-specific phase difference vectors ϕ0(f) obtained from these measurements. Note that the mean values µ(f) are arithmetic means of variables that have already been averaged over time; µ(f) is now used instead of ϕ0(f). The previously scalar parameter n is now also made frequency-dependent and is determined by the calibration procedure. For this purpose, the half-value phase difference ϕ½(f) is linked to the standard deviation via a constant k: ϕ½(f) = kσ(f). If a normal distribution is assumed for the measured values ϕ0(f), which is not necessarily the case but is nevertheless assumed by the method for lack of better knowledge, 95% of all measurement results would lie within the range ±ϕ½(f) if one chooses k = 2. For the width-determining parameter n(f) the following then holds: n(f) = -1 / log2(1 - (ϕ½(f)c / (2πfd))^2).

This extension of the calibration procedure takes into account the fact that not only the angles of incidence or phase angles are changed by reflections in a frequency-dependent manner, but that the magnitude of this change can also be frequency-dependent, which can be compensated by a spectrally resolved "beam width" according to the method.
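A sketch of this extended calibration, reducing several ϕ0(f) vectors measured at varied positions to µ(f), σ(f) and a frequency-dependent width parameter n(f) with ϕ½(f) = kσ(f), might look as follows; k = 2 follows the 95% argument above, while d and c are again assumed example values.

    import numpy as np

    def extended_calibration(phi0_list, freqs, d=0.01, c=343.0, k=2.0):
        """Combine several calibration vectors phi0(f), one per calibration
        position, into mu(f), sigma(f) and the width parameter n(f)."""
        phi0 = np.asarray(phi0_list)          # shape: (num_positions, num_bins)
        mu = phi0.mean(axis=0)                # mu(f), used instead of phi0(f)
        sigma = phi0.std(axis=0)              # sigma(f)
        phi_half = k * sigma                  # half-value phase difference phi_1/2(f)
        x = phi_half * c / (2.0 * np.pi * np.maximum(freqs, 1e-9) * d)
        x = np.clip(x, 1e-6, 1.0 - 1e-6)      # keep the log2 argument in (0, 1)
        n_f = -1.0 / np.log2(1.0 - x ** 2)    # frequency-dependent width parameter n(f)
        return mu, sigma, n_f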

It should also be mentioned that all the devices, methods and method components described are of course not limited to use in a motor vehicle, for example. In the same way, a mobile phone or any other (speech) signal processing device that uses microphone-array technology can be calibrated.

The method according to the invention and the device according to the invention can expediently be implemented by means of, or in the form of, a signal processing system, e.g. with a digital signal processor (DSP system), or as a software component of a computer program that runs, for example, on a PC, a DSP system or any other hardware platform.

List of reference numbers:

10, 11: spaced microphones;
M1(f,T), M2(f,T): Fourier transforms of the microphone signals (spectral amplitude at frequency f at time T);
d: distance between microphones MIK1 and MIK2;
f: frequency;
T: time at which a spectrum or an output signal is determined;
ϕ0(f): time-averaged phase difference vector in calibration mode;
ϕ(f,T): phase difference vector of the microphone signals during operation;
Re1(f), Im1(f): real and imaginary parts of the spectral components of the first hands-free microphone signal (microphone 1);
Re2(f), Im2(f): real and imaginary parts of the spectral components of the second hands-free microphone signal (microphone 2);
ϑ0(f): time-averaged frequency-dependent angle of incidence of the first test audio signal in calibration mode;
ϑ(f,T): frequency-dependent angle of incidence of the microphone signals during operation;
µ(f): arithmetic mean values for each frequency f over the ϕ0(f);
σ(f): standard deviations for each frequency f over the ϕ0(f);
n: width parameter;
n(f): frequency-dependent width parameter, with ϕ½(f) = kσ(f), where ϕ½(f) is the frequency-dependent phase difference at which the filter function F assumes the value 1/2 at frequency f;
F(f,T): filter function;
Z: unimodal mapping function;
S(f,T): signal spectrum of the signal to be output;
s(t): signal to be output.

Claims (12)

  1. A method for phase-sensitive processing of sound signals of at least one sound source, comprising the steps of:
    - arranging two microphones (10, 11) at a distance d from each other;
    - capturing sound signals with both microphones, and generating associated microphone signals; and
    - processing the sound signals of the microphones;
    wherein during a calibration mode, the method comprises the following steps:
    - defining at least one calibration position of the sound source where the sound source is positioned in the calibration mode;
    - capturing separately the sound signals for the at least one predetermined calibration position with both microphones, and generating associated calibration microphone signals for the calibration position;
    - determining the frequency spectra of the associated calibration microphone signals;
    - calculating a calibration-position-specific phase difference vector ϕ0(f) between the associated calibration microphone signals from their frequency spectra for the at least one predetermined calibration position;
    the method further comprising the following steps during an operating mode in which a desired signal source is positioned at or near one of the at least one predetermined positions:
    - capturing the current sound signals with both microphones and generating associated current microphone signals;
    - determining the current frequency spectra of the associated current microphone signals;
    - calculating a current phase difference vector ϕ(f) between the associated current microphone signals from their frequency spectra at the time T;
    - selecting one of the at least one predetermined calibration position as a position of the desired signal source;
    - calculating a spectral filter function F depending on the current phase difference vector and the respective calibration-position-specific phase difference vector of the selected calibration position;
    - generating, respectively, a signal spectrum S of a signal to be output by multiplication of at least one of the two frequency spectra of the current microphone signals with the spectral filter function F of the respective selected calibration position, the filter function being chosen so that the smaller the absolute value of the difference between current and calibration-position-specific phase difference for the corresponding frequency, the smaller the attenuation of spectral components of sound signals; and
    - obtaining the respective signal to be output for the respective selected calibration position by inverse Fourier transformation of the generated signal spectrum.
  2. The method according to claim 1, the method further comprising the following steps during calibration mode:
    - calculating M1(f,T) and M2(f,T), the spectral amplitudes at frequencies f at time T, by Fourier transformation of the calibration microphone signals;
    - calculating the real and imaginary parts Re1, Im1, Re2, Im2 of the Fourier transforms M1(f, T) and M2 (f, T);
    - calculating the phase difference ϕ(f,T) at time T between the calibration microphone signals, according to the formula: ϕ(f,T) = arctan((Re1*Im2 - Im1*Re2) / (Re1*Re2 + Im1*Im2)); and
    - averaging ϕ(f, T) temporally over subsequent times T in order to obtain the calibration-position-specific phase differences ϕ0(f), which contain the result of the calibration process.
  3. The method according to claim 2, the method during operating mode further comprising the following steps:
    - calculating the spectral filter function according to the formula: F(ϕ(f,T)) = (1 - ((ϕ(f,T) - ϕ0(f))c / (2πfd))^2)^n, where n > 0;
    - generating the signal spectrum S by applying the filter function F(f,T) to a microphone spectrum M1 in the form of a multiplication, according to the formula: S(f,T) = M1(f,T)*F(f,T);
    - generating the output signal s(t) by inverse Fourier transformation of S(f,T);
    where:
    c is the speed of sound,
    f is the frequency of the sound signal components,
    T is the time base of the spectrum generation,
    d is the distance between the two microphones, and
    n is a parameter n>0 to be determined, whereby the width of the filter function is determined.
  4. The method according to any one of the preceding claims, the method first working in operating mode, and the calibration-position-specific phase difference vector ϕ0 (f) being set to ϕ0(f) = 0 for all frequencies f, and the method further comprising:
    - switching into calibration mode and calculating the calibration-position-specific phase difference vector ϕ0(f), a user speaking test signals, which are captured by the microphones, and associated calibration microphone signals are generated from them;
    - switching into operating mode, and calculating the spectral filter function F for each current frequency-dependent phase difference vector depending on the respective previously determined calibration-position-specific phase difference vector.
  5. The method according to claim 4, related to claim 3, wherein in operating mode with the previously calculated calibration-position-specific phase difference vector, the width parameter n is chosen to be greater than in the initially taken uncalibrated operating mode.
  6. The method according to any one of the preceding claims, the method in calibration mode further comprising the following steps:
    - varying the calibration position in a spatial and/or state range in which the user is expected in operating mode;
    - calculating calibration-position-specific phase difference vectors ϕ0(f) for varied calibration positions;
    - calculating the arithmetic means µ(f) and standard deviations σ(f) for each frequency f of the calculated calibration-position-specific phase difference vectors ϕ0(f); and
    the method during operating mode further comprising the following steps:
    - calculating the spectral filter function according to the formula: F(ϕ(f,T)) = (1 - ((ϕ(f,T) - ϕ0(f))c / (2πfd))^2)^n(f)
    with a frequency-dependent width parameter n(f), according to the formula: n(f) = -1 / log2(1 - (ϕ½(f)c / (2πfd))^2);
    - generating the signal spectrum S by applying the filter function F(f,T) to a microphone spectrum M1 in the form of a multiplication according to the formula: S(f,T) = M1(f,T)*F(f,T);
    where:
    c is the speed of sound,
    f is the frequency of the sound signal components,
    T is the time base of the spectrum generation,
    d is the distance between the two microphones,
    n(f) is the frequency-dependent width parameter which is defined by ϕ1/2(f) = kσ(f), and
    ϕ1/2(f) is the frequency-dependent phase difference at which the filter function F at frequency f takes the value F(f) = 1/2.
  7. The method according to any one of the preceding claims, wherein the step of defining at least one calibration position further includes:
    - arranging a test signal source near the specified calibration position;
    - the sound signal source sending a calibrated test signal;
    - both microphones capturing the test signal, and the associated calibration microphone signals being generated from the test signal only.
  8. The method according to one of claims 1 to 6, the method being in the form of an adaptive method, and wherein after passing through calibration mode initially, a switch into operating mode takes place to calculate the current calibration-position-specific phase difference vector ϕ0(f), and in further operation, the calibration-position-specific, frequency-dependent phase difference vector ϕ0(f) is updated, the current sound signals of a sound source being interpreted in operating mode as sound signals of the selected calibration position.
  9. The method according to claim 8, wherein interference signals are first calculated out of the microphone signals of the current sound signals in operating mode using a concurrent, phase-sensitive noise model, before the calibration-position-specific phase difference vector ϕ0(f) is updated.
  10. A device for phase-sensitive processing of sound signals of at least one sound source, comprising:
    - two microphones (10, 11), which are arranged at a predetermined distance (d) from each other, to capture sound signals and generate microphone signals;
    - a processing unit which is connected to the microphone unit, to process the microphone signals;
    wherein during a calibration mode, the processing unit with the microphones is set up and at least one calibration position of the sound source is determined, wherein the processing unit further is adapted for:
    - capturing separately the sound signals for the calibration position with both microphones, and generating associated calibration microphone signals for the at least one determined calibration position;
    - determining the frequency spectra of the associated calibration microphone signals;
    - calculating a calibration-position-specific phase difference vector ϕ0(f) between the associated calibration microphone signals from their frequency spectra for the at least one determined calibration position; and
    during an operating mode, the processing unit with the microphones is adapted and one of the at least one determined calibration positions is selected as a position of a desired signal source, and the processing unit further is adapted for:
    - capturing the current sound signals with both microphones and generating associated current microphone signals;
    - determining the current frequency spectra of the associated current microphone signals;
    - calculating a current phase difference vector ϕ(f) between the associated current microphone signals from their frequency spectra at the time T;
    - calculating a spectral filter function (F) depending on the current phase difference vector and the respective calibration-position-specific, frequency-dependent phase difference vector of the selected calibration position;
    - generating, respectively, a signal spectrum S of an output signal by multiplication of at least one of the two frequency spectra of the current microphone signals with the spectral filter function F of the respective selected calibration position, the filter function being chosen so that the smaller the absolute value of the difference between current and calibration-position-specific phase difference for the corresponding frequency, the smaller the attenuation of spectral components of sound signals; and
    the device further includes an output unit to output the signal to be output for the relevant selected calibration position, with means for inverse Fourier transformation of the respective generated signal spectrum.
  11. The device according to claim 10, further set up to carry out the method according to one of claims 2 to 9.
  12. A computer program containing program code which, when executed on a data processing unit, realizes the method of one of claims 1 to 9.
EP11152903.8A 2010-02-15 2011-02-01 Method and device for phase-dependent processing of sound signals Active EP2362681B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DE102010001935A DE102010001935A1 (en) 2010-02-15 2010-02-15 Method and device for phase-dependent processing of sound signals

Publications (2)

Publication Number Publication Date
EP2362681A1 EP2362681A1 (en) 2011-08-31
EP2362681B1 true EP2362681B1 (en) 2015-04-08

Family

ID=43923655

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11152903.8A Active EP2362681B1 (en) 2010-02-15 2011-02-01 Method and device for phase-dependent processing of sound signals

Country Status (3)

Country Link
US (2) US8340321B2 (en)
EP (1) EP2362681B1 (en)
DE (1) DE102010001935A1 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010001935A1 (en) 2010-02-15 2012-01-26 Dietmar Ruwisch Method and device for phase-dependent processing of sound signals
EP2590165B1 (en) 2011-11-07 2015-04-29 Dietmar Ruwisch Method and apparatus for generating a noise reduced audio signal
KR101361265B1 (en) * 2012-05-08 2014-02-12 (주)카카오 Method of alerting of mobile terminal using a plarality of alert modes and mobile terminal thereof
US9330677B2 (en) 2013-01-07 2016-05-03 Dietmar Ruwisch Method and apparatus for generating a noise reduced audio signal using a microphone array
EP2928211A1 (en) * 2014-04-04 2015-10-07 Oticon A/s Self-calibration of multi-microphone noise reduction system for hearing assistance devices using an auxiliary device
JP2015222847A (en) * 2014-05-22 2015-12-10 富士通株式会社 Voice processing device, voice processing method and voice processing program
US9984068B2 (en) * 2015-09-18 2018-05-29 Mcafee, Llc Systems and methods for multilingual document filtering
CN108269582B (en) * 2018-01-24 2021-06-01 厦门美图之家科技有限公司 Directional pickup method based on double-microphone array and computing equipment
US11902758B2 (en) * 2018-12-21 2024-02-13 Gn Audio A/S Method of compensating a processed audio signal
CN113874922B (en) * 2019-05-29 2023-08-18 亚萨合莱有限公司 Determining a position of a mobile key device based on a phase difference of samples
EP3745155A1 (en) 2019-05-29 2020-12-02 Assa Abloy AB Determining a position of a mobile key device based on phase difference of samples
EP3764660B1 (en) 2019-07-10 2023-08-30 Analog Devices International Unlimited Company Signal processing methods and systems for adaptive beam forming
EP3764359A1 (en) 2019-07-10 2021-01-13 Analog Devices International Unlimited Company Signal processing methods and systems for multi-focus beam-forming
EP3764358B1 (en) 2019-07-10 2024-05-22 Analog Devices International Unlimited Company Signal processing methods and systems for beam forming with wind buffeting protection
EP3764360B1 (en) 2019-07-10 2024-05-01 Analog Devices International Unlimited Company Signal processing methods and systems for beam forming with improved signal to noise ratio
EP3764664A1 (en) 2019-07-10 2021-01-13 Analog Devices International Unlimited Company Signal processing methods and systems for beam forming with microphone tolerance compensation
CN110361696B (en) * 2019-07-16 2023-07-14 西北工业大学 Closed space sound source positioning method based on time reversal technology
CN115776626B (en) * 2023-02-10 2023-05-02 杭州兆华电子股份有限公司 Frequency response calibration method and system for microphone array

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003260926A1 (en) 2002-10-23 2004-05-13 Koninklijke Philips Electronics N.V. Controlling an apparatus based on speech
EP1453348A1 (en) * 2003-02-25 2004-09-01 AKG Acoustics GmbH Self-calibration of microphone arrays
DE102004005998B3 (en) * 2004-02-06 2005-05-25 Ruwisch, Dietmar, Dr. Separating sound signals involves Fourier transformation, inverse transformation using filter function dependent on angle of incidence with maximum at preferred angle and combined with frequency spectrum by multiplication
DE102009029367B4 (en) 2009-09-11 2012-01-12 Dietmar Ruwisch Method and device for analyzing and adjusting the acoustic properties of a hands-free car kit
DE102010001935A1 (en) 2010-02-15 2012-01-26 Dietmar Ruwisch Method and device for phase-dependent processing of sound signals

Also Published As

Publication number Publication date
DE102010001935A1 (en) 2012-01-26
US8477964B2 (en) 2013-07-02
EP2362681A1 (en) 2011-08-31
US8340321B2 (en) 2012-12-25
US20130094664A1 (en) 2013-04-18
US20110200206A1 (en) 2011-08-18

Similar Documents

Publication Publication Date Title
EP2362681B1 (en) Method and device for phase-dependent processing of sound signals
EP2296356B1 (en) Method and device for analysing and calibrating acoustic characteristics of a motor vehicle hands-free kit
DE60125553T2 (en) METHOD OF INTERFERENCE SUPPRESSION
EP1595427B1 (en) Method and device for the separation of sound signals
EP1853089B1 (en) Method for elimination of feedback and for spectral expansion in hearing aids.
DE102010023615B4 (en) Signal processing apparatus and signal processing method
DE10392425B4 (en) Audio feedback processing system
EP1771034A2 (en) Microphone calibration in a RGSC-beamformer
EP1473967B1 (en) Method for suppressing at least one acoustic noise signal and apparatus for carrying out the method
EP2226795B1 (en) Hearing aid and method for reducing noise in a hearing aid
EP3337189A1 (en) Method for determining a position of a signal source
EP3197181B1 (en) Method for reducing latency of a filter bank for filtering an audio signal and method for low latency operation of a hearing system
DE102014017293A1 (en) Method for distortion compensation in the auditory frequency range and method to be used for estimating acoustic channels
EP2981099B1 (en) Method and device for suppressing feedback
DE102008004674A1 (en) Signal recording with variable directional characteristics
DE10313331B4 (en) Method for determining an incident direction of a signal of an acoustic signal source and apparatus for carrying out the method
WO2000068703A2 (en) Method for localising direction and localisation arrangement
EP2200341B1 (en) Method for operating a hearing aid and hearing aid with a source separation device
EP3065417B1 (en) Method for suppressing interference noise in an acoustic system
DE102015221764A1 (en) Method for adjusting microphone sensitivities
WO2001047335A2 (en) Method for the elimination of noise signal components in an input signal for an auditory system, use of said method and a hearing aid
DE2748288C2 (en)
DE102019105458B4 (en) System and method for time delay estimation
DE112019007387T5 (en) Method and system for room calibration in a loudspeaker system
DE10140523B4 (en) Device for feedback canceling the output of microphone signals through loudspeakers

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

17P Request for examination filed

Effective date: 20120229

17Q First examination report despatched

Effective date: 20140509

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20141212

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 502011006489

Country of ref document: DE

Effective date: 20150513

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 721331

Country of ref document: AT

Kind code of ref document: T

Effective date: 20150515

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20150408

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150708

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150810

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150808

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150709

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 502011006489

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: RO

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150408

26N No opposition filed

Effective date: 20160111

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160229

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160201

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160229

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160229

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160201

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 7

REG Reference to a national code

Ref country code: AT

Ref legal event code: MM01

Ref document number: 721331

Country of ref document: AT

Kind code of ref document: T

Effective date: 20160201

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160201

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 8

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20110201

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150408

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 502011006489

Country of ref document: DE

Representative's name: BETTEN & RESCH PATENT- UND RECHTSANWAELTE PART, DE

Ref country code: DE

Ref legal event code: R081

Ref document number: 502011006489

Country of ref document: DE

Owner name: RUWISCH PATENT GMBH, DE

Free format text: FORMER OWNER: RUWISCH, DIETMAR, DR., 12557 BERLIN, DE

Ref country code: DE

Ref legal event code: R081

Ref document number: 502011006489

Country of ref document: DE

Owner name: ANALOG DEVICES INTERNATIONAL UNLIMITED COMPANY, IE

Free format text: FORMER OWNER: RUWISCH, DIETMAR, DR., 12557 BERLIN, DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20190321 AND 20190327

REG Reference to a national code

Ref country code: DE

Ref legal event code: R081

Ref document number: 502011006489

Country of ref document: DE

Owner name: ANALOG DEVICES INTERNATIONAL UNLIMITED COMPANY, IE

Free format text: FORMER OWNER: RUWISCH PATENT GMBH, 12459 BERLIN, DE

Ref country code: DE

Ref legal event code: R082

Ref document number: 502011006489

Country of ref document: DE

Representative's name: BETTEN & RESCH PATENT- UND RECHTSANWAELTE PART, DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20201210 AND 20201216

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230119

Year of fee payment: 13

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240123

Year of fee payment: 14

Ref country code: GB

Payment date: 20240123

Year of fee payment: 14