EP2518724A1 - Microphone/headphone audio headset comprising a means for suppressing noise in a speech signal, in particular for a hands-free telephone system - Google Patents
- Publication number
- EP2518724A1 (application EP12164777A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- signal
- speech
- headset
- physiological sensor
- microphone
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000009467 reduction Effects 0.000 claims description 21
- 238000001228 spectrum Methods 0.000 claims description 16
- 238000001914 filtration Methods 0.000 claims description 15
- 230000001755 vocal effect Effects 0.000 claims description 9
- 230000003595 spectral effect Effects 0.000 claims description 5
- 210000000988 bone and bone Anatomy 0.000 claims description 4
- 230000003111 delayed effect Effects 0.000 claims description 4
- 238000012805 post-processing Methods 0.000 claims description 4
- 230000005236 sound signal Effects 0.000 claims description 2
- 238000011282 treatment Methods 0.000 description 8
- 238000004364 calculation method Methods 0.000 description 7
- 238000012545 processing Methods 0.000 description 7
- 238000004891 communication Methods 0.000 description 6
- 238000000034 method Methods 0.000 description 6
- 238000010586 diagram Methods 0.000 description 5
- 230000003044 adaptive effect Effects 0.000 description 4
- 230000000875 corresponding effect Effects 0.000 description 4
- 230000008878 coupling Effects 0.000 description 4
- 238000010168 coupling process Methods 0.000 description 4
- 238000005859 coupling reaction Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 3
- 238000009499 grossing Methods 0.000 description 3
- 230000003071 parasitic effect Effects 0.000 description 3
- 210000003800 pharynx Anatomy 0.000 description 3
- 210000001260 vocal cord Anatomy 0.000 description 3
- 239000000203 mixture Substances 0.000 description 2
- 238000010606 normalization Methods 0.000 description 2
- 230000010363 phase shift Effects 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 238000003491 array Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 210000000613 ear canal Anatomy 0.000 description 1
- 239000004744 fabric Substances 0.000 description 1
- 210000003128 head Anatomy 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 210000001584 soft palate Anatomy 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 238000011144 upstream manufacturing Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0316—Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
- G10L21/0364—Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
- H04R1/1083—Reduction of ambient noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L2021/02085—Periodic noise
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L2021/02161—Number of inputs available containing the signal or the noise to be suppressed
- G10L2021/02165—Two microphones, one receiving mainly the noise signal and the other one mainly the speech signal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2460/00—Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
- H04R2460/13—Hearing devices using bone conduction transducers
Definitions
- the invention relates to an audio headset of the combined microphone/headphone type.
- such a headset can in particular be used for communication functions such as "hands-free" telephony, in addition to listening to an audio source (music, for example) from a device to which the headset is connected.
- one of the difficulties is to ensure sufficient intelligibility of the signal picked up by the microphone, that is to say the speech signal of the near speaker (the headset wearer).
- the headset can indeed be used in a noisy environment (metro, busy street, train, etc.), so that the microphone will pick up not only the speech of the headset wearer, but also the surrounding noise.
- the wearer can be protected from this noise by the headset, especially if it is a model with closed earpieces isolating the ear from the outside, and even more so if the headset is provided with "active noise control".
- the distant speaker (the one at the other end of the communication channel)
- the noise picked up by the microphone overlaps and interferes with the speech signal of the near speaker (the headset wearer).
- the signal collected by the physiological sensor is usable only at low frequencies.
- the noises generally encountered in a usual environment (street, metro, train, etc.).
- the physiological sensor delivers a signal naturally devoid of parasitic noise components (which is not possible with a conventional microphone).
- EP 0 683 621 A2, for its part, describes more precisely how to integrate the physiological sensor and the external microphone into one and the same ear canal.
- the signal collected by the physiological sensor is not strictly speaking speech, since speech is not formed solely of voiced sounds and contains components that do not originate at the vocal cords: the frequency content of the sound coming from the throat and emitted through the mouth is, for example, much richer.
- internal bone conduction and passage through the skin have the effect of filtering out certain vocal components, so that the signal delivered by the physiological sensor is exploitable only in the lowest part of the spectrum. That is why this signal is supplemented by another signal, delivered by a conventional microphone sensor, with which it is combined.
- the general problem of the invention is, in such a context, to deliver to the distant speaker a voice signal representative of the speech emitted by the near speaker, a signal freed from the parasitic components of external noise present in the environment of the near speaker.
- Another aspect of the invention resides in the ability to efficiently use the signal from the physiological sensor to control various signal processing functions. This signal makes it possible to access new information concerning the content of the speech, which will then be used for the denoising as well as for various auxiliary functions that will be explained below, in particular the calculation of a cutoff frequency of a dynamic filter.
- the microphone/headset combination comprises low-pass filtering means for the first speech signal before combination by the mixing means, and/or high-pass filtering means for the second speech signal before denoising and combination by the mixing means.
- These low-pass and / or high-pass filtering means comprise an adjustable cutoff frequency filter
- the headset comprises means for calculating the cutoff frequency, operating as a function of the signal delivered by the physiological sensor.
- the means for calculating the cutoff frequency may in particular comprise means for analyzing the spectral content of the signal delivered by the physiological sensor, able to determine the cutoff frequency as a function of the relative levels of the signal / noise ratio evaluated in a plurality of distinct frequency bands of the signal delivered by the physiological sensor.
- the denoising means of the second speech signal are non-frequency noise reduction means; in a particular embodiment of the invention, the microphone assembly comprises two microphones, and the non-frequency noise reduction means comprise a combiner able to apply a delay to the signal delivered by one of the microphones and to subtract this delayed signal from the signal delivered by the other microphone.
- the two microphones can be aligned in a linear array along a main direction directed towards the mouth of the headset wearer.
- denoising means for the third speech signal delivered by the mixing means, including frequency noise reduction means.
- means receiving as input the first and the third speech signals, computing an intercorrelation between them, and outputting a speech presence probability signal as a function of the result of the intercorrelation.
- the denoising means of the third speech signal receive as input this speech presence probability signal so as to selectively: i) perform a noise reduction differentiated by frequency band as a function of the value of the speech presence probability signal, and ii) perform maximum noise reduction on all the frequency bands in the absence of speech.
- post-processing means capable of selectively equalizing, band by band, the part of the spectrum corresponding to the signal collected by the physiological sensor. These means determine an equalization gain for each frequency band, this gain being calculated from the respective frequency coefficients of the signals delivered by the microphone(s) and by the physiological sensor, considered in the frequency domain. They also smooth the calculated equalization gain over a plurality of successive signal frames.
- the reference 10 generally designates the headset according to the invention, which comprises two earpieces 12 joined by a headband.
- each earpiece is preferably constituted by a closed shell housing a sound reproduction transducer, applied around the user's ear with the interposition of an insulating pad 16 isolating the ear from the outside.
- this headset is provided with a physiological sensor 18 for collecting the vibrations produced by a voiced signal emitted by the headset wearer, vibrations which can be picked up at the level of the cheek or the temple.
- the sensor 18 is preferably an accelerometer integrated in the pad 16 so as to be applied against the cheek or the temple of the user with the closest possible coupling.
- the physiological sensor may in particular be placed on the inside of the outer skin of the pad so that, once the headset is in place, the sensor is applied against the cheek or the temple of the user under the effect of a slight pressure resulting from the crushing of the pad material, with only the skin of the pad interposed.
- the headset also comprises a microphone array or antenna, for example two omnidirectional microphones 20, 22, placed on the shell of the earpiece 12. These front and rear microphones 20 and 22 are arranged relative to each other so that their alignment direction 24 is approximately directed towards the mouth 26 of the headset wearer.
- Figure 2 is a block diagram showing the different blocks and functions implemented by the method of the invention, as well as their interactions.
- the method of the invention is implemented by software means, which can be broken down and schematized as a number of blocks 30 to 64 illustrated in Figure 2. These processes are implemented in the form of appropriate algorithms executed by a microcontroller or a digital signal processor. Although, for the sake of clarity, these various processing operations are presented in the form of separate blocks, they share common elements and correspond in practice to a plurality of functions globally executed by the same software.
- the reference 28 also designates the sound reproduction transducer placed inside the shell of the earpiece.
- these various elements deliver signals that are processed by the block referenced 30, which can be coupled through an interface 32 to the communication circuits (telephone circuits); it receives at input E the sound intended to be reproduced by the transducer 28 (speech of the distant speaker during a telephone call, or a music source outside periods of telephone communication), and delivers at output S a signal representative of the speech of the near speaker, that is to say the wearer of the headset.
- the signal to be reproduced applied to the input E is a digital signal converted into analog by the converter 34, then amplified by the amplifier 36 for reproduction by the transducer 28.
- the signal collected by the physiological sensor 18 is a signal mainly comprising components in the lower region of the sound spectrum (typically 0-1500 Hz). As explained above, this signal is naturally non-noisy.
- the signals collected by the microphones 20, 22 will be used mainly for the upper spectrum (above 1500 Hz), but these signals are heavily noise-corrupted, and a strong denoising processing will be essential to eliminate the parasitic noise components, whose level may be such, in certain environments, that they completely drown out the speech signal picked up by these microphones 20, 22.
- the first stage of the processing is an echo-cancellation processing, applied to the signals of the physiological sensor and the microphones.
- the sound reproduced by the transducer 28 is captured by the physiological sensor 18 and the microphones 20, 22, generating an echo that disrupts the operation of the system, and must be eliminated upstream.
- this echo-cancellation processing is implemented by the blocks 38, 40 and 42, each of these blocks receiving on a first input the signal delivered by the sensor 18, the microphone 20 or the microphone 22, and on a second input the signal reproduced by the transducer 28 (the echo-generating signal), and outputting, for further processing, a signal from which the echo has been eliminated.
- the echo-cancellation processing is for example carried out by an adaptive algorithm such as that described in FR 2 792 146 A1 (Parrot SA), which can be referred to for more details.
- This is an echo cancellation or AEC technique consisting in dynamically defining a compensation filter modeling the acoustic coupling between the transducer 28 and the physiological sensor 18 (or the microphone 20, or the microphone 22, respectively) by a linear transformation between the signal reproduced by the transducer 28 (that is to say the signal E applied at the input of the blocks 38, 40 or 42) and the echo picked up by the physiological sensor 18 (or the microphone 20 or 22).
- this transformation defines an adaptive filter which is applied to the reproduced incident signal, and the result of this filtering is subtracted from the signal collected by the physiological sensor 18 (or the microphone 20 or 22), which has the effect of cancelling the major part of the acoustic echo.
- This modeling is based on the search for a correlation between the signal reproduced by the transducer 28 and the signal collected by the physiological sensor 18 (or the microphone 20 or 22), that is to say on an estimate of the impulse response.
- the coupling is constituted by the body of the earpiece 12 supporting these various elements.
- the processing is performed in particular by an adaptive APA (Affine Projection Algorithm), which provides fast convergence, well suited to hands-free applications with intermittent speech and levels that can vary quickly.
- the iterative algorithm is executed with a variable step size, as described in FR 2 792 146 A1, cited above.
- the step size μ varies continuously according to the energy levels of the signal picked up by the microphone, before and after filtering. This step size is increased when the energy of the sensed signal is dominated by the energy of the echo and, conversely, reduced when the energy of the picked-up signal is dominated by the energy of the background noise and/or speech.
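The variable-step adaptive cancellation described above can be illustrated with a simplified NLMS echo canceller (the patent uses an APA variant; this is a minimal stand-in, and the step-size heuristic and all parameter values below are illustrative assumptions, not taken from FR 2 792 146 A1):

```python
import numpy as np

def nlms_echo_canceller(x, d, order=8, mu_min=0.05, mu_max=0.5, eps=1e-8):
    """Sketch of an adaptive echo canceller (blocks 38/40/42).
    x: signal sent to the transducer (echo reference),
    d: signal picked up by the sensor/microphone (contains echo).
    Returns e, the signal with the echo largely removed."""
    w = np.zeros(order)            # adaptive model of the acoustic coupling
    e = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        xv = x[n - order + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-order+1]
        y = w @ xv                           # echo estimate
        e[n] = d[n] - y                      # residual = echo-suppressed signal
        # variable step: larger when the echo estimate dominates the residual,
        # smaller when noise or near speech dominates (heuristic assumption)
        ratio = y * y / (y * y + e[n] * e[n] + eps)
        mu = mu_min + (mu_max - mu_min) * ratio
        w += mu * e[n] * xv / (xv @ xv + eps)
    return e
```

With a purely linear echo path and no near-end speech, the residual energy drops by several orders of magnitude once the filter has converged.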
- the signal collected by the physiological sensor 18 after the anti-echo processing by the block 38 will be used as the input signal of a block 44 for calculating a cutoff frequency FC.
- the next step consists in filtering the signals, with a low-pass filter 48 for the signal of the physiological sensor 18 and with high-pass filters 50, 52 for the signals collected by the microphones 20, 22, respectively.
- these filters 48, 50 and 52 are preferably digital filters of the infinite impulse response (IIR, recursive) type, which exhibit a relatively steep transition between the passband and the rejected band.
- these filters are adaptive filters whose cutoff frequency is variable and determined dynamically by the block 44.
- the cut-off frequency FC which is preferably the same for the low-pass filter 48 and the high-pass filters 50 and 52, is determined from the signal of the physiological sensor 18 after the anti-echo treatment 38.
- an algorithm calculates the signal-to-noise ratio for a plurality of frequency bands in a range between, for example, 0 and 2500 Hz (the noise level being given by a calculation of the energy in a higher frequency band, for example between 3000 and 4000 Hz, because it is known that in this zone the signal can only be noise, because of the properties of the component constituting the physiological sensor 18).
- the cutoff frequency chosen will correspond to the maximum frequency for which the signal / noise ratio exceeds a predetermined threshold, for example 10 dB.
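The cutoff-frequency computation of block 44 can be sketched as follows; the 250 Hz band width, frame length and window are illustrative assumptions, while the 0-2500 Hz search range, the 3000-4000 Hz noise-reference band and the 10 dB threshold come from the description above:

```python
import numpy as np

def dynamic_cutoff(frame, fs=8000, band_hz=250.0, snr_thresh_db=10.0):
    """Estimate the cutoff frequency FC from one frame of the
    physiological-sensor signal: per-band SNR against a noise floor
    measured in the 3000-4000 Hz band, threshold at 10 dB."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    noise = spec[(freqs >= 3000) & (freqs <= 4000)].mean()  # noise-only zone
    fc = 0.0
    lo = 0.0
    while lo < 2500:
        band = spec[(freqs >= lo) & (freqs < lo + band_hz)]
        snr_db = 10 * np.log10(band.mean() / (noise + 1e-12) + 1e-12)
        if snr_db > snr_thresh_db:
            fc = lo + band_hz       # highest band clearing the threshold so far
        lo += band_hz
    return fc
```

A strongly voiced low-frequency frame yields a non-zero FC, while a noise-only frame yields FC = 0.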
- the following step consists in operating, by means of block 54, a mix to reconstruct the complete spectrum with, on the one hand, the lower region of the spectrum given by the filtered signal of the physiological sensor 18 and, on the other hand, the top of the spectrum given by the filtered signal of the microphones 20 and 22 after passing through a combiner-phase shifter 56 for operating a denoising in this part of the spectrum.
- This reconstruction is performed by summing the two signals, which are applied synchronously to the mixing block 54 so as to avoid any deformation.
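The spectrum reconstruction of block 54 can be sketched with complementary low-pass and high-pass filters sharing the same cutoff FC. The first-order one-pole filter below is an illustrative simplification (the patent prefers sharper IIR filters), chosen so that the two branches sum exactly back to the input when fed the same signal:

```python
import numpy as np

def one_pole_lp(x, fc, fs):
    """First-order IIR low-pass (stand-in for the steeper filters 48-52)."""
    a = np.exp(-2.0 * np.pi * fc / fs)
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (1 - a) * x[n] + a * (y[n - 1] if n > 0 else 0.0)
    return y

def mix_spectra(physio, mic, fc, fs=8000):
    """Block 54 sketch: low band from the physiological sensor,
    high band from the (denoised) microphone signal, summed synchronously."""
    low = one_pole_lp(physio, fc, fs)
    high = mic - one_pole_lp(mic, fc, fs)   # complementary high-pass
    return low + high
```

Because the high-pass is formed as identity minus the low-pass, the pair is perfectly complementary: with identical inputs the mix reproduces the input without deformation.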
- the signal that we want to denoise (that is, the near speaker's signal located in the upper part of the spectrum, typically the frequency components above 1500 Hz) is derived from the two microphones 20, 22 disposed a few centimeters from each other on the shell 14 of one of the earpieces of the headset. As indicated, these two microphones are arranged relative to each other so that the direction 24 they define is approximately oriented towards the mouth 26 of the headset wearer. As a result, a speech signal emitted from the mouth will reach the front microphone 20 first and then the rear microphone 22 with a delay, and therefore with a substantially constant phase shift, while the ambient noise will be picked up without phase shift by the two (omnidirectional) microphones 20 and 22, given the distance of the parasitic noise sources compared to the two microphones.
- phase-shifter combiner 56, which comprises a phase-shifter 58 applying a delay τ to the signal of the rear microphone 22 and a combiner 60 for subtracting this delayed signal from the signal of the front microphone 20.
- a first-order differential microphone network, equivalent to a single virtual microphone whose directivity can be adjusted as a function of the value of τ, with 0 ≤ τ ≤ τA (τA being the value corresponding to the natural propagation delay between the two microphones 20 and 22, equal to the distance between the two microphones divided by the speed of sound, i.e. a delay of about 30 microseconds for a spacing of 1 cm).
- an appropriate choice of this parameter makes it possible to attenuate surrounding diffuse noise by about 6 dB. For more details on this technique, reference may for example be made to:
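The delay-and-subtract structure of blocks 56-60 can be sketched as follows; applying the delay τ in the frequency domain is an implementation choice of this sketch (it allows sub-sample delays), not something stated in the patent:

```python
import numpy as np

def diff_array(front, rear, tau, fs):
    """First-order differential array: delay the rear-microphone signal
    by tau seconds and subtract it from the front-microphone signal."""
    n = len(rear)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    rear_delayed = np.fft.irfft(
        np.fft.rfft(rear) * np.exp(-2j * np.pi * freqs * tau), n)
    return front - rear_delayed
```

With τ = τA, a plane wave arriving from behind (hitting the rear microphone first) is cancelled, while speech arriving from the mouth side survives as a differentiated signal.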
- This signal is subjected by the block 62 to a frequency noise reduction.
- this frequency noise reduction is operated differently in the presence or absence of speech, by evaluating a probability p of absence of speech from the signal collected by the physiological sensor 18.
- this probability of absence of speech is derived from the information given by the physiological sensor.
- the signal delivered by this sensor has a very good signal / noise ratio up to the cutoff frequency FC determined by the block 44. But beyond this cutoff frequency the signal / noise ratio is still good, and often better than that of the microphones 20 and 22.
- the sensor information is exploited by calculating (block 64) the frequency intercorrelation between the combined signal delivered by the mixing block 54 and the unfiltered signal of the physiological sensor, i.e. before filtering by the low-pass filter 48.
- Smix(f) and Sphysio(f) being the complex frequency-domain vector representations, for the n-th frame, of the combined signal delivered by the mixing block 54 and of the signal of the physiological sensor 18, respectively.
- the algorithm searches for frequencies for which there is only noise (situation of absence of speech): on the spectrogram of the signal delivered by the mixing block 54 certain harmonics are embedded in the noise, while they stand out more on the signal of the physiological sensor.
- the peaks P1, P2, P3, P4, ... of this intercorrelation calculation indicate a strong correlation between the combined signal delivered by the mixing block 54 and the signal of the physiological sensor 18, and the emergence of these correlated frequencies indicates the likely presence of speech at these frequencies.
- the value coefficient_normalization makes it possible to regulate the distribution of the probabilities according to the intercorrelation value, and to obtain values between 0 and 1.
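The per-frequency intercorrelation of block 64 and its mapping to a 0..1 speech-presence probability can be sketched as follows; the exact normalisation used in the patent is not reproduced, so the squashing function and norm_coeff default are illustrative assumptions:

```python
import numpy as np

def speech_presence_prob(mix_frame, physio_frame, norm_coeff=1.0):
    """Per-frequency correlation between the mixed signal (block 54)
    and the raw physiological-sensor signal, mapped into [0, 1]."""
    S = np.fft.rfft(mix_frame)
    P = np.fft.rfft(physio_frame)
    xcorr = np.abs(S * np.conj(P))        # correlated peaks emerge here
    return xcorr / (norm_coeff + xcorr)   # squashed into [0, 1]
```

Frequencies where both signals carry the same harmonic get a probability near 1; frequencies where they are uncorrelated stay near 0, steering the frequency-domain noise reduction of block 62.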
- the system that has just been described makes it possible to obtain excellent overall performance, typically of the order of 30 to 40 dB of noise reduction on the speech signal of the nearby speaker.
- this gives the distant speaker (the one with whom the headset wearer is in communication) the impression that his interlocutor (the headset wearer) is in a quiet room.
- the low frequency content collected at the cheek or temple by the physiological sensor 18 is different from the low frequency content of the sound emitted by the mouth of the user, as it would be captured by a microphone located a few centimeters from the mouth, or even by the ear of an interlocutor.
- the use of the physiological sensor and the filtering described above certainly makes it possible to obtain a very good signal in terms of signal-to-noise ratio, but one which may present, for the interlocutor who hears it, a somewhat muffled and unnatural tone.
- the equalization can be performed automatically, from the signal delivered by the microphones 20, 22, before filtering.
- Figure 4 shows an example, in the frequency domain (i.e. after Fourier transform), of the ACC signal produced by the physiological sensor 18, compared with a MIC microphone signal as would be captured a few centimeters from the mouth.
- differentiated gains G 1 , G 2 , G 3 , G 4 ,... are applied to different frequency bands of the part of the spectrum located in the low frequencies.
- the algorithm calculates the respective Fourier transforms of the two signals, providing series of frequency coefficients (expressed in dB) NormPhysioFreq_dB(i) and NormMicFreq_dB(i), respectively corresponding to the norm of the i-th Fourier coefficient of the physiological-sensor signal and the norm of the i-th Fourier coefficient of the microphone signal.
- DifferenceFreq_dB(i) = NormPhysioFreq_dB(i) − NormMicFreq_dB(i).
- if the difference is positive, the gain that will be applied will be less than unity (negative in dB); conversely, if the difference is negative, the gain to be applied will be greater than unity (positive in dB).
- Gain_dB(i) = −DifferenceFreq_dB(i).
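The per-band equalisation gains can be sketched as follows; the band count, the 1500 Hz upper limit and the smoothing factor alpha are illustrative assumptions (the patent only states that the gains come from the dB difference between the two spectra and are smoothed over successive frames):

```python
import numpy as np

def eq_gains_db(physio_frame, mic_frame, n_bands=8, fs=8000, f_max=1500.0,
                alpha=0.9, prev_gains=None):
    """Per-band equalisation gains (dB) from the difference between the
    physiological-sensor and microphone spectra, optionally smoothed
    against the previous frame's gains."""
    eps = 1e-12
    P = 20 * np.log10(np.abs(np.fft.rfft(physio_frame)) + eps)
    M = 20 * np.log10(np.abs(np.fft.rfft(mic_frame)) + eps)
    freqs = np.fft.rfftfreq(len(physio_frame), 1.0 / fs)
    edges = np.linspace(0.0, f_max, n_bands + 1)
    gains = np.zeros(n_bands)
    for i in range(n_bands):
        sel = (freqs >= edges[i]) & (freqs < edges[i + 1])
        # positive difference => sensor band too strong => attenuate (dB < 0)
        gains[i] = -(P[sel].mean() - M[sel].mean())
    if prev_gains is not None:            # inter-frame smoothing
        gains = alpha * np.asarray(prev_gains) + (1 - alpha) * gains
    return gains
```

For instance, a sensor signal uniformly 6 dB hotter than the microphone reference yields gains of about −6 dB in every band.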
Landscapes
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Quality & Reliability (AREA)
- Computational Linguistics (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Circuit For Audible Band Transducer (AREA)
- Soundproofing, Sound Blocking, And Sound Damping (AREA)
- Details Of Audible-Bandwidth Transducers (AREA)
- Telephone Set Structure (AREA)
Abstract
Description
L'invention concerne un casque audio du type micro/casque combinés.The invention relates to a headset type microphone / headset combined.
Un tel casque peut notamment être utilisé pour des fonctions de communication telles que des fonctions de téléphonie "mains libres", en complément de l'écoute d'une source audio (musique par exemple) provenant d'un appareil sur lequel est branché le casque.Such a headset can in particular be used for communication functions such as "hands-free" telephony functions, in addition to listening to an audio source (music for example) coming from a device on which the headphones are connected. .
Dans les fonctions de communication, l'une des difficultés est d'assurer une intelligibilité suffisante du signal capté par le microphone ("micro"), c'est-à-dire le signal de parole du locuteur proche (le porteur du casque). Le casque peut en effet être utilisé dans un environnement bruyant (métro, rue passante, train, etc.), de sorte que le micro captera non seulement la parole du porteur du casque, mais également les bruits parasites environnants.In the communication functions, one of the difficulties is to ensure sufficient intelligibility of the signal picked up by the microphone ("microphone"), that is to say the speech signal of the close speaker (the helmet wearer). . The helmet can indeed be used in a noisy environment (metro, busy street, train, etc.), so that the microphone will not only capture the speech of the wearer of the helmet, but also the surrounding noise.
Le porteur peut être protégé de ces bruits par le casque, notamment s'il s'agit d'un modèle à écouteurs fermés isolant l'oreille de l'extérieur, et encore plus si le casque est pourvu d'un "contrôle actif de bruit". En revanche le locuteur distant (celui se trouvant à l'autre bout du canal de communication) souffrira des bruits parasites captés par le micro, venant se superposer et interférer avec le signal de parole du locuteur proche (le porteur du casque).The wearer can be protected from these noises by the helmet, especially if it is a model with closed headphones isolating the ear from the outside, and even more if the headset is provided with an "active control of noise". On the other hand, the distant speaker (the one at the other end of the communication channel) will suffer from the noise picked up by the microphone, coming to overlap and interfere with the speech signal of the nearby speaker (the helmet wearer).
En particulier, certains formants de la parole essentiels à la compréhension de la voix sont souvent noyés dans des composantes de bruit couramment rencontrées dans les environnements habituels, composantes qui sont majoritairement concentrées dans les basses fréquences.In particular, some speech formers essential to the understanding of the voice are often embedded in noise components commonly encountered in the usual environments, components which are mainly concentrated in the low frequencies.
Il a été proposé de recueillir certaines vibrations vocales au moyen d'un capteur physiologique appliqué contre la joue ou la tempe du porteur du casque. En effet, lorsqu'une personne émet un son voisé (c'est-à-dire une composante de parole dont la production s'accompagne d'une vibration des cordes vocales), une vibration se propage depuis les cordes vocales jusqu'au pharynx et à la cavité bucco-nasale, où elle est modulée, amplifiée et articulée. La bouche, le voile du palais, le pharynx, les sinus les fosses nasales servent de caisse de résonance ce son voisé et, leurs parois étant élastiques, elles vibrent à leur tour, et ces vibrations sont transmises par conduction osseuse interne et sont perceptibles au niveau de la joue et de la tempe.It has been proposed to collect certain vocal vibrations by means of a physiological sensor applied against the cheek or the temple of the helmet wearer. Indeed, when a person makes a voiced sound (that is, a speech component whose production is accompanied by a vibration of the vocal cords), a vibration propagates from the vocal cords to the pharynx and to the bucco-nasal cavity, where it is modulated, amplified and articulated. The mouth, the soft palate, the pharynx, the sinuses and the nasal fossae serve as a sounding board for this voiced sound and, their walls being elastic, they vibrate in turn, and these vibrations are transmitted by internal bone conduction and are perceptible at level of the cheek and temple.
These vocal vibrations at the cheek and the temple have the characteristic of being, by nature, very little corrupted by ambient noise: indeed, in the presence of external noise, the tissues of the cheek and the temple hardly vibrate at all, whatever the spectral composition of the external noise.
Moreover, because of the filtering caused by the propagation of the vibrations to the temple, the signal picked up by the physiological sensor is usable only in the low frequencies. But since the noises generally encountered in everyday environments (street, metro, train, etc.) are mostly concentrated in the low frequencies, the physiological sensor delivers a signal naturally free of parasitic noise components (which is not possible with a conventional microphone).
The
- two earphones, each having a transducer for the sound reproduction of an audio signal;
- a physiological sensor, adapted to come into contact with the cheek or the temple of the headset wearer so as to be coupled thereto and to pick up the non-acoustic vocal vibrations transmitted by internal bone conduction, this physiological sensor delivering a first speech signal;
- a microphone assembly, comprising at least one microphone capable of picking up the acoustic vocal vibrations transmitted through the air from the mouth of the headset wearer, this microphone assembly delivering a second speech signal; and
- mixing means, for combining the first speech signal and the second speech signal and outputting a third speech signal representative of the speech uttered by the headset wearer.
The
Of course, the signal picked up by the physiological sensor is not, strictly speaking, speech, since speech is not formed only of voiced sounds; it contains components that do not originate at the vocal cords: the frequency content is, for example, much richer in the sound coming from the throat and emitted through the mouth. Moreover, internal bone conduction and the crossing of the skin have the effect of filtering out certain vocal components, so that the signal delivered by the physiological sensor is usable only in the lowest part of the spectrum. That is why this signal is supplemented by another signal, delivered by a conventional microphone sensor, with which it is combined.
The general problem addressed by the invention is, in such a context, to deliver to the far-end speaker a voice signal representative of the speech uttered by the near speaker, a signal freed from the parasitic components of the external noise present in the near speaker's environment.
An important aspect of this problem is the need to deliver a natural and intelligible speech signal, that is, one that is undistorted and whose range of useful frequencies is not truncated by the processing that combines the signals from sensors exploiting vibrations that are different in nature and transmitted along different paths.
Another aspect of the invention lies in the possibility of making effective use of the signal from the physiological sensor to control various signal-processing functions. This signal indeed gives access to new information about the speech content, which is then used for denoising as well as for various auxiliary functions described below, in particular the calculation of the cutoff frequency of a dynamic filter.
To solve these problems, the invention proposes a microphone/headset combination of the type described above, as taught by the
- the physiological sensor is incorporated in a circumaural pad of a shell of one of the earphones;
- the microphone assembly comprises two microphones placed on the shell of one of the earphones;
- the two microphones are aligned in a linear array along a principal direction directed towards the mouth of the headset wearer; and
- non-frequency-domain noise reduction means are provided for the second speech signal, comprising a combiner able to apply a delay to the signal delivered by one of the microphones and to subtract this delayed signal from the signal delivered by the other microphone, so as to denoise the near speech signal uttered by the headset wearer.
Advantageously, the microphone/headset combination comprises low-pass filtering means for the first speech signal before combination by the mixing means, and/or high-pass filtering means for the second speech signal before denoising and combination by the mixing means. These low-pass and/or high-pass filtering means comprise a filter with an adjustable cutoff frequency, and the headset comprises means for calculating the cutoff frequency, operating as a function of the signal delivered by the physiological sensor. The means for calculating the cutoff frequency may in particular comprise means for analyzing the spectral content of the signal delivered by the physiological sensor, able to determine the cutoff frequency as a function of the relative levels of the signal-to-noise ratio evaluated in a plurality of distinct frequency bands of the signal delivered by the physiological sensor.
Preferably, the denoising means for the second speech signal are non-frequency-domain noise reduction means; in a particular embodiment of the invention, the microphone assembly comprises two microphones, and the non-frequency-domain noise reduction means comprise a combiner able to apply a delay to the signal delivered by one of the microphones and to subtract this delayed signal from the signal delivered by the other microphone.
In particular, the two microphones can be aligned in a linear array along a main direction directed towards the mouth of the headset wearer.
Also preferably, denoising means are provided for the third speech signal delivered by the mixing means, in particular frequency-domain noise reduction means.
To this end, and according to an original aspect of the invention, means are provided that receive as input, and compute a cross-correlation between, the first and third speech signals, and output a speech-presence probability signal that is a function of the result of the cross-correlation. The denoising means for the third speech signal receive this speech-presence probability signal as input in order, selectively: i) to apply a noise reduction differentiated across frequency bands as a function of the value of the speech-presence probability signal, and ii) to apply a maximum noise reduction on all frequency bands in the absence of speech.
Post-processing means may also be provided, able to perform a selective equalization by frequency bands in the part of the spectrum corresponding to the signal picked up by the physiological sensor. These means determine an equalization gain for each of the frequency bands, this gain being calculated from the respective frequency-domain coefficients of the signals delivered by the microphone(s) and of the signals delivered by the physiological sensor. They furthermore smooth the calculated equalization gain over a plurality of successive signal frames.
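The per-band equalization gain with smoothing over successive frames can be sketched as follows. All names are illustrative; the first-order recursive smoothing is an assumption, since the text only states that the gain is smoothed over a plurality of frames.

```python
import numpy as np

def equalization_gains(mic_frames, phys_frames, alpha=0.9):
    """Post-processing equalization sketch: a per-band gain computed from
    the ratio of the microphone and physiological-sensor frequency
    coefficients, smoothed over successive frames with a first-order
    recursion (assumed smoothing rule)."""
    mic_mag = np.abs(mic_frames)          # magnitudes of frequency coefficients
    phy_mag = np.abs(phys_frames)
    g = mic_mag[0] / (phy_mag[0] + 1e-12)  # initialize from the first frame
    out = np.empty_like(mic_mag)
    for n in range(mic_mag.shape[0]):
        inst = mic_mag[n] / (phy_mag[n] + 1e-12)  # instantaneous per-band gain
        g = alpha * g + (1.0 - alpha) * inst      # smooth over successive frames
        out[n] = g
    return out
```

For a stationary spectral mismatch between the two sensors, the smoothed gain settles at the ratio of the two magnitude spectra.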
An example of implementation of the device of the invention will now be described, with reference to the appended drawings, in which the same reference numerals denote identical or functionally similar elements from one figure to the next.
- Figure 1 generally illustrates the headset of the invention, placed on the head of a user.
- Figure 2 is a block diagram, in the form of functional blocks, explaining the way in which the signal processing is carried out to output a denoised signal representative of the speech uttered by the headset wearer.
- Figure 3 is an amplitude/frequency spectral representation illustrating the cross-correlation calculation used to evaluate a speech-presence probability.
- Figure 4 is an amplitude/frequency spectral representation illustrating the final automatic equalization processing performed after the noise reduction.
In the
This headset is provided with a physiological sensor 18 for picking up the vibrations produced by a voiced sound uttered by the headset wearer, vibrations that can be sensed at the cheek or the temple. The sensor 18 is preferably an accelerometer integrated into the pad 16 so as to press against the user's cheek or temple with the closest possible coupling. The physiological sensor may in particular be placed on the inner face of the pad's skin so that, once the headset is in place, the physiological sensor is applied against the user's cheek or temple under the effect of a slight pressure resulting from the compression of the pad material, with only the pad's skin interposed.
The headset also comprises a microphone array, for example two omnidirectional microphones 20, 22, placed on the shell of the earphone 12. These two microphones, front 20 and rear 22, are omnidirectional microphones arranged relative to one another so that their alignment direction 24 points approximately towards the mouth 26 of the headset wearer.
The
The method of the invention is implemented by software means, which can be broken down and represented schematically by a number of blocks 30 to 64 illustrated in the
This figure again shows the physiological sensor 18 and the two omnidirectional microphones, front 20 and rear 22. Reference 28 denotes the sound-reproduction transducer placed inside the shell of the earphone. These various elements deliver signals that are processed by the block referenced 30, which can be coupled through an interface 32 to the communication circuits (telephone circuits); it receives at input E the sound intended to be reproduced by the transducer 28 (speech of the far-end speaker during a telephone call, a music source outside telephone calls), and delivers at output S a signal representative of the speech of the near speaker, that is, of the headset wearer.
The signal to be reproduced, applied to the input E, is a digital signal converted to analog by the converter 34, then amplified by the amplifier 36 for reproduction by the transducer 28.
We will now describe the way in which the denoised signal representative of the near speaker's speech is produced from the respective signals picked up by the physiological sensor 18 and the microphones 20 and 22.
The signal picked up by the physiological sensor 18 mainly comprises components in the lower region of the sound spectrum (typically 0-1500 Hz). As explained above, this signal is naturally noise-free.
The signals picked up by the microphones 20, 22 will be used mainly for the upper part of the spectrum (above 1500 Hz), but these signals are heavily noisy, and strong denoising will be essential to remove the parasitic noise components, whose level can be such, in some environments, that they completely mask the speech signal picked up by these microphones 20, 22.
The first processing step is echo cancellation, applied to the signals from the physiological sensor and the microphones.
Indeed, the sound reproduced by the transducer 28 is picked up by the physiological sensor 18 and the microphones 20, 22, generating an echo that disturbs the operation of the system and must therefore be removed upstream.
This echo cancellation is implemented by the blocks 38, 40 and 42, each of these blocks receiving on a first input the signal delivered by the sensor 18, 20 or 22 and on a second input the signal reproduced by the transducer 28 (the echo-generating signal), and delivering at its output, for further processing, a signal from which the echo has been removed.
The echo cancellation is performed, for example, by an adaptive-algorithm process such as the one described in the
This modeling is based on the search for a correlation between the signal reproduced by the transducer 28 and the signal picked up by the physiological sensor 18 (or the microphone 20 or 22), that is, on an estimate of the impulse response of the coupling formed by the body of the earphone 12 supporting these various elements.
The processing is performed in particular by an adaptive APA (Affine Projection Algorithm) algorithm, which provides fast convergence, well suited to hands-free applications with intermittent speech and a level that can vary quickly.
Advantageously, the iterative algorithm is executed with a variable step, as described in the
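The adaptive echo canceller described above can be sketched as follows. The patent uses an APA algorithm with a variable step; the sketch below uses NLMS, which is the projection-order-1 special case of APA, and all names are illustrative.

```python
import numpy as np

def nlms_echo_cancel(mic, ref, order=32, mu=0.5, eps=1e-8):
    """Adaptive echo cancellation sketch (blocks 38/40/42). `ref` is the
    signal fed to the earpiece transducer 28 (the echo-generating signal),
    `mic` is a sensor signal containing its echo; the filter estimates the
    echo-path impulse response and subtracts the predicted echo."""
    w = np.zeros(order)          # estimated echo-path impulse response
    x = np.zeros(order)          # delay line of the reference signal
    out = np.zeros_like(mic)
    for n in range(len(mic)):
        x = np.roll(x, 1)
        x[0] = ref[n]
        e = mic[n] - w @ x                  # echo-free output sample
        w += mu * e * x / (x @ x + eps)     # normalized-step update
        out[n] = e
    return out
```

With a white-noise reference and a short true echo path, the residual echo after convergence is far below the raw echo level.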
The signal picked up by the physiological sensor 18 after echo cancellation by the block 38 will be used as the input signal of a block 44 that calculates a cutoff frequency FC.
The next step is to filter the signals, with a low-pass filter 48 for the signal of the physiological sensor 18 and with high-pass filters 50, 52 for the signals picked up by the microphones 20, 22, respectively.
These filters 48, 50 and 52 are preferably digital filters of the infinite impulse response (IIR) type (recursive filters), which exhibit a relatively sharp transition between the passband and the stopband.
Advantageously, these filters are adaptive filters whose cutoff frequency is variable and determined dynamically by the block 44.
This makes it possible to adapt the filtering to the particular conditions of use of the headset: how loudly the wearer speaks, how close the coupling is between the physiological sensor 18 and the wearer's cheek or temple, and so on. The cutoff frequency FC, which is preferably the same for the low-pass filter 48 and the high-pass filters 50 and 52, is determined from the signal of the physiological sensor 18 after the echo cancellation 38. To this end, an algorithm computes the signal-to-noise ratio for several frequency bands located in a range between, for example, 0 and 2500 Hz (the noise level being given by a calculation of the energy in a higher frequency band, for example between 3000 and 4000 Hz, since it is known that in this zone the signal can only be noise, owing to the properties of the component forming the physiological sensor 18). The chosen cutoff frequency will correspond to the highest frequency for which the signal-to-noise ratio exceeds a predetermined threshold, for example 10 dB.
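The cutoff-frequency rule of block 44 can be sketched as follows. The function and band-width value are illustrative; only the 0-2500 Hz search range, the 3000-4000 Hz noise-reference band and the 10 dB threshold come from the text.

```python
import numpy as np

def estimate_cutoff(spectrum, freqs, snr_threshold_db=10.0,
                    noise_band=(3000.0, 4000.0),
                    search_max=2500.0, band_width=250.0):
    """Adaptive cutoff FC from the physiological-sensor spectrum: the noise
    floor is measured in a high band (3-4 kHz) where the sensor carries
    only noise, then FC is the top of the highest band below 2.5 kHz whose
    SNR still exceeds the threshold (10 dB by default)."""
    power = np.abs(np.asarray(spectrum)) ** 2
    freqs = np.asarray(freqs)
    noise_sel = (freqs >= noise_band[0]) & (freqs < noise_band[1])
    noise_level = power[noise_sel].mean()
    cutoff = 0.0
    f_lo = 0.0
    while f_lo < search_max:
        band = (freqs >= f_lo) & (freqs < f_lo + band_width)
        snr_db = 10.0 * np.log10(power[band].mean() / noise_level + 1e-12)
        if snr_db > snr_threshold_db:
            cutoff = f_lo + band_width   # remember the highest qualifying band
        f_lo += band_width
    return cutoff
```

For a sensor spectrum with strong content below 1 kHz and only the noise floor above it, FC comes out at the edge of the voiced region.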
The next step, carried out by the block 54, is a mixing operation to reconstruct the complete spectrum with, on the one hand, the lower region of the spectrum given by the filtered signal of the physiological sensor 18 and, on the other hand, the top of the spectrum given by the filtered signal of the microphones 20 and 22 after passing through a combiner/phase-shifter 56 that performs a denoising in this part of the spectrum. This reconstruction is performed by summing the two signals, which are applied in synchronism to the mixing block 54 so as to avoid any distortion.
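The two-branch reconstruction of block 54 can be sketched as follows. The patent specifies adjustable-cutoff IIR filters; for brevity this sketch substitutes ideal FFT-domain complementary filters, and all names are illustrative.

```python
import numpy as np

def mix_bands(physio, mic, fs, fc):
    """Rebuild the full spectrum (block 54): the low band comes from the
    physiological-sensor branch, the high band from the (denoised)
    microphone branch; the two branches are summed in synchronism."""
    n = len(physio)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    lo = np.fft.rfft(physio)
    hi = np.fft.rfft(mic)
    lo[freqs > fc] = 0.0      # low-pass branch (physiological sensor)
    hi[freqs <= fc] = 0.0     # complementary high-pass branch (microphones)
    return np.fft.irfft(lo + hi, n)
```

With a 200 Hz tone on the sensor branch and a 3000 Hz tone on the microphone branch, both survive in the output, while each branch's out-of-band content is rejected.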
We shall now describe more precisely the way in which the noise reduction is performed by the combiner/phase-shifter 56.
The signal to be denoised (that is, the near speaker's signal located in the upper part of the spectrum, typically the frequency components above 1500 Hz) comes from the two microphones 20, 22 arranged a few centimeters apart on the shell 14 of one of the earphones of the headset. As indicated above, these two microphones are arranged relative to one another so that the direction 24 they define is approximately oriented towards the mouth 26 of the headset wearer. As a result, a speech signal emitted from the mouth will reach the front microphone 20 and then the rear microphone 22 with a substantially constant delay, and hence phase shift, whereas ambient noise will be picked up without phase shift by the two microphones 20 and 22 (which are omnidirectional microphones), given the distance of the parasitic noise sources from the two microphones 20 and 22.
The noise reduction on the signals picked up by the microphones 20 and 22 is performed not in the frequency domain (as is often the case) but in the time domain, by means of the combiner/phase-shifter 56, which comprises a phase-shifter 58 applying a delay τ to the signal of the rear microphone 22 and a combiner 60 that subtracts this delayed signal from the signal of the front microphone 20.
This forms a first-order differential microphone array, equivalent to a single virtual microphone whose directivity can be adjusted as a function of the value of τ, with 0 ≤ τ ≤ τA (τA being the value corresponding to the natural delay between the two microphones 20 and 22, equal to the distance between the two microphones divided by the speed of sound, i.e. a delay of about 30 µs for a spacing of 1 cm). A value τ = τA gives a cardioid directivity pattern, a value τ = τA/3 a hypercardioid pattern, and a value τ = 0 a dipole pattern. With an appropriate choice of this parameter, an attenuation of about 6 dB on surrounding diffuse noise can be obtained. For more details on this technique, reference may be made, for example, to:
[1]
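The delay-and-subtract stage (phase-shifter 58 + combiner 60) can be sketched as follows. The integer-sample delay is a simplifying assumption (a real implementation would use a fractional delay, since τA is about 30 µs), and all names are illustrative.

```python
import numpy as np

def differential_mic(front, rear, fs, spacing_m=0.01, pattern="cardioid"):
    """First-order differential array: delay the rear-mic signal by tau and
    subtract it from the front-mic signal. tau_a = spacing / speed of sound;
    tau = tau_a -> cardioid, tau_a/3 -> hypercardioid, 0 -> dipole."""
    tau_a = spacing_m / 343.0
    tau = {"cardioid": tau_a, "hypercardioid": tau_a / 3.0, "dipole": 0.0}[pattern]
    d = int(round(tau * fs))                 # integer-sample approximation
    delayed = np.concatenate([np.zeros(d), rear[:len(rear) - d]])
    return front - delayed
```

As a sanity check, noise arriving identically (with no inter-mic delay) at both microphones is cancelled exactly by the dipole setting (τ = 0).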
We will now describe the processing applied to the overall signal (top and bottom of the spectrum) delivered at the output of the mixing means 54.
This signal is subjected by the block 62 to a frequency-domain noise reduction.
Preferably, this frequency-domain noise reduction is operated differently in the presence or absence of speech, by evaluating a probability p of absence of speech from the signal picked up by the physiological sensor 18.
Advantageously, this probability of absence of speech is derived from the information given by the physiological sensor.
Indeed, as indicated above, the signal delivered by this sensor has a very good signal-to-noise ratio up to the cutoff frequency FC determined by the block 44. But beyond this cutoff frequency the signal-to-noise ratio still remains good, and is often better than that of the microphones 20 and 22. The sensor's information is exploited by computing (block 64) the frequency-domain cross-correlation between the combined signal delivered by the mixing block 54 and the unfiltered signal of the physiological sensor, before filtering by the low-pass filter 48.
Thus, for each frequency f between, for example, FC and 4000 Hz, and for each frame n, the following calculation is performed by the block 64:
Smix(f) and smix(f) being the frequency-domain (complex) vector representations, for frame n, of the combined signal delivered by the mixing block 54 and of the signal of the physiological sensor 18, respectively.
To evaluate a probability of absence of speech, the algorithm searches for the frequencies at which there is only noise (absence-of-speech situation): on the spectrogram of the signal delivered by the mixing block 54, some harmonics are drowned in the noise, whereas they stand out more in the signal of the physiological sensor.
The cross-correlation calculation using the formula described above produces a result of which the
The peaks P1, P2, P3, P4, ... of this cross-correlation calculation indicate a strong correlation between the combined signal delivered by the mixing block 54 and the signal of the physiological sensor 18, and the emergence of these correlated frequencies indicates the probable presence of speech at these frequencies.
To obtain a probability of absence of speech (block 66), the complementary value is considered:
The value coefficient_normalisation makes it possible to adjust the distribution of the probabilities as a function of the cross-correlation value, and to obtain values between 0 and 1.
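Blocks 64 and 66 can be sketched as follows. The patent's exact cross-correlation formula is an image not reproduced in this text, so the sketch assumes a magnitude-squared-coherence-style measure over a set of frames; the function name, the averaging over frames and the clipping rule are all assumptions.

```python
import numpy as np

def speech_absence_probability(frames_mix, frames_phys, norm_coef=1.0):
    """Per-frequency speech-absence probability sketch: bins where the
    mixed signal (block 54 output) and the physiological-sensor signal are
    strongly correlated are likely speech; the absence probability is the
    complementary value, scaled by a normalization coefficient and clipped
    into [0, 1]."""
    cross = np.mean(frames_mix * np.conj(frames_phys), axis=0)
    p_mix = np.mean(np.abs(frames_mix) ** 2, axis=0)
    p_phy = np.mean(np.abs(frames_phys) ** 2, axis=0)
    coherence = np.abs(cross) ** 2 / (p_mix * p_phy + 1e-12)   # 1 = fully correlated
    return np.clip(1.0 - norm_coef * coherence, 0.0, 1.0)
```

A bin shared by both signals yields an absence probability near 0, while a bin carrying independent noise in each signal yields a probability near 1.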
The probability p of absence of speech thus obtained is applied to the block 62, which performs on the signal delivered by the mixing block 54 a frequency-domain noise reduction applied selectively with respect to a given threshold of the probability of absence of speech:
- in the probable absence of speech, noise reduction is applied to all frequency bands, i.e. the maximum reduction gain is applied identically to all components of the signal (since in this case the signal most likely contains no useful component);
- in the probable presence of speech, on the other hand, the noise reduction is a frequency-domain reduction applied selectively to the different frequency bands as a function of the value p of the speech-presence probability, according to a conventional scheme, for example comparable to the one described in
WO 2007/099222 A1 (Parrot
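The two-regime behaviour described above can be sketched as follows. The threshold value and the linear per-band gain law are illustrative assumptions; the patent defers the per-band rule to a conventional scheme such as that of WO 2007/099222 A1.

```python
import numpy as np

def apply_noise_reduction(spectrum, p_absence, max_reduction_db=-40.0,
                          absence_threshold=0.9):
    """Two-regime noise reduction (illustrative sketch): full reduction on
    every band when speech is probably absent, otherwise per-band gains
    driven by the speech-presence probability."""
    max_gain = 10.0 ** (max_reduction_db / 20.0)  # e.g. -40 dB -> 0.01
    if np.mean(p_absence) > absence_threshold:
        # Probable absence of speech: maximum reduction gain on all bands
        gains = np.full_like(p_absence, max_gain)
    else:
        # Probable presence: a band keeps more signal the more likely it is
        # to carry speech (gain interpolated between max_gain and unity)
        p_presence = 1.0 - p_absence
        gains = max_gain + (1.0 - max_gain) * p_presence
    return spectrum * gains
```

With the default -40 dB floor, a frame judged speech-free is attenuated uniformly by a factor 0.01, while in a speech frame only the low-presence bands approach that floor.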
The system just described achieves excellent overall performance, typically of the order of 30 to 40 dB of noise reduction on the near speaker's speech signal. Thanks to the elimination of all parasitic noise, in particular the most troublesome kinds (train, metro, etc.), which are concentrated in the low frequencies, the far-end speaker (the person with whom the headset wearer is communicating) gets the impression that his interlocutor (the headset wearer) is in a quiet room.
Finally, it is advantageous to apply a final equalization to the signal (block 68), in particular on the low end of the spectrum.
Indeed, the low-frequency content picked up at the cheek or temple by the physiological sensor 18 differs from the low-frequency content of the sound emitted by the user's mouth, as it would be captured by a microphone placed a few centimetres from the mouth, or even by the ear of an interlocutor. The use of the physiological sensor and the filtering described above certainly yields a signal that is very good in terms of signal-to-noise ratio, but one whose timbre may sound somewhat muffled and unnatural to the interlocutor hearing it.
To overcome this difficulty, it is advantageous to equalize the output signal with gains adjusted selectively over different frequency bands, in the region of the spectrum corresponding to the signal picked up by the physiological sensor. The equalization can be performed automatically, from the signal delivered by the microphones 20, 22, before filtering.
In order to optimize the rendering of the signal picked up by the physiological sensor, differentiated gains G1, G2, G3, G4, ... are applied to different frequency bands in the low-frequency part of the spectrum.
These gains are evaluated by comparing the signals picked up, in a common frequency band, by both the physiological sensor 18 and the microphones 20 and/or 22.
More precisely, the algorithm computes the respective Fourier transforms of these two signals, giving a series of frequency coefficients (expressed in dB), NormPhysioFreq_dB(i) and NormMicFreq_dB(i), corresponding respectively to the norm of the i-th Fourier coefficient of the physiological sensor signal and to the norm of the i-th Fourier coefficient of the microphone signal.
For each frequency coefficient of rank i, if the difference NormPhysioFreq_dB(i) − NormMicFreq_dB(i) is positive, the gain to be applied will be less than unity (negative in dB); conversely, if the difference is negative, the gain to be applied will be greater than unity (positive in dB).
If the gain were applied as such, the differences not being exactly constant from one frame to the next, in particular for unvoiced sounds, there would be large equalization variations in the timbre. To avoid these variations, the algorithm smooths the difference, which refines the equalization:

SmoothedDiff(i) = λ · SmoothedDiff_previous(i) + (1 − λ) · Diff(i)
The closer the coefficient λ is to 1, the less the information of the current frame is taken into account in the computation of the gain of the i-th coefficient. Conversely, the closer λ is to 0, the more the instantaneous information is taken into account. In practice, for the smoothing to be effective, a value of λ close to 1 is chosen, for example λ = 0.99. The gain applied to each frequency band of the signal from the physiological sensor then gives, for the i-th modified frequency:

NormPhysioFreqModified_dB(i) = NormPhysioFreq_dB(i) − SmoothedDiff(i)
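One frame of this equalization can be sketched as follows. The λ value and the sign convention (positive difference yields attenuation) follow the text; the function name, the use of `numpy.fft.rfft`, and the handling of the running smoothed state are assumptions for illustration.

```python
import numpy as np

def update_equalizer_gains(physio_frame, mic_frame, smoothed_diff_db,
                           lam=0.99):
    """Compare the dB norms of the physiological-sensor and microphone
    Fourier coefficients, smooth the per-coefficient difference across
    frames, and return the equalization gains in dB (illustrative sketch)."""
    eps = 1e-12  # avoid log10(0) on silent coefficients
    norm_physio_db = 20.0 * np.log10(np.abs(np.fft.rfft(physio_frame)) + eps)
    norm_mic_db = 20.0 * np.log10(np.abs(np.fft.rfft(mic_frame)) + eps)
    # Positive difference -> sensor louder than mic -> gain below 0 dB
    diff_db = norm_physio_db - norm_mic_db
    # Exponential smoothing: lambda close to 1 favours the accumulated
    # history over the current frame, avoiding timbre fluctuations
    smoothed_diff_db = lam * smoothed_diff_db + (1.0 - lam) * diff_db
    gains_db = -smoothed_diff_db
    return gains_db, smoothed_diff_db
```

The state array `smoothed_diff_db` is carried from frame to frame; applying `gains_db` to the sensor spectrum drives its per-band level towards that of the reference microphone signal.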
It is this norm that is used by the equalization algorithm.
Applying differentiated gains makes the speech signal more natural in the lower part of the spectrum. A subjective study showed that, in a quiet environment and when such an equalization is applied, the difference between a reference microphone signal and the signal produced by the physiological sensor in the low end of the spectrum is practically imperceptible.
Claims (9)
so as to denoise the near speech signal emitted by the headset wearer.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1153572A FR2974655B1 (en) | 2011-04-26 | 2011-04-26 | MICRO / HELMET AUDIO COMBINATION COMPRISING MEANS FOR DEBRISING A NEARBY SPEECH SIGNAL, IN PARTICULAR FOR A HANDS-FREE TELEPHONY SYSTEM. |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2518724A1 true EP2518724A1 (en) | 2012-10-31 |
EP2518724B1 EP2518724B1 (en) | 2013-10-02 |
Family
ID=45939241
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP12164777.0A Not-in-force EP2518724B1 (en) | 2011-04-26 | 2012-04-19 | Microphone/headphone audio headset comprising a means for suppressing noise in a speech signal, in particular for a hands-free telephone system |
Country Status (5)
Country | Link |
---|---|
US (1) | US8751224B2 (en) |
EP (1) | EP2518724B1 (en) |
JP (1) | JP6017825B2 (en) |
CN (1) | CN102761643B (en) |
FR (1) | FR2974655B1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015144708A1 (en) * | 2014-03-25 | 2015-10-01 | Elno | Acoustic apparatus comprising at least one electroacoustic microphone, an osteophonic microphone and means for calculating a corrected signal, and associated item of headwear |
EP2945399A1 (en) | 2014-05-16 | 2015-11-18 | Parrot | Audio headset with active noise control anc with prevention of the effects of saturation of a microphone signal feedback |
EP3163572A1 (en) * | 2015-10-29 | 2017-05-03 | BlackBerry Limited | Method and device for supressing ambient noise in a speech signal generated at a microphone of the device |
EP3171612A1 (en) | 2015-11-19 | 2017-05-24 | Parrot Drones | Audio headphones with active noise control, anti-occlusion control and passive attenuation cancellation, based on the presence or the absence of a vocal activity of the headphone user |
CN110447073B (en) * | 2017-03-20 | 2023-11-03 | 伯斯有限公司 | Audio signal processing for noise reduction |
Families Citing this family (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9247346B2 (en) | 2007-12-07 | 2016-01-26 | Northern Illinois Research Foundation | Apparatus, system and method for noise cancellation and communication for incubators and related devices |
US9135915B1 (en) * | 2012-07-26 | 2015-09-15 | Google Inc. | Augmenting speech segmentation and recognition using head-mounted vibration and/or motion sensors |
US9704486B2 (en) * | 2012-12-11 | 2017-07-11 | Amazon Technologies, Inc. | Speech recognition power management |
CN103208291A (en) * | 2013-03-08 | 2013-07-17 | 华南理工大学 | Speech enhancement method and device applicable to strong noise environments |
US9560444B2 (en) * | 2013-03-13 | 2017-01-31 | Cisco Technology, Inc. | Kinetic event detection in microphones |
JP6123503B2 (en) * | 2013-06-07 | 2017-05-10 | 富士通株式会社 | Audio correction apparatus, audio correction program, and audio correction method |
CN109327789B (en) | 2013-06-28 | 2021-07-13 | 哈曼国际工业有限公司 | Method and system for enhancing sound reproduction |
DE102013216133A1 (en) * | 2013-08-14 | 2015-02-19 | Sennheiser Electronic Gmbh & Co. Kg | Handset or headset |
US9180055B2 (en) * | 2013-10-25 | 2015-11-10 | Harman International Industries, Incorporated | Electronic hearing protector with quadrant sound localization |
US20150118960A1 (en) * | 2013-10-28 | 2015-04-30 | Aliphcom | Wearable communication device |
US9036844B1 (en) | 2013-11-10 | 2015-05-19 | Avraham Suhami | Hearing devices based on the plasticity of the brain |
EP2882203A1 (en) | 2013-12-06 | 2015-06-10 | Oticon A/s | Hearing aid device for hands free communication |
US10219067B2 (en) | 2014-08-29 | 2019-02-26 | Harman International Industries, Incorporated | Auto-calibrating noise canceling headphone |
US9942848B2 (en) * | 2014-12-05 | 2018-04-10 | Silicon Laboratories Inc. | Bi-directional communications in a wearable monitor |
CN104486286B (en) * | 2015-01-19 | 2018-01-05 | 武汉邮电科学研究院 | A kind of up frame synchornization method of continuous subcarrier OFDMA system |
US9905216B2 (en) * | 2015-03-13 | 2018-02-27 | Bose Corporation | Voice sensing using multiple microphones |
US9847093B2 (en) * | 2015-06-19 | 2017-12-19 | Samsung Electronics Co., Ltd. | Method and apparatus for processing speech signal |
US20160379661A1 (en) * | 2015-06-26 | 2016-12-29 | Intel IP Corporation | Noise reduction for electronic devices |
GB2552178A (en) * | 2016-07-12 | 2018-01-17 | Samsung Electronics Co Ltd | Noise suppressor |
CN106211012B (en) * | 2016-07-15 | 2019-11-29 | 成都定为电子技术有限公司 | A kind of measurement and correction system and method for the response of earphone time-frequency |
JP6634354B2 (en) * | 2016-07-20 | 2020-01-22 | ホシデン株式会社 | Hands-free communication device for emergency call system |
CN110035808A (en) * | 2016-09-14 | 2019-07-19 | 声感股份有限公司 | With synchronous more equipment audio stream Transmission systems |
WO2018083511A1 (en) * | 2016-11-03 | 2018-05-11 | 北京金锐德路科技有限公司 | Audio playing apparatus and method |
WO2018199846A1 (en) * | 2017-04-23 | 2018-11-01 | Audio Zoom Pte Ltd | Transducer apparatus for high speech intelligibility in noisy environments |
US10341759B2 (en) * | 2017-05-26 | 2019-07-02 | Apple Inc. | System and method of wind and noise reduction for a headphone |
CN107180627B (en) * | 2017-06-22 | 2020-10-09 | 潍坊歌尔微电子有限公司 | Method and device for removing noise |
US10706868B2 (en) | 2017-09-06 | 2020-07-07 | Realwear, Inc. | Multi-mode noise cancellation for voice detection |
US10701470B2 (en) | 2017-09-07 | 2020-06-30 | Light Speed Aviation, Inc. | Circumaural headset or headphones with adjustable biometric sensor |
US10764668B2 (en) | 2017-09-07 | 2020-09-01 | Lightspeed Aviation, Inc. | Sensor mount and circumaural headset or headphones with adjustable sensor |
CN109729463A (en) * | 2017-10-27 | 2019-05-07 | 北京金锐德路科技有限公司 | The compound audio signal reception device of sound wheat bone wheat of formula interactive voice earphone is worn for neck |
JP7194912B2 (en) * | 2017-10-30 | 2022-12-23 | パナソニックIpマネジメント株式会社 | headset |
CN107886967B (en) * | 2017-11-18 | 2018-11-13 | 中国人民解放军陆军工程大学 | A kind of bone conduction sound enhancement method of depth bidirectional gate recurrent neural network |
US10438605B1 (en) * | 2018-03-19 | 2019-10-08 | Bose Corporation | Echo control in binaural adaptive noise cancellation systems in headsets |
CN110931027A (en) * | 2018-09-18 | 2020-03-27 | 北京三星通信技术研究有限公司 | Audio processing method and device, electronic equipment and computer readable storage medium |
CN109413539A (en) * | 2018-12-25 | 2019-03-01 | 珠海蓝宝石声学设备有限公司 | A kind of earphone and its regulating device |
EP3737115A1 (en) * | 2019-05-06 | 2020-11-11 | GN Hearing A/S | A hearing apparatus with bone conduction sensor |
CN110265056B (en) * | 2019-06-11 | 2021-09-17 | 安克创新科技股份有限公司 | Sound source control method, loudspeaker device and apparatus |
CN110121129B (en) * | 2019-06-20 | 2021-04-20 | 歌尔股份有限公司 | Microphone array noise reduction method and device of earphone, earphone and TWS earphone |
CN114424581A (en) | 2019-09-12 | 2022-04-29 | 深圳市韶音科技有限公司 | System and method for audio signal generation |
JP2022505997A (en) * | 2019-10-09 | 2022-01-17 | 大象声科(深セン)科技有限公司 | Deep learning voice extraction and noise reduction method that fuses bone vibration sensor and microphone signal |
TWI735986B (en) * | 2019-10-24 | 2021-08-11 | 瑞昱半導體股份有限公司 | Sound receiving apparatus and method |
CN113038318B (en) * | 2019-12-25 | 2022-06-07 | 荣耀终端有限公司 | Voice signal processing method and device |
TWI745845B (en) * | 2020-01-31 | 2021-11-11 | 美律實業股份有限公司 | Earphone and set of earphones |
KR20220017080A (en) * | 2020-08-04 | 2022-02-11 | 삼성전자주식회사 | Method for processing voice signal and apparatus using the same |
CN111935573B (en) * | 2020-08-11 | 2022-06-14 | Oppo广东移动通信有限公司 | Audio enhancement method and device, storage medium and wearable device |
CN114339569B (en) * | 2020-08-29 | 2023-05-26 | 深圳市韶音科技有限公司 | Method and system for obtaining vibration transfer function |
WO2022060891A1 (en) * | 2020-09-15 | 2022-03-24 | Dolby Laboratories Licensing Corporation | Method and device for processing a binaural recording |
US11259119B1 (en) | 2020-10-06 | 2022-02-22 | Qualcomm Incorporated | Active self-voice naturalization using a bone conduction sensor |
US11337000B1 (en) * | 2020-10-23 | 2022-05-17 | Knowles Electronics, Llc | Wearable audio device having improved output |
JP7467317B2 (en) * | 2020-11-12 | 2024-04-15 | 株式会社東芝 | Acoustic inspection device and acoustic inspection method |
US11943601B2 (en) | 2021-08-13 | 2024-03-26 | Meta Platforms Technologies, Llc | Audio beam steering, tracking and audio effects for AR/VR applications |
US20230050954A1 (en) * | 2021-08-13 | 2023-02-16 | Meta Platforms Technologies, Llc | Contact and acoustic microphones for voice wake and voice processing for ar/vr applications |
US20230253002A1 (en) * | 2022-02-08 | 2023-08-10 | Analog Devices International Unlimited Company | Audio signal processing method and system for noise mitigation of a voice signal measured by air and bone conduction sensors |
CN114724574A (en) * | 2022-02-21 | 2022-07-08 | 大连理工大学 | Double-microphone noise reduction method with adjustable expected sound source direction |
CN114333883B (en) * | 2022-03-12 | 2022-05-31 | 广州思正电子股份有限公司 | Head-wearing intelligent voice recognition device |
US11978468B2 (en) * | 2022-04-06 | 2024-05-07 | Analog Devices International Unlimited Company | Audio signal processing method and system for noise mitigation of a voice signal measured by a bone conduction sensor, a feedback sensor and a feedforward sensor |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0683621A2 (en) | 1994-05-18 | 1995-11-22 | Nippon Telegraph And Telephone Corporation | Transmitter-receiver having ear-piece type acoustic transducing part |
JPH08214391A (en) * | 1995-02-03 | 1996-08-20 | Iwatsu Electric Co Ltd | Bone-conduction and air-conduction composite type ear microphone device |
WO2000021194A1 (en) * | 1998-10-08 | 2000-04-13 | Resound Corporation | Dual-sensor voice transmission system |
JP2000261534A (en) | 1999-03-10 | 2000-09-22 | Nippon Telegr & Teleph Corp <Ntt> | Handset |
FR2792146A1 (en) | 1999-04-07 | 2000-10-13 | Parrot Sa | Hands free car acoustic echo audio suppression technique has APA type algorithm adaptively/dynamically noise/echo modifying received speech signal |
US20070088544A1 (en) * | 2005-10-14 | 2007-04-19 | Microsoft Corporation | Calibration based beamforming, non-linear adaptive filtering, and multi-sensor headset |
WO2007099222A1 (en) | 2006-03-01 | 2007-09-07 | Parrot | Method for suppressing noise in an audio signal |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5394918A (en) * | 1977-01-28 | 1978-08-19 | Masahisa Ikegami | Combtned mtcrophone |
JPH08223677A (en) * | 1995-02-15 | 1996-08-30 | Nippon Telegr & Teleph Corp <Ntt> | Telephone transmitter |
JPH11265199A (en) * | 1998-03-18 | 1999-09-28 | Nippon Telegr & Teleph Corp <Ntt> | Voice transmitter |
JP2002125298A (en) * | 2000-10-13 | 2002-04-26 | Yamaha Corp | Microphone device and earphone microphone device |
JP2003264883A (en) * | 2002-03-08 | 2003-09-19 | Denso Corp | Voice processing apparatus and voice processing method |
JP4348706B2 (en) * | 2002-10-08 | 2009-10-21 | 日本電気株式会社 | Array device and portable terminal |
CN1701528A (en) * | 2003-07-17 | 2005-11-23 | 松下电器产业株式会社 | Speech communication apparatus |
US7383181B2 (en) * | 2003-07-29 | 2008-06-03 | Microsoft Corporation | Multi-sensory speech detection system |
US7492889B2 (en) * | 2004-04-23 | 2009-02-17 | Acoustic Technologies, Inc. | Noise suppression based on bark band wiener filtering and modified doblinger noise estimate |
US7930178B2 (en) * | 2005-12-23 | 2011-04-19 | Microsoft Corporation | Speech modeling and enhancement based on magnitude-normalized spectra |
JP2007264132A (en) * | 2006-03-27 | 2007-10-11 | Toshiba Corp | Voice detection device and its method |
EP2294835A4 (en) * | 2008-05-22 | 2012-01-18 | Bone Tone Comm Ltd | A method and a system for processing signals |
JP5499633B2 (en) * | 2009-10-28 | 2014-05-21 | ソニー株式会社 | REPRODUCTION DEVICE, HEADPHONE, AND REPRODUCTION METHOD |
FR2976111B1 (en) * | 2011-06-01 | 2013-07-05 | Parrot | AUDIO EQUIPMENT COMPRISING MEANS FOR DEBRISING A SPEECH SIGNAL BY FRACTIONAL TIME FILTERING, IN PARTICULAR FOR A HANDS-FREE TELEPHONY SYSTEM |
US9020168B2 (en) * | 2011-08-30 | 2015-04-28 | Nokia Corporation | Apparatus and method for audio delivery with different sound conduction transducers |
2011
- 2011-04-26 FR FR1153572A patent/FR2974655B1/en active Active

2012
- 2012-04-18 US US13/450,361 patent/US8751224B2/en not_active Expired - Fee Related
- 2012-04-19 EP EP12164777.0A patent/EP2518724B1/en not_active Not-in-force
- 2012-04-25 CN CN201210124682.8A patent/CN102761643B/en not_active Expired - Fee Related
- 2012-04-26 JP JP2012100555A patent/JP6017825B2/en not_active Expired - Fee Related
Non-Patent Citations (2)
Title |
---|
M. BUCK, M. RÖSSLER: "First Order Differential Microphones Arrays for Automotive Applications", PROCEEDINGS OF THE 7TH INTERNATIONAL WORKSHOP ON ACOUSTIC ECHO AND NOISE CONTROL (IWAENC), 10 September 2001 (2001-09-10), XP002680249 * |
M. BUCK; M. RÖSSLER: "First Order Differential Microphones Arrays for Automotive Applications", PROCEEDINGS OF THE 7TH INTERNATIONAL WORKSHOP ON ACOUSTIC ECHO AND NOISE CONTROL (IWAENC), 10 September 2001 (2001-09-10) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015144708A1 (en) * | 2014-03-25 | 2015-10-01 | Elno | Acoustic apparatus comprising at least one electroacoustic microphone, an osteophonic microphone and means for calculating a corrected signal, and associated item of headwear |
FR3019422A1 (en) * | 2014-03-25 | 2015-10-02 | Elno | ACOUSTICAL APPARATUS COMPRISING AT LEAST ONE ELECTROACOUSTIC MICROPHONE, A OSTEOPHONIC MICROPHONE AND MEANS FOR CALCULATING A CORRECTED SIGNAL, AND ASSOCIATED HEAD EQUIPMENT |
EP2945399A1 (en) | 2014-05-16 | 2015-11-18 | Parrot | Audio headset with active noise control anc with prevention of the effects of saturation of a microphone signal feedback |
FR3021180A1 (en) * | 2014-05-16 | 2015-11-20 | Parrot | AUDIO ACTIVE ANC CONTROL AUDIO HELMET WITH PREVENTION OF THE EFFECTS OF A SATURATION OF THE MICROPHONE SIGNAL "FEEDBACK" |
US9466281B2 (en) | 2014-05-16 | 2016-10-11 | Parrot | ANC noise active control audio headset with prevention of the effects of a saturation of the feedback microphone signal |
EP3163572A1 (en) * | 2015-10-29 | 2017-05-03 | BlackBerry Limited | Method and device for supressing ambient noise in a speech signal generated at a microphone of the device |
EP3171612A1 (en) | 2015-11-19 | 2017-05-24 | Parrot Drones | Audio headphones with active noise control, anti-occlusion control and passive attenuation cancellation, based on the presence or the absence of a vocal activity of the headphone user |
CN110447073B (en) * | 2017-03-20 | 2023-11-03 | 伯斯有限公司 | Audio signal processing for noise reduction |
Also Published As
Publication number | Publication date |
---|---|
FR2974655A1 (en) | 2012-11-02 |
CN102761643A (en) | 2012-10-31 |
JP6017825B2 (en) | 2016-11-02 |
JP2012231468A (en) | 2012-11-22 |
US8751224B2 (en) | 2014-06-10 |
FR2974655B1 (en) | 2013-12-20 |
CN102761643B (en) | 2017-04-12 |
EP2518724B1 (en) | 2013-10-02 |
US20120278070A1 (en) | 2012-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2518724B1 (en) | Microphone/headphone audio headset comprising a means for suppressing noise in a speech signal, in particular for a hands-free telephone system | |
EP2530673B1 (en) | Audio device with suppression of noise in a voice signal using a fractional delay filter | |
EP3171612A1 (en) | Audio headphones with active noise control, anti-occlusion control and passive attenuation cancellation, based on the presence or the absence of a vocal activity of the headphone user | |
CN107533838B (en) | Voice sensing using multiple microphones | |
EP2597889B1 (en) | Headphones with non-adaptive active noise control | |
EP3348047B1 (en) | Audio signal processing | |
EP2930942A1 (en) | Audio headset with active noise control (anc) with electric hiss reduction | |
EP3011758B1 (en) | Headset with end-firing microphone array and automatic calibration of end-firing array | |
JP4631939B2 (en) | Noise reducing voice reproducing apparatus and noise reducing voice reproducing method | |
EP2945399A1 (en) | Audio headset with active noise control anc with prevention of the effects of saturation of a microphone signal feedback | |
FR2595498A1 (en) | METHODS AND DEVICES FOR MITIGATING EXTERNAL NOISE FROM TYMPAN AND ENHANCING THE INTELLIGIBILITY OF ELECTRO-ACOUSTIC COMMUNICATIONS | |
EP0919096A1 (en) | Method for cancelling multichannel acoustic echo and multichannel acoustic echo canceller | |
US20190043518A1 (en) | Capture and extraction of own voice signal | |
EP0818121B1 (en) | Sound pick-up and reproduction system for a headset in a noisy environment | |
WO2004004298A1 (en) | Echo processing devices for single-channel or multichannel communication systems | |
FR2857551A1 (en) | DEVICE FOR CAPTURING OR REPRODUCING AUDIO SIGNALS | |
FR2764469A1 (en) | METHOD AND DEVICE FOR OPTIMIZED PROCESSING OF A DISTURBANCE SIGNAL WHEN TAKING A SOUND | |
WO2017207286A1 (en) | Audio microphone/headset combination comprising multiple means for detecting vocal activity with supervised classifier | |
US11533555B1 (en) | Wearable audio device with enhanced voice pick-up | |
CN115398934A (en) | Method, device, earphone and computer program for actively suppressing occlusion effect when reproducing audio signals | |
FR2566658A1 (en) | Multichannel auditory prosthesis. | |
FR3136308A1 (en) | Noise-canceling headphones | |
FR3109687A1 (en) | Acoustic System | |
FR2921747A1 (en) | Portable audio signal i.e. music, listening device e.g. MPEG-1 audio layer 3 walkman, for e.g. coach, has analyzing and transferring unit transferring external audio signal that informs monitoring of sound event to user, to listening unit | |
FR2921746A1 (en) | Portable musical signal listening device e.g. MPEG-1 audio layer 3 walkman, for e.g. car, has transferring stage transferring external audio signal to musical signal listening unit, and processor applying processing function to audio signal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
17P | Request for examination filed |
Effective date: 20130418 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G10L 21/02 20130101AFI20130606BHEP Ipc: G10L 21/0208 20130101ALI20130606BHEP Ipc: G10L 21/0216 20130101ALI20130606BHEP Ipc: H04R 3/00 20060101ALI20130606BHEP |
|
INTG | Intention to grant announced |
Effective date: 20130621 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP Ref country code: AT Ref legal event code: REF Ref document number: 634970 Country of ref document: AT Kind code of ref document: T Effective date: 20131015 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: FRENCH |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602012000351 Country of ref document: DE Effective date: 20131205 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: T3 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 634970 Country of ref document: AT Kind code of ref document: T Effective date: 20131002 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140102 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140202 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140203 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602012000351 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 |
|
26N | No opposition filed |
Effective date: 20140703 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602012000351 Country of ref document: DE Effective date: 20140703 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140419
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20140419 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20150430
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20150430 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 5 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002
Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140101 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140103
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R081 Ref document number: 602012000351 Country of ref document: DE Owner name: PARROT DRONES, FR Free format text: FORMER OWNER: PARROT, PARIS, FR |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20120419
Ref country code: BE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140430 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 732E Free format text: REGISTERED BETWEEN 20160811 AND 20160817 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: PD Owner name: PARROT DRONES; FR Free format text: DETAILS ASSIGNMENT: CHANGE OF OWNER(S), TRANSFER; FORMER OWNER NAME: PARROT Effective date: 20160804 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: TP Owner name: PARROT DRONES, FR Effective date: 20161010 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 6 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: NL Payment date: 20170424 Year of fee payment: 6 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20170425 Year of fee payment: 6
Ref country code: DE Payment date: 20170425 Year of fee payment: 6
Ref country code: FR Payment date: 20170418 Year of fee payment: 6 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20170420 Year of fee payment: 6 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131002 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 602012000351 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MM Effective date: 20180501 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20180419 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180501
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20181101 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180419 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180419
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180430 |