EP2518724A1 - Combined audio unit consisting of a microphone and headphones, comprising means for noise suppression of a near speech signal, in particular for a hands-free telephone system - Google Patents

Combined audio unit consisting of a microphone and headphones, comprising means for noise suppression of a near speech signal, in particular for a hands-free telephone system

Info

Publication number
EP2518724A1
EP2518724A1 (application EP12164777A)
Authority
EP
European Patent Office
Prior art keywords
signal
speech
headset
physiological sensor
microphone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP12164777A
Other languages
English (en)
French (fr)
Other versions
EP2518724B1 (de
Inventor
Michael Herve
Guillaume Vitte
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Parrot SA
Original Assignee
Parrot SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Parrot SA filed Critical Parrot SA
Publication of EP2518724A1 publication Critical patent/EP2518724A1/de
Application granted granted Critical
Publication of EP2518724B1 publication Critical patent/EP2518724B1/de
Not-in-force legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0316 - Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
    • G10L21/0364 - Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 - Details of transducers, loudspeakers or microphones
    • H04R1/10 - Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1083 - Reduction of ambient noise
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 - Circuits for transducers, loudspeakers or microphones
    • H04R3/005 - Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G10L2021/02085 - Periodic noise
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G10L21/0216 - Noise filtering characterised by the method used for estimating noise
    • G10L2021/02161 - Number of inputs available containing the signal or the noise to be suppressed
    • G10L2021/02165 - Two microphones, one receiving mainly the noise signal and the other one mainly the speech signal
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00 - Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/13 - Hearing devices using bone conduction transducers

Definitions

  • The invention relates to a combined microphone/headset audio device of the headset type.
  • Such a headset can in particular be used for communication functions such as "hands-free" telephony, in addition to listening to an audio source (music, for example) coming from a device to which the headset is connected.
  • One of the difficulties is to ensure sufficient intelligibility of the signal picked up by the microphone, that is to say the speech signal of the near speaker (the headset wearer).
  • The headset may indeed be used in a noisy environment (metro, busy street, train, etc.), so that the microphone picks up not only the speech of the headset wearer but also the surrounding noise.
  • The wearer is protected from this noise by the headset, especially if it is a model with closed earpieces isolating the ear from the outside, and even more so if the headset is provided with active noise control.
  • The distant speaker (the one at the other end of the communication channel), however, does not benefit from this protection: the noise picked up by the microphone overlaps and interferes with the speech signal of the near speaker (the headset wearer).
  • The signal collected by the physiological sensor is usable only in the low frequencies.
  • The noises generally encountered in a typical environment (street, metro, train, etc.).
  • The physiological sensor delivers a signal that is naturally free of noise components (which is not possible with a conventional microphone).
  • EP 0 683 621 A2, for its part, describes more precisely how to integrate the physiological sensor and the external microphone into one and the same ear canal.
  • The signal collected by the physiological sensor is not, strictly speaking, speech, since speech is not formed only of voiced sounds: it contains components that do not originate at the level of the vocal cords, and the frequency content of the sound coming from the throat and emitted through the mouth is, for example, much richer.
  • Internal bone conduction and the passage through the skin have the effect of filtering out certain vocal components, so that the signal delivered by the physiological sensor is exploitable only in the lowest part of the spectrum. That is why this signal is supplemented by another signal, delivered by a conventional microphone sensor, with which it is combined.
  • The general problem of the invention is, in such a context, to deliver to the remote speaker a voice signal representative of the speech emitted by the near speaker, a signal freed from the parasitic components of external noise present in the environment of the near speaker.
  • Another aspect of the invention resides in the ability to efficiently use the signal from the physiological sensor to control various signal processing functions. This signal makes it possible to access new information concerning the content of the speech, which will then be used for the denoising as well as for various auxiliary functions that will be explained below, in particular the calculation of a cutoff frequency of a dynamic filter.
  • The microphone/headset combination comprises low-pass filtering means for the first speech signal before combination by the mixing means, and/or high-pass filtering means for the second speech signal before denoising and combination by the mixing means.
  • These low-pass and/or high-pass filtering means comprise a filter with an adjustable cutoff frequency.
  • The headset comprises means for calculating the cutoff frequency, operating as a function of the signal delivered by the physiological sensor.
  • The means for calculating the cutoff frequency may in particular comprise means for analyzing the spectral content of the signal delivered by the physiological sensor, able to determine the cutoff frequency as a function of the relative levels of the signal-to-noise ratio evaluated in a plurality of distinct frequency bands of the signal delivered by the physiological sensor.
  • The denoising means of the second speech signal are non-frequency-domain noise reduction means; in a particular embodiment of the invention, the microphone assembly comprises two microphones, and these non-frequency-domain noise reduction means comprise a combiner able to apply a delay to the signal delivered by one of the microphones and to subtract this delayed signal from the signal delivered by the other microphone.
  • The two microphones can be aligned in a linear array along a main direction directed towards the mouth of the headset wearer.
  • Denoising means for the third speech signal delivered by the mixing means, including frequency-domain noise reduction means.
  • Means receiving as input the first and the third speech signal, operating an intercorrelation between them, and outputting a speech presence probability signal as a function of the result of the intercorrelation.
  • The denoising means of the third speech signal receive as input this speech presence probability signal in order to selectively: i) apply a noise reduction differentiated according to the frequency bands as a function of the value of the speech presence probability signal, and ii) apply a maximum noise reduction over all the frequency bands in the absence of speech.
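  • As a hedged illustration of the behavior described in the previous item, the short Python sketch below applies a per-band spectral gain driven by a speech presence probability: maximum attenuation where speech is absent, little or no attenuation where speech is likely present. The function name, the 30 dB maximum reduction and the linear gain law are illustrative assumptions, not the patent's exact processing.

```python
import numpy as np

def probability_driven_noise_reduction(noisy_frame, p_speech, max_reduction_db=30.0):
    """Attenuate each frequency bin of `noisy_frame` according to a speech
    presence probability `p_speech` (one value in [0, 1] per rfft bin):
    maximum reduction where p = 0 (no speech), unity gain where p = 1."""
    spectrum = np.fft.rfft(noisy_frame)
    p = np.clip(np.asarray(p_speech, dtype=float), 0.0, 1.0)
    gain_db = -max_reduction_db * (1.0 - p)        # differentiated per-bin gain
    gain = 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spectrum * gain, n=len(noisy_frame))
```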
  • Post-processing means capable of selectively equalizing frequency bands in the part of the spectrum corresponding to the signal collected by the physiological sensor. These means determine an equalization gain for each of the frequency bands, this gain being calculated from the respective frequency coefficients of the signals delivered by the microphone(s) and by the physiological sensor, considered in the frequency domain. They also apply a smoothing of the calculated equalization gain over a plurality of successive signal frames.
  • Reference 10 generally designates the headset according to the invention, which comprises two ear cups 12 joined by a headband.
  • Each ear cup 12 is preferably constituted by a closed shell housing a sound reproduction transducer, applied around the user's ear with the interposition of an insulating cushion 16 isolating the ear from the outside.
  • This headset is provided with a physiological sensor 18 for collecting the vibrations produced by a voiced signal emitted by the headset wearer, which can be picked up at the level of the cheek or the temple.
  • The sensor 18 is preferably an accelerometer integrated into the cushion 16 so as to be applied against the cheek or the temple of the user with the closest possible coupling.
  • The physiological sensor may in particular be placed on the inside of the skin of the cushion so that, once the headset is in place, the physiological sensor is applied against the cheek or the temple of the user under the effect of a slight pressure resulting from the crushing of the cushion material, with only the skin of the cushion interposed.
  • The headset also comprises a microphone array or antenna, for example two omnidirectional microphones 20, 22, placed on the shell of the ear cup 12. These two microphones, front 20 and rear 22, are arranged relative to one another so that their alignment direction 24 points approximately towards the mouth 26 of the headset wearer.
  • Figure 2 is a block diagram showing the different blocks and functions implemented by the method of the invention, as well as their interactions.
  • The method of the invention is implemented by software means, which can be broken down and schematized as a number of blocks 30 to 64 illustrated in Figure 2. These processing operations are implemented in the form of appropriate algorithms executed by a microcontroller or a digital signal processor. Although, for the sake of clarity, these various processing operations are presented in the form of separate blocks, they share common elements and correspond in practice to a plurality of functions globally executed by the same software.
  • Reference 28 also designates the sound reproduction transducer placed inside the shell of the ear cup.
  • These various elements deliver signals that are processed by the block referenced 30, which can be coupled via an interface 32 to the communication (telephone) circuits; it receives at the input E the sound intended to be reproduced by the transducer 28 (speech of the remote speaker during a telephone call, or a music source outside periods of telephone communication), and delivers on the output S a signal representative of the speech of the near speaker, that is to say the headset wearer.
  • the signal to be reproduced applied to the input E is a digital signal converted into analog by the converter 34, then amplified by the amplifier 36 for reproduction by the transducer 28.
  • the signal collected by the physiological sensor 18 is a signal mainly comprising components in the lower region of the sound spectrum (typically 0-1500 Hz). As explained above, this signal is naturally non-noisy.
  • The signals collected by the microphones 20, 22 will be used mainly for the upper part of the spectrum (above 1500 Hz), but these signals are heavily contaminated by noise, and a strong denoising processing will be essential to eliminate the parasitic noise components, whose level may be such, in certain environments, that they completely mask the speech signal picked up by these microphones 20, 22.
  • The first stage of the processing is an echo-cancellation processing, applied to the signals of the physiological sensor and of the microphones.
  • The sound reproduced by the transducer 28 is indeed captured by the physiological sensor 18 and by the microphones 20, 22, generating an echo that disturbs the operation of the system and must be eliminated upstream.
  • This echo-cancellation processing is implemented by the blocks 38, 40 and 42, each of these blocks receiving on a first input the signal delivered by the sensor 18, 20 or 22, respectively, and on a second input the signal reproduced by the transducer 28 (the echo-generating signal), and outputting, for further processing, a signal from which the echo has been removed.
  • The echo-cancellation processing is for example carried out by an adaptive algorithm such as that described in FR 2 792 146 A1 (Parrot SA), to which reference may be made for more details.
  • This is an acoustic echo cancellation (AEC) technique consisting in dynamically defining a compensation filter modeling the acoustic coupling between the transducer 28 and the physiological sensor 18 (or the microphone 20, or the microphone 22, respectively), by a linear transformation between the signal reproduced by the transducer 28 (that is to say the signal E applied at the input of the blocks 38, 40 or 42) and the echo picked up by the physiological sensor 18 (or the microphone 20 or 22).
  • This transformation defines an adaptive filter which is applied to the reproduced incident signal, and the result of this filtering is subtracted from the signal collected by the physiological sensor 18 (or the microphone 20 or 22), which has the effect of canceling the major part of the acoustic echo.
  • This modeling is based on the search for a correlation between the signal reproduced by the transducer 28 and the signal collected by the physiological sensor 18 (or the microphone 20 or 22), that is to say on an estimate of the impulse response of the coupling constituted by the body of the ear cup 12 supporting these various elements.
  • The processing is performed in particular by an adaptive APA (Affine Projection Algorithm) algorithm, which provides fast convergence, well suited to hands-free applications with an intermittent speech rate and a level that can vary rapidly.
  • The iterative algorithm is executed with a variable adaptation step, as described in FR 2 792 146 A1 cited above.
  • The adaptation step μ varies continuously according to the energy levels of the signal picked up by the microphone, before and after filtering. This step is increased when the energy of the picked-up signal is dominated by the energy of the echo and, conversely, reduced when the energy of the picked-up signal is dominated by the energy of the background noise and/or of the speech from the remote speaker.
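  • To make the echo-cancellation stage more concrete, here is a minimal Python sketch of an NLMS adaptive filter (an order-1 member of the APA family cited above) with a crude variable step; the filter length, the step-control rule and the function name are illustrative assumptions and do not reproduce the exact algorithm of FR 2 792 146 A1.

```python
import numpy as np

def nlms_echo_canceller(reference, captured, num_taps=128, mu_max=0.5, eps=1e-8):
    """Subtract an adaptively estimated echo of `reference` (the signal sent
    to the transducer) from `captured` (a sensor or microphone signal)."""
    w = np.zeros(num_taps)            # adaptive filter modelling the acoustic coupling
    x_buf = np.zeros(num_taps)        # delay line of the reproduced signal
    out = np.zeros_like(captured, dtype=float)
    for n in range(len(captured)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = reference[n]
        echo_estimate = w @ x_buf
        e = captured[n] - echo_estimate        # residual after echo subtraction
        # crude variable step: adapt faster when the estimated echo dominates
        # the residual, slower when noise or near speech dominates
        mu = mu_max * echo_estimate**2 / (echo_estimate**2 + e**2 + eps)
        w += (mu / (x_buf @ x_buf + eps)) * e * x_buf
        out[n] = e
    return out
```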
  • the signal collected by the physiological sensor 18 after the anti-echo processing by the block 38 will be used as the input signal of a block 44 for calculating a cutoff frequency FC.
  • The next step consists in filtering the signals, with a low-pass filter 48 for the signal of the physiological sensor 18 and with high-pass filters 50, 52 for the signals collected by the microphones 20, 22, respectively.
  • These filters 48, 50 and 52 are preferably digital filters of the infinite impulse response (IIR, recursive) type, which have a relatively steep transition between the passband and the stopband.
  • These filters are adaptive filters whose cutoff frequency is variable and determined dynamically by the block 44.
  • The cutoff frequency FC, which is preferably the same for the low-pass filter 48 and the high-pass filters 50 and 52, is determined from the signal of the physiological sensor 18 after the echo-cancellation processing 38.
  • an algorithm calculates the signal-to-noise ratio for a plurality of frequency bands in a range between, for example, 0 and 2500 Hz (the noise level being given by a calculation of the energy in a higher frequency band, for example between 3000 and 4000 Hz, because it is known that in this zone the signal can only be noise, because of the properties of the component constituting the physiological sensor 18).
  • the cutoff frequency chosen will correspond to the maximum frequency for which the signal / noise ratio exceeds a predetermined threshold, for example 10 dB.
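  • A minimal Python sketch of this cutoff-frequency computation, assuming FFT band energies, 250 Hz analysis bands, a 3000-4000 Hz noise-reference band and the 10 dB threshold mentioned above; the band width and the function name are illustrative choices.

```python
import numpy as np

def estimate_cutoff_frequency(physio_frame, fs, band_hz=250.0,
                              noise_band=(3000.0, 4000.0), snr_threshold_db=10.0):
    """Return FC: the upper edge of the highest band below 2500 Hz whose SNR
    (relative to the 3000-4000 Hz noise reference) exceeds the threshold."""
    spectrum = np.abs(np.fft.rfft(physio_frame)) ** 2
    freqs = np.fft.rfftfreq(len(physio_frame), d=1.0 / fs)
    noise_mask = (freqs >= noise_band[0]) & (freqs < noise_band[1])
    noise_level = spectrum[noise_mask].mean()     # the sensor carries no speech here
    fc = 0.0
    for lo in np.arange(0.0, 2500.0, band_hz):
        band_mask = (freqs >= lo) & (freqs < lo + band_hz)
        snr_db = 10.0 * np.log10(spectrum[band_mask].mean() / (noise_level + 1e-12))
        if snr_db > snr_threshold_db:
            fc = lo + band_hz
    return fc
```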
  • The following step consists in operating, by means of the block 54, a mixing to reconstruct the complete spectrum with, on the one hand, the lower region of the spectrum given by the filtered signal of the physiological sensor 18 and, on the other hand, the upper part of the spectrum given by the filtered signals of the microphones 20 and 22 after passing through a combiner/phase-shifter 56 operating a denoising in this part of the spectrum.
  • This reconstruction is performed by summing the two signals, which are applied synchronously to the mixing block 54 so as to avoid any deformation.
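  • The complementary filtering and summation performed around block 54 could be sketched as follows in Python; Butterworth IIR filters stand in for the adjustable IIR filters 48, 50 and 52, and the filter order and function name are assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

def mix_low_and_high(physio, mic_denoised, fc, fs, order=4):
    """Low-pass the physiological-sensor signal and high-pass the (denoised)
    microphone signal at the same cutoff FC, then sum them to rebuild the
    complete spectrum (both signals are assumed time-aligned)."""
    b_lo, a_lo = butter(order, fc, btype='low', fs=fs)    # recursive (IIR) low-pass
    b_hi, a_hi = butter(order, fc, btype='high', fs=fs)   # recursive (IIR) high-pass
    low_part = lfilter(b_lo, a_lo, physio)
    high_part = lfilter(b_hi, a_hi, mic_denoised)
    return low_part + high_part
```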
  • The signal to be denoised (that is, the near speaker's signal located in the upper part of the spectrum, typically the frequency components above 1500 Hz) is derived from the two microphones 20, 22 placed a few centimeters from each other on the shell 14 of one of the ear cups of the headset. As indicated, these two microphones are arranged relative to each other so that the direction 24 they define is approximately oriented towards the mouth 26 of the headset wearer. As a result, a speech signal emitted from the mouth will reach the front microphone 20 first and then the rear microphone 22 with a delay, and therefore with a substantially constant phase shift, while the ambient noise will be picked up without phase shift by the two omnidirectional microphones 20 and 22, given the distance of the parasitic noise sources relative to these two microphones.
  • The phase-shifter combiner 56 comprises a phase-shifter 58 applying a delay τ to the signal of the rear microphone 22 and a combiner 60 for subtracting this delayed signal from the signal of the front microphone 20.
  • This forms a first-order differential microphone array equivalent to a single virtual microphone whose directivity can be adjusted as a function of the value of τ, with 0 ≤ τ ≤ τA (τA being the value corresponding to the natural phase difference between the two microphones 20 and 22, equal to the distance between the two microphones divided by the speed of sound, i.e. a delay of about 30 microseconds for a spacing of 1 cm).
  • An appropriate choice of this parameter makes it possible to attenuate surrounding diffuse noise by about 6 dB. For more details on this technique, reference may be made, for example, to the publication by M. Buck and M. Rössler cited in the non-patent citations below.
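  • A short Python sketch of the delay-and-subtract combiner formed by elements 56, 58 and 60; the fractional delay is approximated by linear interpolation, and the parameter names (d_m, tau_scale) are illustrative.

```python
import numpy as np

def first_order_differential(front, rear, fs, d_m=0.01, c=343.0, tau_scale=1.0):
    """Delay the rear-microphone signal by tau (0 <= tau <= d/c, i.e. up to
    about 30 microseconds for a 1 cm spacing) and subtract it from the
    front-microphone signal, yielding a first-order differential virtual
    microphone whose directivity depends on tau."""
    tau = tau_scale * (d_m / c)
    delay_samples = tau * fs                      # generally fractional
    n = np.arange(len(rear), dtype=float)
    rear_delayed = np.interp(n - delay_samples, n, rear, left=0.0, right=0.0)
    return np.asarray(front, dtype=float) - rear_delayed
```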
  • This signal is subjected by the block 62 to a frequency noise reduction.
  • this frequency noise reduction is operated differently in the presence or absence of speech, by evaluating a probability p of absence of speech from the signal collected by the physiological sensor 18.
  • this probability of absence of speech is derived from the information given by the physiological sensor.
  • the signal delivered by this sensor has a very good signal / noise ratio up to the cutoff frequency FC determined by the block 44. But beyond this cutoff frequency the signal / noise ratio is still good, and often better than that of the microphones 20 and 22.
  • The sensor information is exploited by calculating (block 64) the frequency intercorrelation between the combined signal delivered by the mixing block 54 and the unfiltered signal of the physiological sensor, that is, before filtering by the low-pass filter 48.
  • Smix(f) and Sphysio(f) being the (complex) frequency-domain vector representations, for frame n, of the combined signal delivered by the mixing block 54 and of the signal of the physiological sensor 18, respectively.
  • The algorithm searches for the frequencies at which there is only noise (absence of speech): on the spectrogram of the signal delivered by the mixing block 54, certain harmonics are embedded in the noise, whereas they stand out more clearly on the signal of the physiological sensor.
  • The peaks P1, P2, P3, P4, ... of this intercorrelation calculation indicate a strong correlation between the combined signal delivered by the mixing block 54 and the signal of the physiological sensor 18, and the emergence of these correlated frequencies indicates the likely presence of speech at these frequencies.
  • The value coefficient_normalization makes it possible to regulate the distribution of the probabilities according to the intercorrelation value and to obtain values between 0 and 1.
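  • The following Python sketch shows one plausible way of turning the intercorrelation between the mixed signal and the physiological-sensor signal into a per-frequency speech presence probability; it uses a frame-averaged coherence scaled by a normalization coefficient, which only approximates the formula used in the patent.

```python
import numpy as np

def speech_presence_probability(mix_frames, physio_frames, coeff_normalization=1.0):
    """mix_frames, physio_frames: arrays of shape (num_frames, frame_len)
    holding a few successive, synchronized frames. Returns one probability
    per rfft bin, close to 1 where correlated (speech) harmonics emerge."""
    win = np.hanning(mix_frames.shape[1])
    S_mix = np.fft.rfft(mix_frames * win, axis=1)
    S_phy = np.fft.rfft(physio_frames * win, axis=1)
    cross = np.mean(S_mix * np.conj(S_phy), axis=0)          # averaged cross-spectrum
    auto_mix = np.mean(np.abs(S_mix) ** 2, axis=0)
    auto_phy = np.mean(np.abs(S_phy) ** 2, axis=0)
    coherence = np.abs(cross) ** 2 / (auto_mix * auto_phy + 1e-12)
    return np.clip(coeff_normalization * coherence, 0.0, 1.0)
```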
  • the system that has just been described makes it possible to obtain excellent overall performance, typically of the order of 30 to 40 dB of noise reduction on the speech signal of the nearby speaker.
  • This gives the distant speaker (the one with whom the headset wearer is in communication) the impression that his interlocutor (the headset wearer) is in a quiet room.
  • However, the low-frequency content collected at the cheek or temple by the physiological sensor 18 is different from the low-frequency content of the sound emitted by the user's mouth, as it would be captured by a microphone located a few centimeters from the mouth, or indeed by the ear of an interlocutor.
  • The use of the physiological sensor and of the filtering described above certainly makes it possible to obtain a very good signal in terms of signal-to-noise ratio, but this signal may present, for the interlocutor who hears it, a slightly muffled and unnatural tone.
  • the equalization can be performed automatically, from the signal delivered by the microphones 20, 22, before filtering.
  • Figure 4 shows an example, in the frequency domain (i.e. after Fourier transform), of the ACC signal produced by the physiological sensor 18, compared with a MIC microphone signal that would be captured a few centimeters from the mouth.
  • Differentiated gains G1, G2, G3, G4, ... are applied to different frequency bands of the part of the spectrum located in the low frequencies.
  • The algorithm calculates the respective Fourier transforms of the two signals, providing a series of frequency coefficients (expressed in dB) NormPhysioFreq_dB(i) and NormMicFreq_dB(i), corresponding respectively to the norm of the i-th Fourier coefficient of the physiological sensor signal and to the norm of the i-th Fourier coefficient of the microphone signal.
  • DifferenceFreq_dB(i) = NormPhysioFreq_dB(i) - NormMicFreq_dB(i)
  • If the difference is positive, the gain that will be applied will be less than unity (negative in dB); conversely, if the difference is negative, the gain to be applied will be greater than unity (positive in dB).
  • Gain_dB(i) = -DifferenceFreq_dB(i)
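  • A minimal Python sketch of the per-bin equalization gain with smoothing over successive frames, following the DifferenceFreq_dB relation above; the recursive smoothing factor and the simple sign inversion Gain_dB(i) = -DifferenceFreq_dB(i) are assumptions.

```python
import numpy as np

def equalization_gains_db(physio_frame, mic_frame, prev_gains_db=None, alpha=0.9):
    """Per-bin gain in dB: attenuate bins where the physiological sensor
    over-represents the low frequencies relative to the reference microphone
    signal, boost the opposite case, with recursive smoothing across frames."""
    norm_physio_db = 20.0 * np.log10(np.abs(np.fft.rfft(physio_frame)) + 1e-12)
    norm_mic_db = 20.0 * np.log10(np.abs(np.fft.rfft(mic_frame)) + 1e-12)
    difference_db = norm_physio_db - norm_mic_db       # DifferenceFreq_dB(i)
    gains_db = -difference_db                          # positive difference -> attenuation
    if prev_gains_db is not None:                      # smoothing over successive frames
        gains_db = alpha * np.asarray(prev_gains_db) + (1.0 - alpha) * gains_db
    return gains_db
```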

Landscapes

  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Otolaryngology (AREA)
  • General Health & Medical Sciences (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Details Of Audible-Bandwidth Transducers (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)
  • Telephone Set Structure (AREA)
EP12164777.0A 2011-04-26 2012-04-19 Kombinierte Audioeinheit bestehend aus Mikrofon und Kopfhörer, die Mittel zur Geräuschdämpfung eines nahen Wortsignals umfasst, insbesondere für eine telefonische Freisprechanlage Not-in-force EP2518724B1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FR1153572A FR2974655B1 (fr) 2011-04-26 2011-04-26 Combine audio micro/casque comprenant des moyens de debruitage d'un signal de parole proche, notamment pour un systeme de telephonie "mains libres".

Publications (2)

Publication Number Publication Date
EP2518724A1 true EP2518724A1 (de) 2012-10-31
EP2518724B1 EP2518724B1 (de) 2013-10-02

Family

ID=45939241

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12164777.0A Not-in-force EP2518724B1 (de) 2011-04-26 2012-04-19 Kombinierte Audioeinheit bestehend aus Mikrofon und Kopfhörer, die Mittel zur Geräuschdämpfung eines nahen Wortsignals umfasst, insbesondere für eine telefonische Freisprechanlage

Country Status (5)

Country Link
US (1) US8751224B2 (de)
EP (1) EP2518724B1 (de)
JP (1) JP6017825B2 (de)
CN (1) CN102761643B (de)
FR (1) FR2974655B1 (de)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015144708A1 (fr) * 2014-03-25 2015-10-01 Elno Appareil acoustique comprenant au moins un microphone électroacoustique, un microphone ostéophonique et des moyens de calcul d'un signal corrigé, et équipement de tête associé
EP2945399A1 (de) 2014-05-16 2015-11-18 Parrot Audiokopfhörer mit aktiver anc-geräuschkontrolle mit vorbeugung gegen sättigungseffekte eines feedback-mikrophonsignals
EP3163572A1 (de) * 2015-10-29 2017-05-03 BlackBerry Limited Verfahren und vorrichtung zur unterdrückung von umgebungsgeräuschen in einem sprachsignal, das an einem mikrofon der vorrichtung erzeugt wird
EP3171612A1 (de) 2015-11-19 2017-05-24 Parrot Drones Audio-headset mit aktiver geräuschkontrolle, anti-okklusionskontrolle und löschung der passiven schalldämpfung je nach vorliegen oder nicht-vorliegen einer stimmaktivität des headset-benutzers
CN110447073B (zh) * 2017-03-20 2023-11-03 伯斯有限公司 用于降噪的音频信号处理

Families Citing this family (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9247346B2 (en) 2007-12-07 2016-01-26 Northern Illinois Research Foundation Apparatus, system and method for noise cancellation and communication for incubators and related devices
US9135915B1 (en) * 2012-07-26 2015-09-15 Google Inc. Augmenting speech segmentation and recognition using head-mounted vibration and/or motion sensors
US9704486B2 (en) * 2012-12-11 2017-07-11 Amazon Technologies, Inc. Speech recognition power management
CN103208291A (zh) * 2013-03-08 2013-07-17 华南理工大学 一种可用于强噪声环境的语音增强方法及装置
US9560444B2 (en) * 2013-03-13 2017-01-31 Cisco Technology, Inc. Kinetic event detection in microphones
JP6123503B2 (ja) * 2013-06-07 2017-05-10 富士通株式会社 音声補正装置、音声補正プログラム、および、音声補正方法
US9554226B2 (en) 2013-06-28 2017-01-24 Harman International Industries, Inc. Headphone response measurement and equalization
DE102013216133A1 (de) * 2013-08-14 2015-02-19 Sennheiser Electronic Gmbh & Co. Kg Hörer oder Headset
US9180055B2 (en) * 2013-10-25 2015-11-10 Harman International Industries, Incorporated Electronic hearing protector with quadrant sound localization
US20150118960A1 (en) * 2013-10-28 2015-04-30 Aliphcom Wearable communication device
US9036844B1 (en) 2013-11-10 2015-05-19 Avraham Suhami Hearing devices based on the plasticity of the brain
EP2882203A1 (de) 2013-12-06 2015-06-10 Oticon A/s Hörgerätevorrichtung für freihändige Kommunikation
WO2016032523A1 (en) * 2014-08-29 2016-03-03 Harman International Industries, Inc. Auto-calibrating noise canceling headphone
US9942848B2 (en) * 2014-12-05 2018-04-10 Silicon Laboratories Inc. Bi-directional communications in a wearable monitor
CN104486286B (zh) * 2015-01-19 2018-01-05 武汉邮电科学研究院 一种连续子载波ofdma系统的上行帧同步方法
US9905216B2 (en) * 2015-03-13 2018-02-27 Bose Corporation Voice sensing using multiple microphones
US9847093B2 (en) * 2015-06-19 2017-12-19 Samsung Electronics Co., Ltd. Method and apparatus for processing speech signal
US20160379661A1 (en) * 2015-06-26 2016-12-29 Intel IP Corporation Noise reduction for electronic devices
GB2552178A (en) * 2016-07-12 2018-01-17 Samsung Electronics Co Ltd Noise suppressor
CN106211012B (zh) * 2016-07-15 2019-11-29 成都定为电子技术有限公司 一种耳机时频响应的测量与校正系统及其方法
JP6634354B2 (ja) * 2016-07-20 2020-01-22 ホシデン株式会社 緊急通報システム用ハンズフリー通話装置
JP2020502607A (ja) * 2016-09-14 2020-01-23 ソニックセンソリー、インコーポレイテッド 同期化を伴うマルチデバイスオーディオストリーミングシステム
WO2018083511A1 (zh) * 2016-11-03 2018-05-11 北京金锐德路科技有限公司 一种音频播放装置及方法
WO2018199846A1 (en) * 2017-04-23 2018-11-01 Audio Zoom Pte Ltd Transducer apparatus for high speech intelligibility in noisy environments
US10341759B2 (en) * 2017-05-26 2019-07-02 Apple Inc. System and method of wind and noise reduction for a headphone
CN107180627B (zh) * 2017-06-22 2020-10-09 潍坊歌尔微电子有限公司 去除噪声的方法和装置
US10706868B2 (en) * 2017-09-06 2020-07-07 Realwear, Inc. Multi-mode noise cancellation for voice detection
US10764668B2 (en) 2017-09-07 2020-09-01 Lightspeed Aviation, Inc. Sensor mount and circumaural headset or headphones with adjustable sensor
US10701470B2 (en) 2017-09-07 2020-06-30 Light Speed Aviation, Inc. Circumaural headset or headphones with adjustable biometric sensor
CN109729463A (zh) * 2017-10-27 2019-05-07 北京金锐德路科技有限公司 用于脖戴式语音交互耳机的声麦骨麦复合收音装置
JP7194912B2 (ja) * 2017-10-30 2022-12-23 パナソニックIpマネジメント株式会社 ヘッドセット
CN107886967B (zh) * 2017-11-18 2018-11-13 中国人民解放军陆军工程大学 一种深度双向门递归神经网络的骨导语音增强方法
US10438605B1 (en) * 2018-03-19 2019-10-08 Bose Corporation Echo control in binaural adaptive noise cancellation systems in headsets
CN110931027B (zh) * 2018-09-18 2024-09-27 北京三星通信技术研究有限公司 音频处理方法、装置、电子设备及计算机可读存储介质
CN109413539A (zh) * 2018-12-25 2019-03-01 珠海蓝宝石声学设备有限公司 一种耳机及其调节装置
EP3737115A1 (de) * 2019-05-06 2020-11-11 GN Hearing A/S Hörgerät mit knochenleitungssensor
CN110265056B (zh) * 2019-06-11 2021-09-17 安克创新科技股份有限公司 音源的控制方法以及扬声设备、装置
CN110121129B (zh) * 2019-06-20 2021-04-20 歌尔股份有限公司 耳机的麦克风阵列降噪方法、装置、耳机及tws耳机
CN114424581A (zh) 2019-09-12 2022-04-29 深圳市韶音科技有限公司 用于音频信号生成的系统和方法
EP4044181A4 (de) * 2019-10-09 2023-10-18 Elevoc Technology Co., Ltd. Auf tiefenlernen basierendes rauschunterdrückungsverfahren unter verwendung von knochenleitungssensor- und mikrofonsignalen
TWI735986B (zh) * 2019-10-24 2021-08-11 瑞昱半導體股份有限公司 收音裝置及方法
CN113038318B (zh) 2019-12-25 2022-06-07 荣耀终端有限公司 一种语音信号处理方法及装置
TWI745845B (zh) * 2020-01-31 2021-11-11 美律實業股份有限公司 耳機及耳機組
KR20220017080A (ko) * 2020-08-04 2022-02-11 삼성전자주식회사 음성 신호를 처리하는 방법 및 이를 이용한 장치
CN111935573B (zh) * 2020-08-11 2022-06-14 Oppo广东移动通信有限公司 音频增强方法、装置、存储介质及可穿戴设备
CN114339569B (zh) * 2020-08-29 2023-05-26 深圳市韶音科技有限公司 一种获取振动传递函数的方法和系统
CN116349252A (zh) * 2020-09-15 2023-06-27 杜比实验室特许公司 用于处理双耳录音的方法和设备
US11259119B1 (en) 2020-10-06 2022-02-22 Qualcomm Incorporated Active self-voice naturalization using a bone conduction sensor
US11337000B1 (en) * 2020-10-23 2022-05-17 Knowles Electronics, Llc Wearable audio device having improved output
JP7467317B2 (ja) * 2020-11-12 2024-04-15 株式会社東芝 音響検査装置及び音響検査方法
JP2023552364A (ja) * 2020-12-31 2023-12-15 深▲セン▼市韶音科技有限公司 オーディオ生成の方法およびシステム
WO2022198234A1 (en) * 2021-03-18 2022-09-22 Magic Leap, Inc. Method and apparatus for improved speaker identification and speech enhancement
US11943601B2 (en) 2021-08-13 2024-03-26 Meta Platforms Technologies, Llc Audio beam steering, tracking and audio effects for AR/VR applications
US12041427B2 (en) * 2021-08-13 2024-07-16 Meta Platforms Technologies, Llc Contact and acoustic microphones for voice wake and voice processing for AR/VR applications
US12052538B2 (en) * 2021-09-16 2024-07-30 Bitwave Pte Ltd. Voice communication in hostile noisy environment
US20230253002A1 (en) * 2022-02-08 2023-08-10 Analog Devices International Unlimited Company Audio signal processing method and system for noise mitigation of a voice signal measured by air and bone conduction sensors
CN114724574B (zh) * 2022-02-21 2024-07-05 大连理工大学 一种期望声源方向可调的双麦克风降噪方法
CN114333883B (zh) * 2022-03-12 2022-05-31 广州思正电子股份有限公司 一种头戴式智能语音识别装置
US11978468B2 (en) * 2022-04-06 2024-05-07 Analog Devices International Unlimited Company Audio signal processing method and system for noise mitigation of a voice signal measured by a bone conduction sensor, a feedback sensor and a feedforward sensor

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0683621A2 (de) 1994-05-18 1995-11-22 Nippon Telegraph And Telephone Corporation Sender-Empfänger mit einem akustischen Wandler vom Ohrpassstück-Typ
JPH08214391A (ja) * 1995-02-03 1996-08-20 Iwatsu Electric Co Ltd 骨導気導複合型イヤーマイクロホン装置
WO2000021194A1 (en) * 1998-10-08 2000-04-13 Resound Corporation Dual-sensor voice transmission system
JP2000261534A (ja) 1999-03-10 2000-09-22 Nippon Telegr & Teleph Corp <Ntt> 送受話器
FR2792146A1 (fr) 1999-04-07 2000-10-13 Parrot Sa Procede de suppression de l'echo acoustique d'un signal audio, notamment dans le signal capte par un microphone
US20070088544A1 (en) * 2005-10-14 2007-04-19 Microsoft Corporation Calibration based beamforming, non-linear adaptive filtering, and multi-sensor headset
WO2007099222A1 (fr) 2006-03-01 2007-09-07 Parrot Procede de debruitage d'un signal audio

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5394918A (en) * 1977-01-28 1978-08-19 Masahisa Ikegami Combtned mtcrophone
JPH08223677A (ja) * 1995-02-15 1996-08-30 Nippon Telegr & Teleph Corp <Ntt> 送話器
JPH11265199A (ja) * 1998-03-18 1999-09-28 Nippon Telegr & Teleph Corp <Ntt> 送話器
JP2002125298A (ja) * 2000-10-13 2002-04-26 Yamaha Corp マイク装置およびイヤホンマイク装置
JP2003264883A (ja) * 2002-03-08 2003-09-19 Denso Corp 音声処理装置および音声処理方法
JP4348706B2 (ja) * 2002-10-08 2009-10-21 日本電気株式会社 アレイ装置および携帯端末
CN1701528A (zh) * 2003-07-17 2005-11-23 松下电器产业株式会社 通话装置
US7383181B2 (en) * 2003-07-29 2008-06-03 Microsoft Corporation Multi-sensory speech detection system
US7492889B2 (en) * 2004-04-23 2009-02-17 Acoustic Technologies, Inc. Noise suppression based on bark band wiener filtering and modified doblinger noise estimate
US7930178B2 (en) * 2005-12-23 2011-04-19 Microsoft Corporation Speech modeling and enhancement based on magnitude-normalized spectra
JP2007264132A (ja) * 2006-03-27 2007-10-11 Toshiba Corp 音声検出装置及びその方法
US8675884B2 (en) * 2008-05-22 2014-03-18 DSP Group Method and a system for processing signals
JP5499633B2 (ja) * 2009-10-28 2014-05-21 ソニー株式会社 再生装置、ヘッドホン及び再生方法
FR2976111B1 (fr) * 2011-06-01 2013-07-05 Parrot Equipement audio comprenant des moyens de debruitage d'un signal de parole par filtrage a delai fractionnaire, notamment pour un systeme de telephonie "mains libres"
US9020168B2 (en) * 2011-08-30 2015-04-28 Nokia Corporation Apparatus and method for audio delivery with different sound conduction transducers

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0683621A2 (de) 1994-05-18 1995-11-22 Nippon Telegraph And Telephone Corporation Sender-Empfänger mit einem akustischen Wandler vom Ohrpassstück-Typ
JPH08214391A (ja) * 1995-02-03 1996-08-20 Iwatsu Electric Co Ltd 骨導気導複合型イヤーマイクロホン装置
WO2000021194A1 (en) * 1998-10-08 2000-04-13 Resound Corporation Dual-sensor voice transmission system
JP2000261534A (ja) 1999-03-10 2000-09-22 Nippon Telegr & Teleph Corp <Ntt> 送受話器
FR2792146A1 (fr) 1999-04-07 2000-10-13 Parrot Sa Procede de suppression de l'echo acoustique d'un signal audio, notamment dans le signal capte par un microphone
US20070088544A1 (en) * 2005-10-14 2007-04-19 Microsoft Corporation Calibration based beamforming, non-linear adaptive filtering, and multi-sensor headset
WO2007099222A1 (fr) 2006-03-01 2007-09-07 Parrot Procede de debruitage d'un signal audio

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
M. BUCK, M. RÖSSLER: "First Order Differential Microphones Arrays for Automotive Applications", PROCEEDINGS OF THE 7TH INTERNATIONAL WORKSHOP ON ACOUSTIC ECHO AND NOISE CONTROL (IWAENC), 10 September 2001 (2001-09-10), XP002680249 *
M. BUCK; M. RÖSSLER: "First Order Differential Microphones Arrays for Automotive Applications", PROCEEDINGS OF THE 7TH INTERNATIONAL WORKSHOP ON ACOUSTIC ECHO AND NOISE CONTROL (IWAENC), 10 September 2001 (2001-09-10)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015144708A1 (fr) * 2014-03-25 2015-10-01 Elno Appareil acoustique comprenant au moins un microphone électroacoustique, un microphone ostéophonique et des moyens de calcul d'un signal corrigé, et équipement de tête associé
FR3019422A1 (fr) * 2014-03-25 2015-10-02 Elno Appareil acoustique comprenant au moins un microphone electroacoustique, un microphone osteophonique et des moyens de calcul d'un signal corrige, et equipement de tete associe
EP2945399A1 (de) 2014-05-16 2015-11-18 Parrot Audiokopfhörer mit aktiver anc-geräuschkontrolle mit vorbeugung gegen sättigungseffekte eines feedback-mikrophonsignals
FR3021180A1 (fr) * 2014-05-16 2015-11-20 Parrot Casque audio a controle actif de bruit anc avec prevention des effets d'une saturation du signal microphonique "feedback"
US9466281B2 (en) 2014-05-16 2016-10-11 Parrot ANC noise active control audio headset with prevention of the effects of a saturation of the feedback microphone signal
EP3163572A1 (de) * 2015-10-29 2017-05-03 BlackBerry Limited Verfahren und vorrichtung zur unterdrückung von umgebungsgeräuschen in einem sprachsignal, das an einem mikrofon der vorrichtung erzeugt wird
EP3171612A1 (de) 2015-11-19 2017-05-24 Parrot Drones Audio-headset mit aktiver geräuschkontrolle, anti-okklusionskontrolle und löschung der passiven schalldämpfung je nach vorliegen oder nicht-vorliegen einer stimmaktivität des headset-benutzers
CN110447073B (zh) * 2017-03-20 2023-11-03 伯斯有限公司 用于降噪的音频信号处理

Also Published As

Publication number Publication date
CN102761643A (zh) 2012-10-31
CN102761643B (zh) 2017-04-12
US8751224B2 (en) 2014-06-10
JP6017825B2 (ja) 2016-11-02
FR2974655A1 (fr) 2012-11-02
FR2974655B1 (fr) 2013-12-20
US20120278070A1 (en) 2012-11-01
EP2518724B1 (de) 2013-10-02
JP2012231468A (ja) 2012-11-22

Similar Documents

Publication Publication Date Title
EP2518724B1 (de) Kombinierte Audioeinheit bestehend aus Mikrofon und Kopfhörer, die Mittel zur Geräuschdämpfung eines nahen Wortsignals umfasst, insbesondere für eine telefonische Freisprechanlage
EP2530673B1 (de) Audiogerät mit Rauschunterdrückung in einem Sprachsignal unter Verwendung von einem Filter mit fraktionaler Verzögerung
EP3171612A1 (de) Audio-headset mit aktiver geräuschkontrolle, anti-okklusionskontrolle und löschung der passiven schalldämpfung je nach vorliegen oder nicht-vorliegen einer stimmaktivität des headset-benutzers
EP2945399B1 (de) Audiokopfhörer mit aktiver anc-geräuschkontrolle mit vorbeugung gegen sättigungseffekte eines feedback-mikrophonsignals
CN107533838B (zh) 使用多个麦克风的语音感测
EP2597889B1 (de) Kopfhörer mit nicht-adaptives aktiven geräuschkontrolle
EP3348047B1 (de) Tonsignalverarbeitung
EP2930942A1 (de) Audiokopfhörer mit aktiver anc-geräuschkontrolle mit reduzierung des elektrischen rauschens
EP3011758B1 (de) Kopfhörer mit längsstrahlermikrofongruppe und automatischer kalibrierung der längsstrahleranordnung
JP4631939B2 (ja) ノイズ低減音声再生装置およびノイズ低減音声再生方法
FR2595498A1 (fr) Procedes et dispositifs pour attenuer les bruits d&#39;origine externe parvenant au tympan et ameliorer l&#39;intelligibilite des communications electro-acoustiques
EP0919096A1 (de) Verfahren und einrichtung zur mehrkanaligen kompensation eines akustischen echos
EP0998166A1 (de) Anordnung zur Verarbeitung von Audiosignalen, Empfänger und Verfahren zum Filtern und Wiedergabe eines Nutzsignals in Gegenwart von Umgebungsgeräusche
EP0818121B1 (de) System zur ton- und höraufnahme für helm in geräuschvoller umgebung
WO2004004298A1 (fr) Dispositifs de traitement d&#39;echo pour systemes de communication de type monovoie ou multivoies
FR2857551A1 (fr) Dispositif pour capter ou reproduire des signaux audio
FR2764469A1 (fr) Procede et dispositif de traitement optimise d&#39;un signal perturbateur lors d&#39;une prise de son
WO2017207286A1 (fr) Combine audio micro/casque comprenant des moyens de detection d&#39;activite vocale multiples a classifieur supervise
US11533555B1 (en) Wearable audio device with enhanced voice pick-up
CN115398934A (zh) 再现音频信号时主动抑制闭塞效应的方法、装置、耳机及计算机程序
FR2566658A1 (fr) Prothese auditive multivoie
CN118870277A (zh) 具有主动降噪的助听方法、头戴式设备和计算机程序产品
FR3136308A1 (fr) Casque audio à réducteur de bruit
FR3109687A1 (fr) Système Acoustique

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

17P Request for examination filed

Effective date: 20130418

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 21/02 20130101AFI20130606BHEP

Ipc: G10L 21/0208 20130101ALI20130606BHEP

Ipc: G10L 21/0216 20130101ALI20130606BHEP

Ipc: H04R 3/00 20060101ALI20130606BHEP

INTG Intention to grant announced

Effective date: 20130621

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 634970

Country of ref document: AT

Kind code of ref document: T

Effective date: 20131015

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602012000351

Country of ref document: DE

Effective date: 20131205

REG Reference to a national code

Ref country code: NL

Ref legal event code: T3

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 634970

Country of ref document: AT

Kind code of ref document: T

Effective date: 20131002

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140102

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140202

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140203

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602012000351

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

26N No opposition filed

Effective date: 20140703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602012000351

Country of ref document: DE

Effective date: 20140703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140419

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140419

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150430

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140103

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

REG Reference to a national code

Ref country code: DE

Ref legal event code: R081

Ref document number: 602012000351

Country of ref document: DE

Owner name: PARROT DRONES, FR

Free format text: FORMER OWNER: PARROT, PARIS, FR

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20120419

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140430

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20160811 AND 20160817

REG Reference to a national code

Ref country code: NL

Ref legal event code: PD

Owner name: PARROT DRONES; FR

Free format text: DETAILS ASSIGNMENT: VERANDERING VAN EIGENAAR(S), OVERDRACHT; FORMER OWNER NAME: PARROT

Effective date: 20160804

REG Reference to a national code

Ref country code: FR

Ref legal event code: TP

Owner name: PARROT DRONES, FR

Effective date: 20161010

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 6

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20170424

Year of fee payment: 6

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20170425

Year of fee payment: 6

Ref country code: DE

Payment date: 20170425

Year of fee payment: 6

Ref country code: FR

Payment date: 20170418

Year of fee payment: 6

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20170420

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131002

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602012000351

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MM

Effective date: 20180501

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20180419

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180501

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180419

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180419

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180430