EP2603018B1 - Hearing device with speaker activity detection and method for operating a hearing device - Google Patents

Hearing device with speaker activity detection and method for operating a hearing device

Info

Publication number
EP2603018B1
Authority
EP
European Patent Office
Prior art keywords
basis
voice activity
wearer
analysis devices
activity data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Not-in-force
Application number
EP12191191.1A
Other languages
German (de)
English (en)
Other versions
EP2603018A1 (fr)
Inventor
Dr. Marko Lugger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sivantos Pte Ltd
Original Assignee
Sivantos Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sivantos Pte Ltd
Publication of EP2603018A1
Application granted
Publication of EP2603018B1
Legal status: Not-in-force
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40Arrangements for obtaining a desired directivity characteristic
    • H04R25/407Circuits for combining signals of a plurality of transducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505Customised settings for obtaining desired overall acoustical characteristics using digital signal processing

Definitions

  • The invention relates to a hearing device which is designed to automatically detect whether the wearer of the hearing device is currently speaking or not.
  • The invention also includes a method for operating a hearing device by means of which it can likewise be automatically detected whether the wearer of the hearing device is speaking himself.
  • A hearing device is understood here to mean any sound-emitting device that can be worn in or on the ear, in particular a hearing aid, a headset or headphones.
  • Hearing aids are portable hearing devices that are used to assist the hearing-impaired.
  • Different designs of hearing aids exist, such as behind-the-ear hearing aids (BTE), hearing aids with an external receiver (RIC: receiver in the canal) and in-the-ear hearing aids (ITE), e.g. concha hearing aids or canal hearing aids (ITE, CIC).
  • The hearing aids listed by way of example are worn on the outer ear or in the ear canal.
  • In addition, bone conduction hearing aids and implantable or vibrotactile hearing aids are available on the market. In these, the damaged hearing is stimulated either mechanically or electrically.
  • Hearing aids have, in principle, an input transducer, an amplifier and an output transducer as their essential components.
  • The input transducer is usually a sound receiver, e.g. a microphone, and/or an electromagnetic receiver, e.g. an induction coil.
  • The output transducer is usually realized as an electroacoustic transducer, e.g. a miniature loudspeaker, or as an electromechanical transducer, e.g. a bone conduction receiver.
  • The amplifier is usually integrated in a signal processing unit. This basic structure is shown in FIG. 1 using the example of a behind-the-ear hearing aid. One or more microphones 2 for picking up the sound from the environment are installed in a hearing aid housing 1 worn behind the ear.
  • A signal processing unit 3, which is also integrated in the hearing aid housing 1, processes and amplifies the microphone signals.
  • The output signal of the signal processing unit 3 is transmitted to a loudspeaker or earpiece 4, which outputs an acoustic signal.
  • The sound is optionally transmitted to the eardrum of the device wearer via a sound tube which is fixed in the ear canal with an earmold.
  • The power supply of the hearing device, and in particular of the signal processing unit 3, is provided by a battery 5 which is likewise integrated into the hearing aid housing 1.
  • The gain of the different frequency bands, which depends on the hearing ability of the wearer of the hearing device, can usually always remain the same, that is, independent of the changing speakers.
  • A beamformer merely has to be able to switch quickly enough between the directions from which the voices of the speakers alternately arrive.
  • The situation is different when the wearer of the hearing device is speaking himself.
  • The wearer always perceives his own voice differently from the voices of the people around him, for example because of bone-conducted sound transmission. If the wearer's own voice is picked up by a microphone as airborne sound and processed by the hearing device in the same way as the voices of other speakers, the wearer of the hearing device perceives his own voice as alienated. In the case of beamforming, it is moreover unclear where the main lobe of the beamformer should actually point during voice activity of the wearer of the hearing device.
  • A signal processing device for a hearing device is known which is designed to recognize own-voice activity on the basis of the microphone signals of two microphones.
  • The detection is performed on the basis of the specific characteristics of the sound field that the hearing aid wearer's own voice causes, for example due to near-field effects, as well as on the basis of the symmetry of the microphone signals.
  • The absolute level of the signals as well as the spectral envelope of the signal spectra can be analyzed in parallel processing blocks.
  • The three analysis blocks each provide a binary signal which indicates whether or not the respective analysis block has recognized own-voice activity.
  • A combination block following the analysis blocks links these signals into an overall decision by means of an AND operation.
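To make the prior-art combination step concrete, here is a minimal sketch in Python (the language and all names are illustrative choices, not taken from the patent): the binary outputs of the three analysis blocks are linked by a logical AND, so own-voice activity is only reported when all blocks agree.

```python
def and_fusion(level_active: bool, envelope_active: bool, symmetry_active: bool) -> bool:
    """Prior-art style combination: own-voice activity is reported only
    if every analysis block has detected it independently."""
    return level_active and envelope_active and symmetry_active

# Two blocks detect own-voice activity, one does not -> overall decision is False.
print(and_fusion(True, True, False))
```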
  • A programmable communication device is also known which, upon detection of own-voice activity, switches its signal processing in accordance with the specifications of a user of the communication device, so as to present the user's own voice to him as naturally as possible.
  • Parameters are extracted from microphone signals and then compared with previously learned parameters, the learned parameters having been determined on the basis of the user's own voice.
  • Preferred parameters are firstly the level of a low-frequency channel and secondly the level of a high-frequency channel, both levels being combined in order to decide whether the signal in the two channels is the user's own voice or not.
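A minimal sketch of such a two-channel level comparison, assuming the levels are given in dB and that the learned own-voice levels and the tolerance are illustrative values of my choosing:

```python
def matches_own_voice(level_low_db: float, level_high_db: float,
                      learned_low_db: float, learned_high_db: float,
                      tolerance_db: float = 3.0) -> bool:
    """Combine the levels of a low- and a high-frequency channel:
    both must lie close to the levels previously learned from the
    user's own voice for an own-voice decision."""
    return (abs(level_low_db - learned_low_db) <= tolerance_db and
            abs(level_high_db - learned_high_db) <= tolerance_db)
```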
  • An object of the present invention is to provide reliable own-voice recognition for a hearing device.
  • The hearing device according to the invention and the method according to the invention do not depend on a comparison of two independently detected audio signals. Instead, reliable and robust own-voice recognition is achieved by examining the audio signals received by the hearing device with more than one type of analysis as to whether they indicate own-voice activity. The different analysis results are then combined in a second step in order to derive, from the merged information, a reliable statement as to whether the wearer of the hearing device is currently speaking or not.
  • This fusion of different sources of information significantly reduces the risk of incorrect own-voice detection, since false detection results, which can arise from any single analysis, are compensated by the results of other analyses that may be more appropriate for the particular situation.
  • For this purpose, the hearing device has at least two independent analysis devices, each of which is designed to obtain data, referred to herein as voice activity data, on the basis of an audio signal received by the hearing device; these data depend on a speech activity of the wearer of the hearing device.
  • An audio signal is to be understood here as an electrical or digital signal which has signal components in the audio frequency range.
  • Each of the analysis devices can be supplied with an audio signal from a different signal source. However, one and the same audio signal can also be supplied to a plurality of analysis devices. Examples of sources of an audio signal are a microphone, a beamformer or a structure-borne sound sensor.
  • The analysis devices extract the voice activity data on the basis of different analysis criteria, for example as a function of the direction of incidence of an ambient sound, as a function of spectral values of a frequency spectrum of the audio signal, on the basis of a speaker-independent voice activity detection, or as a function of binaural information such as can be obtained when audio data are recorded on different sides of the wearer's head.
  • In order to be able to derive a reliable statement from the voice activity data of the individual analysis devices as to whether the wearer is currently speaking, the hearing device according to the invention has a fusion device which is designed to receive the voice activity data from the analysis devices and to perform the own-voice recognition on the basis of these data. It can be sufficient here for the fusion device to be designed to recognize whether the voice of the wearer is active or not. The identity of the wearer only needs to be recognized in a few cases, e.g. when spectral features are used.
  • The hearing device according to the invention can be produced particularly inexpensively if only that microphone device is used by means of which the ambient sound striking the wearer is also converted into the useful signal that is to be presented to the wearer of the hearing device in processed form.
  • A microphone device here does not necessarily mean a single microphone.
  • A microphone array or another arrangement of multiple microphones may also be used.
  • A particularly expedient further development of the hearing device according to the invention has an adaptation device which is designed to change an operation of the hearing device when the wearer speaks.
  • For example, a transmission behavior of the hearing device is adapted in order to convey to the wearer of the hearing device a neutral sound impression of his own voice. It has proven particularly useful to attenuate a low-frequency component of the useful signal in order to avoid the distorted perception of one's own voice known as the occlusion effect.
  • If the hearing device has an adaptive beamformer, its directional behavior is expediently adapted as well.
  • The invention also provides a method for operating a hearing device.
  • In the method, voice activity data, i.e. data which depend on a speech activity of a wearer of the hearing device, are obtained independently of each other by means of at least two analysis devices.
  • The voice activity data of the analysis devices are combined by means of a fusion device. On the basis of the combined voice activity data, it is then checked whether the wearer is speaking or not.
  • The analysis of the audio signal by the individual analysis devices and the speech activity detection by the fusion device can be done in many different ways.
  • The method according to the invention advantageously makes it possible to freely combine the most varied analysis methods and to merge them into a reliable and robust overall statement about the speech activity.
  • Feature extraction is performed by at least one of the analysis devices.
  • In this case, feature values are determined as a function of the audio signal, such as the direction of incidence of a sound which has caused the audio signal, or a reverberation of the audio signal.
  • The features may also be a specific representation of individual segments of the audio signal, such as spectral or cepstral coefficients or linear prediction coefficients (LPC).
  • It may also be expedient for an analysis device to make a provisional statement as to whether the wearer of the hearing device is currently speaking. This can happen in the form of a probability value (a value between zero and one). It can, however, also already happen as a so-called hard or binary decision (speaks or does not speak).
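The following minimal sketch illustrates the two output forms (all names and the logistic mapping are illustrative assumptions, not taken from the patent): a soft, probability-like value derived from an analysis feature, and its binarization into a hard decision.

```python
import math

def soft_decision(feature: float, scale: float = 1.0) -> float:
    """Map an analysis feature to a probability-like value in [0, 1]
    (a non-strict, 'soft' statement about own-voice activity)."""
    return 1.0 / (1.0 + math.exp(-feature / scale))

def hard_decision(probability: float, threshold: float = 0.5) -> bool:
    """Binarize the soft value into 'speaks' (True) / 'does not speak' (False)."""
    return probability >= threshold
```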
  • The latter can be achieved by an analysis device which acts as a classifier and checks, on the basis of a classification criterion, whether the wearer is speaking or not.
  • Suitable classification criteria are known from the prior art, for example in connection with so-called speaker-independent voice activity detection (VAD).
  • A weighting of the individual voice activity data is carried out by the fusion device. This weighting depends on which analysis device the respective voice activity data came from.
  • The weighting advantageously achieves that, depending on the current situation, an analysis device which is known to deliver only unreliable data in that situation obtains less influence on the decision result than an analysis device known to function reliably in the situation.
  • Either trainable or non-trainable embodiments can be realized for these weightings.
  • The invention as defined in claims 1 and 4 refers only to the trainable embodiment.
  • The weighted voice activity data can finally be combined, resulting in the information fusion already described.
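A minimal sketch of such an input-side weighted combination (the function name and the normalized weighted average are my illustrative choices):

```python
from typing import Sequence

def weighted_fusion(values: Sequence[float], weights: Sequence[float]) -> float:
    """Scale each analyzer's voice activity value by a weight that
    reflects how reliable that analyzer is, then combine the weighted
    values into a single fused value."""
    assert len(values) == len(weights) and sum(weights) > 0
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```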
  • Voice activity data from different analysis devices can be combined particularly easily if the voice activity data already provide a preliminary decision about the speaker activity. Then, for example, a majority decision can be made by the fusion device, which states whether speaker activity is being indicated by the analysis devices jointly.
  • For this purpose, voice activity detectors, e.g. configured with different parameters, may be provided in at least two analysis devices.
  • A hearing device 10 is shown which detects a sound 12 from the environment of a wearer of the hearing device.
  • The audio signal of the sound 12 is processed by the hearing device 10 and reproduced as an output sound signal 14 in an ear canal 16 of the wearer of the device.
  • The hearing device 10 can be, for example, a hearing aid, such as a behind-the-ear hearing aid or an in-the-ear hearing aid.
  • The hearing device 10 detects the ambient sound 12 by means of a microphone device 18, on which the ambient sound 12 from the environment is incident and which converts the audio signal of the sound 12 into a digital useful signal.
  • The useful signal is processed by a processing device 20 of the hearing device 10 and then emitted in processed form into the ear canal 16 as the output sound 14 by a receiver 22 of the hearing device 10.
  • The microphone device 18 may include one or more microphones.
  • In the example, a microphone device 18 with three microphones 24, 26, 28 is shown.
  • The microphones 24 to 28 may form a microphone array; however, they can also be mounted independently of one another, for example on opposite sides of the head of the wearer of the hearing device.
  • The processing device 20 may be, for example, a digital signal processor. However, the processing device 20 can also be realized by separate or integrated circuits.
  • The receiver 22 may be, for example, a headphone, an RIC receiver (receiver in the canal) or an external hearing aid receiver whose sound is conducted into the ear canal 16 via a sound tube.
  • The useful signal is processed by a signal processing unit 30 in such a way that the device wearer perceives an output sound signal 14 adapted to his hearing ability.
  • When the wearer's own voice is detected, the signal processing 30 is switched to a mode by which a neutral sound impression of his own voice is conveyed to the wearer if he also perceives it via the hearing device 10.
  • The measures to be performed by the signal processing 30 for this purpose are known per se from the prior art.
  • The processing device 20 carries out the method explained in more detail below.
  • The method makes it possible to reliably detect, on the basis of the ambient sound 12, whether or not the ambient sound 12 is the own voice of the wearer of the hearing device 10.
  • The method does not rely on acoustic characteristics of a single source of information. A signal from such a single source would be subject to too great a variance, so that a reliable statement about the speaker activity could only be achieved by smoothing the signal over a long period of time.
  • In that case, the processing device 20 could not respond to rapid changes between the voice of the wearer of the hearing device 10 on the one hand and the voice of another person on the other. In other acoustic scenarios, in which the ambient sound 12 contains both the wearer's voice and ambient noise in varying proportions, no reliable decision could be made at all on the basis of a single source of acoustic features.
  • For this reason, a plurality of analysis devices 32, 34, 36, 38 are provided in the processing device 20, which represent independent information sources relating to the speaker activity of the wearer of the hearing device.
  • The four analysis devices 32 to 38 shown here represent only one exemplary configuration of a processing device.
  • The analysis devices 32 to 38 may be realized, for example, by one or more analysis programs for a digital signal processor.
  • Depending on the useful signal of the microphone device 18, the analysis devices 32 to 38 generate output signals whose data relate to the voice activity of the hearing device wearer, i.e. voice activity data 40 to 46.
  • The voice activity data 40 to 46 are merged (FUS - fusion) by a fusion device 48, that is, they are combined into a single signal indicating whether the wearer's voice is active (OVA - own voice active) or not (OVNA - own voice not active).
  • The output signal of the fusion device 48 forms a control signal for the signal processing 30, by which the signal processing 30 is hard-switched or soft-faded between the two modes described.
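A minimal sketch of such a soft fade (the step size and the linear ramp are assumptions of mine; the patent does not specify them): instead of switching modes hard, a mixing factor is ramped towards 0 (normal mode) or 1 (own-voice mode) and used to blend the two processed signals.

```python
def update_mix(own_voice_active: bool, mix: float, step: float = 0.05) -> float:
    """Ramp the mode-mixing factor towards 1 if the wearer's own voice
    is active, towards 0 otherwise (soft fade instead of hard switch)."""
    target = 1.0 if own_voice_active else 0.0
    if mix < target:
        return min(mix + step, target)
    return max(mix - step, target)

def blend(normal_sample: float, own_voice_sample: float, mix: float) -> float:
    """Cross-fade between the outputs of the two processing modes."""
    return (1.0 - mix) * normal_sample + mix * own_voice_sample
```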
  • The person skilled in the art can easily find suitable analysis criteria on the basis of simple experiments with a specific model of a hearing device in order to distinguish between an ambient sound 12 that is generated by the voice of the wearer of the hearing device 10 himself and an ambient sound 12 that originates from sound sources in the environment of the wearer.
  • In the following, exemplary possible embodiments of the analysis devices 32 to 38 are described which have proved particularly expedient.
  • An evaluation of spatial information can be carried out, as obtained in a manner known per se on the basis of a plurality of microphone channels (MC - multi channel). In this way, for example, a direction of incidence 50 can be determined from which the ambient sound 12 strikes the microphone device 18 or at least some of its microphones 24 to 28.
  • Furthermore, a spectral evaluation can take place on the basis of a single microphone channel (SC - single channel).
  • Such analyses are also known per se from the prior art and are based, for example, on the evaluation of a signal power in individual spectral bands of the audio signal.
  • One possible use of spectral information is speaker verification.
  • By such a speaker verification, a "one out of N" speaker recognition is performed, i.e. one very specific speaker is recognized from among several possible speakers. It can be carried out, for example, on the basis of a spectral characteristic of the speaker to be recognized, in this case the wearer of the hearing device 10.
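As an illustration of such a template-based check, here is a minimal sketch, assuming the spectral characteristic is available as a feature vector and using cosine similarity against a stored template of the wearer's voice (both the similarity measure and the threshold are my assumptions):

```python
import numpy as np

def verify_wearer(spectrum: np.ndarray, wearer_template: np.ndarray,
                  threshold: float = 0.9) -> bool:
    """'One out of N' style check: the current spectral characteristic is
    compared with a previously stored template of the wearer's voice."""
    similarity = float(np.dot(spectrum, wearer_template) /
                       (np.linalg.norm(spectrum) * np.linalg.norm(wearer_template)))
    return similarity >= threshold
```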
  • A speaker-independent voice activity detection may be performed by the analysis device 36 on the basis of a single microphone channel.
  • By the analysis device 38, binaural information can be obtained from a plurality of microphone channels, as can be acquired, in contrast to a microphone array, with microphones that are spaced further apart.
  • The output signals of the individual analysis devices 32 to 38 can represent the extracted information in different ways, depending on the type of analysis.
  • Convenient forms are the output of features in the form of discrete real numbers, the output of probabilities (i.e. real numbers between zero and one), or even the output of concrete decisions about the speaker activity (possibly binary outputs of zero or one).
  • The probabilities may, for example, be likelihood values.
  • In FIG. 2, each of these output forms is illustrated by corresponding references to features X, probabilities P and decisions D.
  • By means of the fusion device 48, an evaluation of the voice activity data 40 to 46 is carried out, which is ultimately decisive for the control of the signal processing 30.
  • The fusion device 48 may be a program or a program portion of a digital signal processor.
  • The type of "fusion" of the voice activity data 40 to 46 also depends to a great extent on the analysis devices 32 to 38 used and on the form of the voice activity data 40 to 46 (features, probabilities or individual decisions).
  • The voice activity data can, for example, be processed in parallel or serially, or even in a hybrid approach.
  • For this purpose, the voice activity data 40 to 46 can be subjected to an input-side weighting by the fusion device 48.
  • Suitable weights can be determined by means of a training process on the basis of training data, which can be radiated onto the hearing device 10 as ambient sound 12, for example by means of a loudspeaker.
  • The weights can then be determined, for example, in the form of a covariance matrix which describes a relationship between the voice activity data 40 to 46 on the one hand and the true decision to be made (wearer speaks or does not speak) on the other.
  • The voice activity data 40 to 46 are expediently transmitted to the fusion device 48 in the form of a vector in which the numerical values of the analysis results, for example the probabilities, are combined.
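A minimal sketch of such a training step, under the assumption that a least-squares fit stands in for the covariance-based relationship mentioned above (the patent does not spell out the estimator; names and data layout are illustrative):

```python
import numpy as np

def train_fusion_weights(activity_matrix: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """activity_matrix: one row per training frame, one column per analysis
    device (the voice activity vectors); labels: true decision per frame
    (1.0 = wearer speaks, 0.0 = does not). Solves w = argmin ||Xw - y||^2,
    whose normal equations X^T X w = X^T y involve the (co)variance
    structure of the analyzer outputs."""
    weights, *_ = np.linalg.lstsq(activity_matrix, labels, rcond=None)
    return weights
```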
  • Another possible evaluation method of the fusion device 48 is a majority decision, which can be performed, for example, on the basis of the individual decisions D1, D2, D3, D4 of the analysis devices 32 to 38. The result is then an overall decision D.
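A minimal sketch of this majority decision (the function name and the tie handling are my choices):

```python
from typing import Sequence

def majority_decision(decisions: Sequence[bool]) -> bool:
    """Overall decision D from the individual decisions D1..Dn: own-voice
    activity is reported if more than half of the analyzers indicate it."""
    return sum(decisions) > len(decisions) / 2
```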
  • The analysis devices 32 to 38 can also provide likelihood values P1, P2, P3, P4 as voice activity data 40 to 46.
  • These likelihood values may then be summarized into a total probability P, for example by calculating the average of the likelihood values P1 to P4.
  • The total probability P can then be compared, for example, with a threshold value in order to obtain the final overall decision D.
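A minimal sketch of this averaging rule (the threshold value of 0.5 is an illustrative assumption):

```python
from typing import Sequence, Tuple

def fuse_likelihoods(p_values: Sequence[float], threshold: float = 0.5) -> Tuple[float, bool]:
    """Combine likelihood values P1..Pn into a total probability P by
    averaging, then compare P with a threshold for the final decision D."""
    p_total = sum(p_values) / len(p_values)
    return p_total, p_total >= threshold
```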
  • If the wearer's own voice is detected, the signal processing 30 can set, for example, a frequency response of the signal path formed by the microphone device 18, the processing device 20, the signal processing 30 and the receiver 22. For example, to avoid an occlusion effect, low frequencies of the audio signal can be attenuated. In the same way, it can be provided that a directional microphone is not adapted while the wearer's voice is active, since it makes no sense to pivot the main lobe of a beamformer away from an external source when the wearer of the hearing device 10 is speaking.
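To illustrate the low-frequency attenuation against the occlusion effect, here is a minimal sketch (the cutoff frequency, gain and first-order filter are illustrative assumptions; a real hearing device would use its own filter bank):

```python
import numpy as np

def attenuate_low_frequencies(signal, fs: float,
                              cutoff_hz: float = 300.0, low_gain: float = 0.25) -> np.ndarray:
    """Isolate the band below cutoff_hz with a one-pole low-pass and
    re-add it with reduced gain, leaving higher frequencies untouched."""
    x = np.asarray(signal, dtype=float)
    alpha = 1.0 / (1.0 + fs / (2.0 * np.pi * cutoff_hz))  # one-pole coefficient
    low = np.empty_like(x)
    state = 0.0
    for i, sample in enumerate(x):
        state += alpha * (sample - state)  # y[n] = y[n-1] + a*(x[n] - y[n-1])
        low[i] = state
    return x - low + low_gain * low  # attenuate only the low-frequency component
```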

Claims (13)

  1. Hearing device, comprising
    - at least two analysis devices (32 to 38), each of which is designed to obtain voice activity data (40 to 46), which depend on a speech activity of a wearer of the hearing device (10), on the basis of an audio signal (12) received by the hearing device (10), and
    - a fusion device (48) which is designed to receive the voice activity data (40 to 46) from the analysis devices (32 to 38) and to detect, on the basis of the voice activity data (40 to 46), whether the wearer is speaking or not,
    characterized in that
    - at least one of the analysis devices (32 to 38) is designed to determine values (P1 to P4) for a non-strict decision or a probability that the wearer is speaking, the values (P1 to P4) being generated as a function of the audio signal, and
    - the fusion device (48) is designed to weight the voice activity data (40 to 46) of at least two analysis devices (32 to 38) on the input side according to the analysis device (32 to 38) from which they originate, and to combine the voice activity data (40 to 46) with one another, suitable weights being determined by means of a training process on the basis of training data.
  2. Hearing device (10) according to claim 1, characterized by a microphone device (18) comprising at least one microphone (24 to 28) and designed to convert an ambient sound (12) incident on the wearer into a useful signal, wherein the analysis devices (32 to 38) are designed to process the useful signal as the audio signal.
  3. Hearing device (10) according to claim 1 or 2, characterized by an adaptation device (30) which is designed to change an operating mode of the hearing device (10), in particular a transfer characteristic of the hearing device (10) and/or a directivity of an adaptive beamforming device of the hearing device (10), in the event that the fusion device (48) detects that the wearer is speaking.
  4. Method for operating a hearing device (10), in which voice activity data (40 to 46), which depend on a speech activity of a wearer of the hearing device (10), are obtained from an audio signal by means of at least two analysis devices (32 to 38) independently of one another, and the voice activity data (40 to 46) are combined by means of a fusion device (48), and it is checked on the basis of the combined voice activity data (40 to 46) whether the wearer is speaking or not,
    characterized in that
    - values (P1 to P4) for a non-strict decision or a probability that the wearer is speaking are obtained by at least one of the analysis devices (32 to 38), the values (P1 to P4) being generated as a function of the audio signal, and
    - the voice activity data (40 to 46) of at least two analysis devices (32 to 38) are weighted by the fusion device (48), by means of an input-side weighting, according to the analysis device (32 to 38) from which they originate, and the weighted voice activity data (40 to 46) are combined with one another, suitable weighting factors being determined by means of a training process on the basis of training data.
  5. Method according to claim 4, characterized in that a feature extraction is performed by at least one of the analysis devices (32 to 38) and in that feature values (X1 to X4), in particular a direction of incidence (50) of an ambient sound (12), a gender of a speaker, a reverberation of the audio signal or spectral characteristics such as spectral or cepstral coefficients, are determined for this purpose as a function of the audio signal.
  6. Method according to claim 4 or 5, characterized in that a classification is performed by at least one of the analysis devices (32 to 38) and in that an individual decision (D1 to D4) as to whether the wearer is speaking or not is already generated for this purpose by the analysis device (32 to 38) as a function of the audio signal on the basis of a classification criterion.
  7. Method according to one of claims 4 to 6, characterized in that the voice activity data (40) are generated by at least one of the analysis devices (32) as a function of a direction of incidence (50) of an ambient sound (12).
  8. Method according to one of claims 4 to 7, characterized in that the voice activity data (42) are generated by at least one of the analysis devices (34) as a function of spectral values of a frequency spectrum of the audio signal.
  9. Method according to one of claims 4 to 8, characterized in that a speaker-independent voice activity detection is performed by at least one of the analysis devices (36).
  10. Method according to one of claims 4 to 9, characterized in that the voice activity data (46) are generated by at least one of the analysis devices (38) as a function of binaural information which is created from audio data obtained on different sides of the head of a wearer.
  11. Method according to one of claims 4 to 10, characterized in that a majority decision is made by the fusion device (48) on the basis of individual decisions (40 to 46) of at least two analysis devices in order to determine whether a speech activity is being indicated simultaneously by these analysis devices (32 to 38).
  12. Method according to one of claims 4 to 11, characterized in that an average value is calculated by the fusion device (48) from non-strict decisions of voice activity detectors of at least two analysis devices (40 to 46).
  13. Method according to one of claims 4 to 12, characterized in that a frequency response of the hearing device (10) is adapted by an adaptation device (30) when the voice activity of the wearer is identified by the fusion device (48), and in that, in particular, a low-frequency component of a useful signal is attenuated and/or the adaptation of a directional characteristic of a directional microphone device of the hearing device (10) is interrupted or stopped for this purpose.
EP12191191.1A 2011-12-08 2012-11-05 Hearing device with speaker activity detection and method for operating a hearing device Not-in-force EP2603018B1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DE102011087984A DE102011087984A1 (de) 2011-12-08 2011-12-08 Hearing device with speaker activity detection and method for operating a hearing device

Publications (2)

Publication Number Publication Date
EP2603018A1 EP2603018A1 (fr) 2013-06-12
EP2603018B1 true EP2603018B1 (fr) 2016-02-03

Family

ID=47221957

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12191191.1A Not-in-force EP2603018B1 (fr) Hearing device with speaker activity detection and method for operating a hearing device

Country Status (4)

Country Link
US (1) US8873779B2 (fr)
EP (1) EP2603018B1 (fr)
DE (1) DE102011087984A1 (fr)
DK (1) DK2603018T3 (fr)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2908549A1 2014-02-13 2015-08-19 Oticon A/s Hearing aid device comprising a sensor element
EP3461148B1 2014-08-20 2023-03-22 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
DK2991379T3 2014-08-28 2017-08-28 Sivantos Pte Ltd Method and apparatus for improved perception of one's own voice
DK3222057T3 2014-11-19 2019-08-05 Sivantos Pte Ltd Method and device for rapid recognition of one's own voice
CN105976829B (zh) * 2015-03-10 2021-08-20 Panasonic Intellectual Property Management Co., Ltd. Sound processing device and sound processing method
DE102015210652B4 (de) * 2015-06-10 2019-08-08 Sivantos Pte. Ltd. Method for improving a recorded signal in a hearing system
US9978397B2 2015-12-22 2018-05-22 Intel Corporation Wearer voice activity detection
DE102016203987A1 2016-03-10 2017-09-14 Sivantos Pte. Ltd. Method for operating a hearing aid, and hearing aid
DK3396978T3 2017-04-26 2020-06-08 Sivantos Pte Ltd Method for operating a hearing device, and hearing device
US11477587B2 2018-01-16 2022-10-18 Cochlear Limited Individualized own voice detection in a hearing prosthesis
DE102018202155A1 2018-02-13 2019-03-07 Sivantos Pte. Ltd. Speech assistance device and method for operating a speech assistance device
EP3641344B1 2018-10-16 2023-12-06 Sivantos Pte. Ltd. Method for operating a hearing instrument and hearing system comprising a hearing instrument
EP3641345B1 2018-10-16 2024-03-20 Sivantos Pte. Ltd. Method for operating a hearing instrument and hearing system comprising a hearing instrument
US10795638B2 2018-10-19 2020-10-06 Bose Corporation Conversation assistance audio device personalization
US11089402B2 * 2018-10-19 2021-08-10 Bose Corporation Conversation assistance audio device control
EP3672281B1 * 2018-12-20 2023-06-21 GN Hearing A/S Hearing device with own-voice detection and related method
EP4184949A1 2019-04-17 2023-05-24 Oticon A/s Hearing device comprising a transmitter
EP3823306B1 2019-11-15 2022-08-24 Sivantos Pte. Ltd. Hearing system comprising a hearing instrument and method for operating the hearing instrument
DE102020201615B3 2020-02-10 2021-08-12 Sivantos Pte. Ltd. Hearing system with at least one hearing instrument worn in or on the user's ear, and method for operating such a hearing system
DE102020202483A1 2020-02-26 2021-08-26 Sivantos Pte. Ltd. Hearing system with at least one hearing instrument worn in or on the user's ear, and method for operating such a hearing system
CN112863269B (zh) * 2021-01-20 2022-05-20 Qingdao Huanghai University English speaking and listening training device and training method
EP4138416A1 2021-08-16 2023-02-22 Sivantos Pte. Ltd. Hearing system comprising a hearing instrument and method for operating the hearing instrument
EP4184948A1 2021-11-17 2023-05-24 Sivantos Pte. Ltd. Hearing system comprising a hearing instrument and method for operating the hearing instrument

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6700985B1 (en) * 1998-06-30 2004-03-02 Gn Resound North America Corporation Ear level noise rejection voice pickup method and apparatus
DE10137685C1 * 2001-08-01 2002-12-19 Tuerk & Tuerk Electronic Gmbh Method for detecting the presence of speech signals
ATE298968T1 2001-10-05 2005-07-15 Oticon As Method for programming a communication device and programmable communication device
US7512245B2 * 2003-02-25 2009-03-31 Oticon A/S Method for detection of own voice activity in a communication device
ATE324763T1 * 2003-08-21 2006-05-15 Bernafon Ag Method for processing audio signals
JP4446338B2 * 2004-03-22 2010-04-07 Sony Ericsson Mobile Communications Japan, Inc. Retransmission request method, radio communication system, and receiver
DE102005032274B4 2005-07-11 2007-05-10 Siemens Audiologische Technik Gmbh Hearing device and corresponding method for own-voice detection
US8611560B2 (en) * 2007-04-13 2013-12-17 Navisense Method and device for voice operated control
US8571242B2 (en) * 2008-05-30 2013-10-29 Phonak Ag Method for adapting sound in a hearing aid device by frequency modification and such a device
EP2306457B1 (fr) * 2009-08-24 2016-10-12 Oticon A/S Reconnaissance sonore automatique basée sur des unités de fréquence temporelle binaire
JP2011065093A (ja) * 2009-09-18 2011-03-31 Toshiba Corp Audio signal correction apparatus and audio signal correction method
DK2381700T3 * 2010-04-20 2015-06-01 Oticon As Removal of reverberation from a signal using ambient information
US8462969B2 (en) * 2010-04-22 2013-06-11 Siemens Audiologische Technik Gmbh Systems and methods for own voice recognition with adaptations for noise robustness

Also Published As

Publication number Publication date
DE102011087984A1 (de) 2013-06-13
US8873779B2 (en) 2014-10-28
DK2603018T3 (da) 2016-05-17
US20130148829A1 (en) 2013-06-13
EP2603018A1 (fr) 2013-06-12

Similar Documents

Publication Publication Date Title
EP2603018B1 Hearing device with speaker activity detection and method for operating a hearing device
EP3451705B1 Method and device for rapid own-voice recognition
EP2405673B1 Method for localizing an audio source, and multichannel hearing system
EP2833651A1 Method for tracking a sound source
DE10327890A1 Method for operating a hearing aid, and hearing aid with a microphone system in which different directional characteristics can be set
EP3873108A1 Hearing system with at least one hearing instrument worn in or on the user's ear, and method for operating such a hearing system
EP2182741B1 Hearing device with a special situation recognition unit and method for operating a hearing device
DE102008046040B4 Method for operating a hearing device with directivity, and associated hearing device
EP1962554A2 Hearing apparatus with error signal separation and corresponding method
EP2658289B1 Method for controlling a directional characteristic, and hearing system
EP2434781A1 Method for reconstructing a speech signal, and hearing device
EP1926087A1 Adaptation of a hearing device to a speech signal
EP2219389B1 Device and method for estimating interference noise at the input of a binaural hearing aid
EP2793488B1 Binaural microphone adaptation by means of one's own voice
EP2982136B1 Method for estimating a useful signal, and hearing device
EP3048813B1 Method and device for noise suppression based on subband cross-correlation
DE102021210098A1 Method for operating a hearing aid
EP2885926A1 Hearing system and transmission method

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

17P Request for examination filed

Effective date: 20131211

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

17Q First examination report despatched

Effective date: 20140115

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20150703

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SIVANTOS PTE. LTD.

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 774150

Country of ref document: AT

Kind code of ref document: T

Effective date: 20160215

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 502012005851

Country of ref document: DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: E. BLUM AND CO. AG PATENT- UND MARKENANWAELTE, CH

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20160509

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

Ref country code: NL

Ref legal event code: MP

Effective date: 20160203

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160503

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160504

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160603

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160603

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 502012005851

Country of ref document: DE

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20161104

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20161130

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160503

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20161130

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20161105

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20161130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20121105

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160203

REG Reference to a national code

Ref country code: AT

Ref legal event code: MM01

Ref document number: 774150

Country of ref document: AT

Kind code of ref document: T

Effective date: 20171105

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171105

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20181122

Year of fee payment: 7

Ref country code: DK

Payment date: 20181126

Year of fee payment: 7

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20181126

Year of fee payment: 7

Ref country code: CH

Payment date: 20181126

Year of fee payment: 7

Ref country code: FR

Payment date: 20181123

Year of fee payment: 7

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 502012005851

Country of ref document: DE

REG Reference to a national code

Ref country code: DK

Ref legal event code: EBP

Effective date: 20191130

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191130

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191130

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20191105

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191130

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200603

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191105

Ref country code: DK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191130