EP2988531B1 - Hearing assistance system with own voice detection (Hörhilfesystem mit Erkennung der eigenen Stimme) - Google Patents


Info

Publication number
EP2988531B1
Authority
EP
European Patent Office
Prior art keywords
voice
wearer
hearing assistance
microphone
assistance device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Revoked
Application number
EP15181620.4A
Other languages
English (en)
French (fr)
Other versions
EP2988531A1 (de)
Inventor
Ivo Merks
Current Assignee
Starkey Laboratories Inc
Original Assignee
Starkey Laboratories Inc
Priority date
Filing date
Publication date
Family has litigation: first worldwide family litigation filed ("Global patent litigation dataset" by Darts-ip, licensed under CC BY 4.0).
Priority claimed from US14/464,149 (US9219964B2)
Application filed by Starkey Laboratories Inc
Priority to EP18195310.0A (EP3461148B1)
Publication of EP2988531A1
Application granted
Publication of EP2988531B1
Revoked
Anticipated expiration

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55 Deaf-aid sets using an external connection, either wireless or wired
    • H04R25/552 Binaural
    • H04R25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R25/407 Circuits for combining signals of a plurality of transducers
    • H04R25/405 Arrangements for obtaining a desired directivity characteristic by combining a plurality of transducers

Definitions

  • This application relates to hearing assistance systems, and more particularly, to hearing assistance systems with own voice detection.
  • Hearing assistance devices are electronic devices that amplify sounds above the audibility threshold of a hearing-impaired user. Undesired sounds such as noise, feedback, and the user's own voice may also be amplified, which can result in decreased sound quality and benefit for the user. First, it is undesirable for the user to hear his or her own voice amplified. Second, if the user is using an ear mold with little or no venting, he or she will experience an occlusion effect where his or her own voice sounds hollow ("talking in a barrel"). Third, if the hearing aid has a noise reduction/environment classification algorithm, the user's own voice can be wrongly detected as desired speech.
  • a digital sound processing system 308 processes the acoustic signals received by the first and second microphones, and provides a signal to the receiver 306 to produce an audible signal to the wearer of the device 305.
  • the illustrated digital sound processing system 308 includes an interface 307, a sound processor 308, and a voice detector 309.
  • the illustrated interface 307 converts the analog signals from the first and second microphones into digital signals for processing by the sound processor 308 and the voice detector 309.
  • the interface may include analog-to-digital converters, and appropriate registers to hold the digital signals for processing by the sound processor and voice detector.
  • the voice detector 309 receives signals representative of sound received by the first microphone and sound received by the second microphone.
  • the voice detector 309 detects the user's own voice, and provides an indication 311 to the sound processor 308 regarding whether the user's own voice is detected. Once the user's own voice is detected any number of possible other actions can take place.
  • the sound processor 308 can perform one or more of the following, including but not limited to reduction of the amplification of the user's voice, control of an anti-occlusion process, and/or control of an environment classification process. Those skilled in the art will understand that other processes may take place without departing from the scope of the present subject matter.
  • the illustrated power analyzer 413 compares the power of the error signal 420 to the power of the signal representative of sound received from the first microphone. According to various embodiments, a voice will not be detected unless the power of the signal representative of sound received from the first microphone is much greater than the power of the error signal. For example, the power analyzer 413 compares the difference to a threshold, and will not detect voice if the difference is less than the threshold.
  • the illustrated coefficient analyzer 414 analyzes the filter coefficients from the adaptive filter process 415. According to various embodiments, a voice will not be detected unless a peak value for the coefficients is significantly high. For example, some embodiments will not detect voice unless the largest normalized coefficient is greater than a predetermined value (e.g. 0.5).
  • FIGS. 5-7 illustrate various processes for detecting voice that can be used in various embodiments of the present subject matter.
  • the power of the error signal from the adaptive filter is compared to the power of a signal representative of sound received by the first microphone.
  • the threshold is selected to be sufficiently high to ensure that the power of the first microphone is much greater than the power of the error signal.
  • voice is detected at 523 if the power of the first microphone is greater than the power of the error signal by a predetermined threshold, and voice is not detected at 524 if the power of the first microphone is not greater than the power of the error signal by the predetermined threshold.
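The power-based check above can be sketched in Python. This is a minimal sketch under stated assumptions: an NLMS adaptive filter that predicts the second microphone signal from the first, a 16-tap filter length, and a 10 dB threshold; none of these values are specified by the patent.

```python
import numpy as np

def nlms_error(x, d, n_taps=16, mu=0.1, eps=1e-8):
    """Run an NLMS adaptive filter that predicts d (second microphone)
    from x (first microphone); return the error signal and coefficients."""
    w = np.zeros(n_taps)
    err = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]           # most recent samples first
        y = w @ u                           # filter prediction
        e = d[n] - y                        # prediction error
        w += mu * e * u / (u @ u + eps)     # normalized LMS update
        err[n] = e
    return err, w

def detect_voice_by_power(mic1, err, threshold_db=10.0):
    """Declare own voice only if the first-microphone power exceeds the
    adaptive-filter error power by at least threshold_db."""
    p_mic = 10 * np.log10(np.mean(np.asarray(mic1) ** 2) + 1e-12)
    p_err = 10 * np.log10(np.mean(np.asarray(err) ** 2) + 1e-12)
    return (p_mic - p_err) >= threshold_db
```

When the two microphone signals are acoustically related (as for the wearer's own voice), the filter converges, the error power drops, and the check passes; for unrelated signals the error power stays near the input power and no voice is detected.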
  • coefficients of the adaptive filter are analyzed.
  • voice is detected at 623 if the largest normalized coefficient is greater than a predetermined value, and voice is not detected at 624 if the largest normalized coefficient is not greater than a predetermined value.
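A minimal sketch of the coefficient-based check of FIG. 6, assuming the coefficients are normalized by the Euclidean norm of the coefficient vector (the patent does not specify the normalization):

```python
import numpy as np

def detect_voice_by_coefficients(w, peak_threshold=0.5):
    """Declare own voice only when the largest normalized adaptive-filter
    coefficient exceeds a predetermined value (e.g. 0.5)."""
    norm = np.linalg.norm(w)
    if norm == 0:
        return False
    largest = np.max(np.abs(w)) / norm
    return largest > peak_threshold
```

A converged filter modeling a direct acoustic path has one dominant tap, so its normalized peak is near 1; a filter adapting to diffuse noise has no dominant tap and fails the check.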
  • the power of the error signal from the adaptive filter is compared to the power of a signal representative of sound received by the first microphone.
  • voice is not detected at 724 if the power of the first microphone is not greater than the power of the error signal by a predetermined threshold. If the power of the error signal is too large, then the adaptive filter has not converged.
  • the coefficients are not analyzed until the adaptive filter converges.
  • coefficients of the adaptive filter are analyzed if the power of the first microphone is greater than the power of the error signal by a predetermined threshold.
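The two-stage process of FIG. 7 (first gate on filter convergence via the power test, then examine the coefficient peak) can be sketched as follows; the 10 dB and 0.5 values are illustrative assumptions.

```python
import numpy as np

def detect_voice_combined(mic1, err, w, threshold_db=10.0, peak_threshold=0.5):
    """Two-stage own-voice check: the coefficients are analyzed only after
    the power test confirms the adaptive filter has converged."""
    p_mic = 10 * np.log10(np.mean(np.asarray(mic1) ** 2) + 1e-12)
    p_err = 10 * np.log10(np.mean(np.asarray(err) ** 2) + 1e-12)
    if (p_mic - p_err) < threshold_db:
        return False                    # filter not converged: no detection
    largest = np.max(np.abs(w)) / (np.linalg.norm(w) + 1e-12)
    return largest > peak_threshold
```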
  • FIG. 8 illustrates one embodiment of the present subject matter with an "own voice detector" to control active noise canceller for occlusion reduction.
  • the active noise canceller filters microphone M2 with filter h and sends the filtered signal to the receiver.
  • the microphone M2 and the error microphone M3 (in the ear canal) are used to calculate the filter update for filter h.
  • the own voice detector, which uses microphones M1 and M2, is used to steer the step size in the filter update.
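The step-size steering described for FIG. 8 can be sketched as a single filter-update step. The NLMS-style update and the step-size values are illustrative assumptions; the patent only states that the own-voice detector steers the step size.

```python
import numpy as np

def anc_filter_update(h, m2_buf, m3_err, own_voice,
                      mu_voice=0.05, mu_idle=0.0, eps=1e-8):
    """One normalized-LMS-style update of the occlusion-cancelling filter h.
    m2_buf holds recent samples of microphone M2 and m3_err is the residual
    picked up by the error microphone M3 in the ear canal. The own-voice
    flag steers the step size: h adapts while the wearer speaks and is
    frozen (step size 0) otherwise."""
    mu = mu_voice if own_voice else mu_idle
    return h + mu * m3_err * m2_buf / (m2_buf @ m2_buf + eps)
```

Freezing the update when no own voice is present keeps the canceller from adapting to external sounds that should not be treated as occlusion.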
  • FIG. 9 illustrates one embodiment of the present subject matter offering a multichannel expansion, compression and output control limiting algorithm (MECO), which uses the signal of microphone M2 to calculate the desired gain, applies that gain to the microphone signal M2, and then sends the amplified signal to the receiver. Additionally, the gain calculation can take into account the outcome of the own voice detector (which uses M1 and M2). If the wearer's own voice is detected, the gain in the lower channels (typically below 1 kHz) will be lowered to avoid occlusion. Note: the MECO algorithm can use microphone signal M1 or M2 or a combination of both.
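The own-voice gain adjustment described for the MECO algorithm can be sketched as follows, assuming a per-channel gain table indexed by channel center frequency; the 6 dB reduction and the channel layout are illustrative assumptions, not values from the patent.

```python
import numpy as np

def apply_own_voice_gain_reduction(channel_gains_db, channel_freqs_hz,
                                   own_voice, cutoff_hz=1000.0,
                                   reduction_db=6.0):
    """When the wearer's own voice is detected, lower the gains of the
    channels below roughly 1 kHz to reduce the occlusion effect; other
    channels and the no-voice case are left unchanged."""
    gains = np.asarray(channel_gains_db, dtype=float).copy()
    if own_voice:
        low = np.asarray(channel_freqs_hz) < cutoff_hz
        gains[low] -= reduction_db
    return gains
```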
  • FIG. 10 illustrates one embodiment of the present subject matter which uses an "own voice detector" in an environment classification scheme. From the microphone signal M2, several features are calculated. These features together with the result of the own voice detector, which uses M1 and M2, are used in a classifier to determine the acoustic environment. This acoustic environment classification is used to set the gain in the hearing aid.
  • the hearing aid may use M2 or M1 or M1 and M2 for the feature calculation.
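The classification scheme of FIG. 10 can be sketched as a toy rule-based classifier. The feature names, thresholds, and class labels are illustrative assumptions; the patent only specifies that the own-voice result is used together with acoustic features so the wearer's speech is not mistaken for external speech.

```python
def classify_environment(features, own_voice):
    """Toy decision-rule classifier: the own-voice flag overrides the
    acoustic-feature decision; otherwise simple (hypothetical) feature
    thresholds pick an environment class used to set the gain."""
    if own_voice:
        return "own voice"
    if features.get("speech_likelihood", 0.0) > 0.5:
        return "speech"
    if features.get("level_db", 0.0) > 70.0:
        return "noise"
    return "quiet"
```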
  • FIG. 11 illustrates a pair of hearing assistance devices according to one embodiment of the present subject matter.
  • the pair of hearing assistance devices includes a left hearing assistance device 1105L and a right hearing assistance device 1105R, such as a left hearing aid and a right hearing aid.
  • the left hearing assistance device 1105L is configured to be worn in or about the left ear of a wearer for delivering sound to the left ear canal of the wearer.
  • the right hearing assistance device 1105R is configured to be worn in or about the right ear of the wearer for delivering sound to the right ear canal of the wearer.
  • the left and right hearing assistance devices 1105L and 1105R each represent an embodiment of the device 305 as discussed above, with the capability of performing wireless communication with each other, and the voice detection capability of both devices is used to determine whether the voice of the wearer is present.
  • the illustrated left hearing assistance device 1105L includes a first microphone MIC 1L, a second microphone MIC 2L, an interface 1107L, a sound processor 1108L, a receiver 1106L, a voice detector 1109L, and a communication circuit 1130L.
  • the first microphone MIC 1L produces a first left microphone signal.
  • the second microphone MIC 2L produces a second left microphone signal.
  • the first microphone MIC 1L is positioned about the left ear of the wearer.
  • the second microphone MIC 2L is positioned about the left ear canal of the wearer, at a different location than the first microphone MIC 1L, on an air side of the left ear canal to detect signals outside the left ear canal.
  • Interface 1107L converts the analog versions of the first and second left microphone signals into digital signals for processing by the sound processor 1108L and the voice detector 1109L.
  • the interface 1107L may include analog-to-digital converters, and appropriate registers to hold the digital signals for processing by the sound processor 1108L and the voice detector 1109L.
  • the left voice detector 1109L detects a voice of the wearer using the first left microphone signal and the second left microphone signal. In one embodiment, in response to the voice of the wearer being detected based on the first left microphone signal and the second left microphone signal, the left voice detector 1109L produces a left detection signal indicative of detection of the voice of the wearer. In one embodiment, the left voice detector 1109L includes a left adaptive filter configured to output left information and identifies the voice of the wearer from the output left information. In various embodiments, the output left information includes coefficients of the left adaptive filter and/or a left error signal. In various embodiments, the left voice detector 1109L includes the voice detector 309 or the voice detector 409 as discussed above.
  • the left communication circuit 1130L receives information from, and transmits information to, the right hearing assistance device 1105R via a wireless communication link 1132.
  • the information transmitted via wireless communication link 1132 includes information associated with the detection of the voice of the wearer as performed by each of the left and right hearing assistance devices 1105L and 1105R.
  • the illustrated right hearing assistance device 1105R includes a first microphone MIC 1R, a second microphone MIC 2R, an interface 1107R, a sound processor 1108R, a receiver 1106R, a voice detector 1109R, and a communication circuit 1130R.
  • the first microphone MIC 1R produces a first right microphone signal.
  • the second microphone MIC 2R produces a second right microphone signal.
  • the first microphone MIC 1R is positioned about the right ear of the wearer.
  • the second microphone MIC 2R is positioned about the right ear canal of the wearer, at a different location than the first microphone MIC 1R, on an air side of the right ear canal to detect signals outside the right ear canal.
  • Interface 1107R converts the analog versions of the first and second right microphone signals into digital signals for processing by the sound processor 1108R and the voice detector 1109R.
  • the interface 1107R may include analog-to-digital converters, and appropriate registers to hold the digital signals for processing by the sound processor 1108R and the voice detector 1109R.
  • the sound processor 1108R produces a processed right sound signal 1110R.
  • the right receiver 1106R produces a right audible signal based on the processed right sound signal 1110R and transmits the right audible signal to the right ear canal of the wearer.
  • the sound processor 1108R produces the processed right sound signal 1110R based on the first right microphone signal.
  • the sound processor 1108R produces the processed right sound signal 1110R based on the first right microphone signal and the second right microphone signal.
  • the right voice detector 1109R detects the voice of the wearer using the first right microphone signal and the second right microphone signal. In one embodiment, in response to the voice of the wearer being detected based on the first right microphone signal and the second right microphone signal, the right voice detector 1109R produces a right detection signal indicative of detection of the voice of the wearer. In one embodiment, the right voice detector 1109R includes a right adaptive filter configured to output right information and identifies the voice of the wearer from the output right information. In various embodiments, the output right information includes coefficients of the right adaptive filter and/or a right error signal. In various embodiments, the right voice detector 1109R includes the voice detector 309 or the voice detector 409 as discussed above.
  • the right communication circuit 1130R receives information from, and transmits information to, the left hearing assistance device 1105L via the wireless communication link 1132.
  • At least one of the left voice detector 1109L and the right voice detector 1109R is configured to detect the voice of the wearer using the first left microphone signal, the second left microphone signal, the first right microphone signal, and the second right microphone signal.
  • signals produced by all of the microphones MIC 1L, MIC 2L, MIC 1R, and MIC 2R are used for determining whether the voice of the wearer is present.
  • the left voice detector 1109L and/or the right voice detector 1109R declares a detection of the voice of the wearer in response to at least one of the left detection signal and the right detection signal being present.
  • the left voice detector 1109L and/or the right voice detector 1109R declares a detection of the voice of the wearer in response to the left detection signal and the right detection signal both being present. In one embodiment, the left voice detector 1109L and/or the right voice detector 1109R determines whether to declare a detection of the voice of the wearer using the output left information and output right information.
  • the output left information and output right information are each indicative of one or more detection strength parameters, each being a measure of the likelihood that the voice of the wearer is actually present. Examples of the one or more detection strength parameters include the difference between the power of the first microphone signal and the power of the error signal, and the largest normalized coefficient of the adaptive filter.
  • the left voice detector 1109L and/or the right voice detector 1109R determines whether to declare a detection of the voice of the wearer using a weighted combination of the output left information and the output right information.
  • the weighted combination of the output left information and the output right information can include a weighted sum of the detection strength parameters.
  • the one or more detection strength parameters produced by each of the left and right voice detectors can be multiplied by one or more corresponding weighting factors before being added to produce the weighted sum.
  • the weighting factors may be determined using a priori information such as estimates of the background noise and/or position(s) of other sound sources in a room.
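The weighted-sum combination described above can be sketched as follows. The specific weights and the decision threshold are illustrative assumptions; the patent only states that the weighting factors may be determined using a priori information such as background-noise estimates.

```python
def binaural_weighted_detection(left_params, right_params,
                                left_weights, right_weights, threshold):
    """Multiply each left/right detection strength parameter by its
    weighting factor, sum the products, and declare own voice when the
    weighted sum reaches a decision threshold."""
    total = sum(w * p for w, p in zip(left_weights, left_params))
    total += sum(w * p for w, p in zip(right_weights, right_params))
    return total >= threshold
```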
  • the detection of the voice of the wearer is performed using both the left and the right voice detectors such as detectors 1109L and 1109R.
  • whether to declare a detection of the voice of the wearer may be determined by each of the left voice detector 1109L and the right voice detector 1109R, determined by the left voice detector 1109L and communicated to the right voice detector 1109R via wireless link 1132, or determined by the right voice detector 1109R and communicated to the left voice detector 1109L via wireless link 1132.
  • the left voice detector 1109L transmits an indication 1111L to the sound processor 1108L
  • the right voice detector 1109R transmits an indication 1111R to the sound processor 1108R.
  • the sound processors 1108L and 1108R produce the processed sound signals 1110L and 1110R, respectively, using the indication that the voice of the wearer is detected.
  • FIG. 12 illustrates a process for detecting voice using a pair of hearing assistance devices including a left hearing assistance device and a right hearing assistance device, such as the left and right hearing assistance devices 1105L and 1105R.
  • voice of a wearer is detected using the left hearing assistance device.
  • voice of a wearer is detected using the right hearing assistance device.
  • steps 1241 and 1242 are performed concurrently or simultaneously. Examples for each of steps 1241 and 1242 include the processes illustrated in each of FIGS. 5-7 .
  • whether to declare a detection of the voice of the wearer is determined using an outcome of both of the detections at 1241 and 1242.
  • the left and right hearing assistance devices each include first and second microphones. Electrical signals produced by the first and second microphones of the left hearing assistance device are used as inputs to a voice detector of the left hearing assistance device at 1241.
  • the voice detector of the left hearing assistance device includes a left adaptive filter. Electrical signals produced by the first and second microphones of the right hearing assistance device are used as inputs to a voice detector of the right hearing assistance device at 1242.
  • the voice detector of the right hearing assistance device includes a right adaptive filter.
  • the voice of the wearer is detected using information output from the left adaptive filter and information output from the right adaptive filter at 1243. In one embodiment, the voice of the wearer is detected using left coefficients of the left adaptive filter and right coefficients of the right adaptive filter.
  • the voice of the wearer is detected using a left error signal produced by the left adaptive filter and a right error signal produced by the right adaptive filter.
  • the voice of the wearer is detected using a left detection strength parameter of the information output from the left adaptive filter and a right detection strength parameter of the information output from the right adaptive filter.
  • the left and right detection strength parameters are each a measure of the likelihood that the voice of the wearer is actually present. Examples of the left detection strength parameter include the difference between the power of a left error signal produced by the left adaptive filter and the power of the electrical signal produced by the first microphone of the left hearing assistance device, and the largest normalized coefficient of the left adaptive filter.
  • Examples of the right detection strength parameter include the difference between the power of a right error signal produced by the right adaptive filter and the power of the electrical signal produced by the first microphone of the right hearing assistance device, and the largest normalized coefficient of the right adaptive filter.
  • the voice of the wearer is detected using a weighted combination of the information output from the left adaptive filter and the information output from the right adaptive filter.
  • the voice of the wearer is detected using the left hearing assistance device based on the electrical signals produced by the first and second microphones of the left hearing assistance device, and a left detection signal indicative of whether the voice of the wearer is detected by the left hearing assistance device is produced, at 1241.
  • the voice of the wearer is detected using the right hearing assistance device based on the electrical signals produced by the first and second microphones of the right hearing assistance device, and a right detection signal indicative of whether the voice of the wearer is detected by the right hearing assistance device is produced, at 1242. Whether to declare the detection of the voice of the wearer is determined using the left detection signal and the right detection signal at 1243.
  • the detection of the voice of the wearer is declared in response to both of the left detection signal and the right detection signal being present. In another embodiment, the detection of the voice of the wearer is declared in response to at least one of the left detection signal and the right detection signal being present. In one embodiment, whether to declare the detection of the voice of the wearer is determined using the left detection signal, the right detection signal, and weighting factors applied to the left and right detection signals.
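The two declaration rules just described (both detection signals present, or at least one present) reduce to a small dispatch; a minimal sketch:

```python
def declare_detection(left_detected, right_detected, mode="and"):
    """Binaural declaration: "and" requires both devices' detection
    signals to be present; "or" accepts either one."""
    if mode == "and":
        return left_detected and right_detected
    if mode == "or":
        return left_detected or right_detected
    raise ValueError("mode must be 'and' or 'or'")
```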
  • The voice detection approaches discussed in this document can be applied to each device of a pair of hearing assistance devices, with the declaration of the detection of the voice of the wearer being a result of detection using both devices of the pair, as discussed with reference to FIGS. 11 and 12.
  • Such binaural voice detection will likely improve the acoustic perception of the wearer because both hearing assistance devices worn by the wearer are acting similarly when the wearer speaks.
  • whether to declare a detection of the voice of the wearer may be determined based on the detection performed by either one device of the pair of hearing assistance devices or based on the detection performed by both devices of the pair of hearing assistance devices.
  • An example of the pair of hearing assistance devices includes a pair of hearing aids.
  • the present subject matter includes hearing assistance devices, and was demonstrated with respect to BTE, OTE, and RIC type devices, but it is understood that it may also be employed in cochlear implant type hearing devices. It is understood that other hearing assistance devices not expressly stated herein may fall within the scope of the present subject matter.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Circuit For Audible Band Transducer (AREA)

Claims (14)

  1. Vorrichtung, die dafür ausgelegt ist, von einem Träger am Körper getragen zu werden, der ein linkes Ohr mit einem linken Gehörgang und ein rechtes Ohr mit einem rechten Gehörgang aufweist, umfassend:
    ein linkes Hörunterstützungsgerät (1105L), das dafür ausgelegt ist, im oder nahe am linken Ohr getragen zu werden, und ein rechtes Hörunterstützungsgerät (1105R), das dafür ausgelegt ist, im oder nahe am rechten Ohr getragen zu werden, wobei das linke und das rechte Hörunterstützungsgerät dafür ausgelegt sind, über eine drahtlose Kommunikationsverbindung (1132) kommunikationsfähig miteinander gekoppelt zu werden, und jeweils aufweisen:
    ein erstes Mikrofon (MIC 1L, MIC 1R), das dafür ausgelegt ist, ein erstes Mikrofonsignal zu erzeugen;
    ein zweites Mikrofon (MIC 2L, MIC 2R), das dafür ausgelegt ist, ein zweites Mikrofonsignal zu erzeugen; und
    einen Stimmendetektor (1109L, 1109R), der dafür ausgelegt ist, eine Stimme des Trägers unter Verwendung des ersten und des zweiten Mikrofonsignals zu detektieren und ein Detektionssignal zu erzeugen, das die Detektion der Stimme des Trägers angibt, wobei mindestens ein Stimmendetektor von den Stimmendetektoren des linken und des rechten Hörunterstützungsgeräts dafür ausgelegt ist, unter Verwendung des vom linken Hörunterstützungsgerät erzeugten Detektionssignals und des vom rechten Hörunterstützungsgerät erzeugten Detektionssignals zu bestimmen, ob die Stimme des Trägers vorliegt.
  2. Vorrichtung nach Anspruch 1, wobei der Stimmendetektor sowohl des linken als auch des rechten Hörunterstützungsgeräts ein adaptives Filter (415) umfasst, das dafür ausgelegt ist, Informationen auszugeben, und der mindestens eine von den Stimmendetektoren des linken und des rechten Hörunterstützungsgeräts dafür ausgelegt ist, die Stimme des Trägers unter Verwendung der ausgegebenen Informationen von sowohl dem linken als auch dem rechten Hörunterstützungsgerät zu detektieren.
  3. Vorrichtung nach Anspruch 2, wobei der mindestens eine Stimmendetektor dafür ausgelegt ist, die Stimme des Trägers unter Verwendung von Koeffizienten des adaptiven Filters sowohl des linken als auch des rechten Hörunterstützungsgeräts zu detektieren.
  4. Vorrichtung nach einem der Ansprüche 2 und 3, wobei der mindestens eine Stimmendetektor dafür ausgelegt ist, die Stimme des Trägers unter Verwendung eines Fehlersignals zu detektieren, das vom adaptiven Filter sowohl des linken als auch des rechten Hörunterstützungsgeräts erzeugt wird.
  5. Vorrichtung nach einem der Ansprüche 2 bis 4, wobei der mindestens eine Stimmendetektor dafür ausgelegt ist, die Stimme des Trägers unter Verwendung eines Detektionsstärkeparameters der ausgegebenen Informationen sowohl der linken als auch der rechten Hörunterstützung zu detektieren, wobei der Detektionsstärkeparameter ein Maß für die Wahrscheinlichkeit dafür ist, dass tatsächlich die Stimme des Trägers vorliegt.
  6. Vorrichtung nach einem der Ansprüche 2 bis 5, wobei der mindestens eine Stimmendetektor dafür ausgelegt ist, die Stimme des Trägers unter Verwendung einer gewichteten Kombination aus den ausgegebenen Informationen vom linken Hörunterstützungsgerät und den ausgegebenen Informationen vom rechten Hörunterstützungsgerät zu detektieren.
  7. Vorrichtung nach einem der vorangehenden Ansprüche, wobei das linke und das rechte Hörunterstützungsgerät jeweils eine Hörhilfe umfassen, die so ausgelegt ist, dass dann, wenn sie vom Träger am Körper getragen wird, das erste Mikrofon nahe dem linken oder dem rechten Ohr angeordnet ist und das zweite Mikrofon an einer anderen Stelle als das erste Mikrofon nahe dem linken oder dem rechten Gehörgang auf einer Luftseite des linken oder rechten Gehörgangs angeordnet ist, um Signale außerhalb des linken oder des rechten Gehörgangs zu detektieren.
  8. Vorrichtung nach Anspruch 7, wobei die Hörhilfe aufweist:
    einen Klangprozessor, der dafür ausgelegt ist, auf Basis zumindest des ersten Mikrofonsignals und dessen, ob die Stimme des Trägers detektiert wird, ein verarbeitetes Klangsignal zu erzeugen; und
    einen Empfänger, der dafür ausgelegt ist, auf Basis des verarbeiteten Klangsignals ein akustisches Signal zu erzeugen und das akustische Signal an den linken oder den rechten Gehörgang zu senden.
  9. A method for detecting a voice of a wearer of a pair of left and right hearing assistance devices (1105L, 1105R), each including a first microphone (MIC 1L, MIC 1R) and a second microphone (MIC 2L, MIC 2R) and communicatively coupled to each other via a wireless communication link (1132), the wearer having a left ear with a left ear canal and a right ear with a right ear canal, the method comprising:
    producing electrical signals using the first microphone and the second microphone of the left hearing assistance device, placed about the left ear, in response to any detected sound outside the left ear canal, the left hearing assistance device being worn in or about the left ear;
    detecting a voice of the wearer using the left hearing assistance device based on the electrical signals produced by the first and second microphones of the left hearing assistance device;
    producing a left detection signal indicating whether the voice of the wearer is detected by the left hearing assistance device;
    producing electrical signals using the first microphone and the second microphone of the right hearing assistance device, placed about the right ear, in response to any detected sound outside the right ear canal, the right hearing assistance device being worn in or about the right ear;
    detecting the voice of the wearer using the right hearing assistance device based on the electrical signals produced by the first and second microphones of the right hearing assistance device;
    producing a right detection signal indicating whether the voice of the wearer is detected by the right hearing assistance device; and
    determining whether to declare a detection of the voice of the wearer using the left and/or the right hearing assistance device based on the left detection signal and the right detection signal.
  10. The method of claim 9, wherein detecting the voice of the wearer comprises:
    using electrical signals produced by the first and second microphones of the left hearing assistance device as inputs to a voice detector of the left hearing assistance device that includes a left adaptive filter;
    using electrical signals produced by the first and second microphones of the right hearing assistance device as inputs to a voice detector of the right hearing assistance device that includes a right adaptive filter; and
    detecting the voice of the wearer using information output from the left adaptive filter and information output from the right adaptive filter.
  11. The method of claim 10, wherein detecting the voice of the wearer comprises detecting the voice of the wearer using left coefficients of the left adaptive filter (415) and right coefficients of the right adaptive filter (415).
  12. The method of claim 10 or 11, wherein detecting the voice of the wearer comprises detecting the voice of the wearer using a left error signal produced by the left adaptive filter and a right error signal produced by the right adaptive filter.
  13. The method of any of claims 10 to 12, wherein detecting the voice of the wearer comprises detecting the voice of the wearer using a left detection strength parameter of the information output from the left adaptive filter and a right detection strength parameter of the information output from the right adaptive filter, the left and right detection strength parameters each being a measure of the probability that the voice of the wearer is actually present.
  14. The method of any of claims 10 to 13, wherein detecting the voice of the wearer comprises detecting the voice of the wearer using a weighted combination of the information output from the left hearing assistance device and the information output from the right hearing assistance device.
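The binaural detection scheme of claims 9 to 14 can be illustrated with a short sketch. This is an illustrative reconstruction, not the patented implementation: the function names, the coefficient/error energy-ratio heuristic, and the weight and threshold values are assumptions made here for demonstration only.

```python
import numpy as np

def detection_strength(filter_coeffs: np.ndarray, error_signal: np.ndarray) -> float:
    """Illustrative detection strength parameter (claim 13): a value in [0, 1]
    derived from one device's adaptive-filter outputs, i.e. its coefficients
    (claim 11) and its error signal (claim 12). When the wearer speaks, the
    inter-microphone transfer function modeled by the filter is stable and
    the residual error is small, so the ratio of coefficient energy to total
    energy approaches 1."""
    coeff_energy = float(np.sum(filter_coeffs ** 2))
    error_energy = float(np.mean(error_signal ** 2))
    return coeff_energy / (coeff_energy + error_energy + 1e-12)

def combined_detection(left_strength: float, right_strength: float,
                       w_left: float = 0.5, w_right: float = 0.5,
                       threshold: float = 0.6) -> bool:
    """Weighted combination of the left and right outputs (claim 14),
    yielding the binaural decision of the last step of claim 9."""
    return w_left * left_strength + w_right * right_strength >= threshold
```

For example, left and right strengths of 0.9 and 0.8 would declare the wearer's voice, while 0.1 and 0.2 would fall below the threshold. The left and right detection signals of claim 9 correspond to thresholding each strength on its own device before exchanging the result over the wireless link (1132).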
EP15181620.4A 2014-08-20 2015-08-19 Hearing assistance system with own voice detection Revoked EP2988531B1 (de)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP18195310.0A EP3461148B1 (de) 2014-08-20 2015-08-19 Hearing assistance system with own voice detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/464,149 US9219964B2 (en) 2009-04-01 2014-08-20 Hearing assistance system with own voice detection

Related Child Applications (1)

Application Number Title Priority Date Filing Date
EP18195310.0A Division EP3461148B1 (de) 2014-08-20 2015-08-19 Hearing assistance system with own voice detection

Publications (2)

Publication Number Publication Date
EP2988531A1 EP2988531A1 (de) 2016-02-24
EP2988531B1 true EP2988531B1 (de) 2018-09-19

Family

ID=53879441

Family Applications (2)

Application Number Title Priority Date Filing Date
EP15181620.4A 2014-08-20 2015-08-19 Revoked EP2988531B1 (de) Hearing assistance system with own voice detection
EP18195310.0A Active EP3461148B1 (de) 2014-08-20 2015-08-19 Hearing assistance system with own voice detection

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP18195310.0A Active EP3461148B1 (de) 2014-08-20 2015-08-19 Hearing assistance system with own voice detection

Country Status (2)

Country Link
EP (2) EP2988531B1 (de)
DK (1) DK2988531T3 (de)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3672281B1 (de) * 2018-12-20 2023-06-21 GN Hearing A/S Hearing device with own voice detection and related method
CN114449427A (zh) * 2020-11-02 2022-05-06 原相科技股份有限公司 Hearing assistance device and method for adjusting the sound output of a hearing assistance device

Citations (8)

Publication number Priority date Publication date Assignee Title
WO2004077090A1 (en) 2003-02-25 2004-09-10 Oticon A/S Method for detection of own voice activity in a communication device
US20070098192A1 (en) 2002-09-18 2007-05-03 Sipkema Marcus K Spectacle hearing aid
US20100002887A1 (en) 2006-07-12 2010-01-07 Phonak Ag Method for operating a binaural hearing system as well as a binaural hearing system
US20100260364A1 (en) 2009-04-01 2010-10-14 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US20120128187A1 (en) 2010-06-18 2012-05-24 Panasonic Corporation Hearing aid, signal processing method, and program
US20130148829A1 (en) 2011-12-08 2013-06-13 Siemens Medical Instruments Pte. Ltd. Hearing apparatus with speaker activity detection and method for operating a hearing apparatus
US20140029762A1 (en) 2012-07-25 2014-01-30 Nokia Corporation Head-Mounted Sound Capture Device
WO2014075195A1 (en) 2012-11-15 2014-05-22 Phonak Ag Own voice shaping in a hearing instrument

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US7027607B2 (en) * 2000-09-22 2006-04-11 Gn Resound A/S Hearing aid with adaptive microphone matching
DE102010012622B4 (de) 2010-03-24 2015-04-30 Siemens Medical Instruments Pte. Ltd. Binaural method and binaural arrangement for voice control of hearing devices
DK2563045T3 (da) * 2011-08-23 2014-10-27 Oticon As Method and a binaural listening system for maximizing a better ear effect

Patent Citations (10)

Publication number Priority date Publication date Assignee Title
US20070098192A1 (en) 2002-09-18 2007-05-03 Sipkema Marcus K Spectacle hearing aid
WO2004077090A1 (en) 2003-02-25 2004-09-10 Oticon A/S Method for detection of own voice activity in a communication device
US20100002887A1 (en) 2006-07-12 2010-01-07 Phonak Ag Method for operating a binaural hearing system as well as a binaural hearing system
US20100260364A1 (en) 2009-04-01 2010-10-14 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
EP2242289A1 2009-04-01 2010-10-20 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US20140010397A1 (en) 2009-04-01 2014-01-09 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US20120128187A1 (en) 2010-06-18 2012-05-24 Panasonic Corporation Hearing aid, signal processing method, and program
US20130148829A1 (en) 2011-12-08 2013-06-13 Siemens Medical Instruments Pte. Ltd. Hearing apparatus with speaker activity detection and method for operating a hearing apparatus
US20140029762A1 (en) 2012-07-25 2014-01-30 Nokia Corporation Head-Mounted Sound Capture Device
WO2014075195A1 (en) 2012-11-15 2014-05-22 Phonak Ag Own voice shaping in a hearing instrument

Also Published As

Publication number Publication date
EP3461148A2 (de) 2019-03-27
DK2988531T3 (en) 2019-01-14
EP2988531A1 (de) 2016-02-24
EP3461148A3 (de) 2019-04-17
EP3461148B1 (de) 2023-03-22

Similar Documents

Publication Publication Date Title
US11388529B2 (en) Hearing assistance system with own voice detection
US10715931B2 (en) Hearing assistance system with own voice detection
US9749754B2 (en) Hearing aids with adaptive beamformer responsive to off-axis speech
EP3188508B1 (de) Method and apparatus for streaming communication between hearing devices
EP3005731B1 (de) Method for operating a hearing device and hearing device
US9641942B2 (en) Method and apparatus for hearing assistance in multiple-talker settings
EP3799444A1 (de) Hearing device comprising a directional microphone system
US20120195450A1 (en) Method for control of adaptation of feedback suppression in a hearing aid, and a hearing aid
EP2988531B1 (de) Hearing assistance system with own voice detection

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150819

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20161212

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20180307

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1044709

Country of ref document: AT

Kind code of ref document: T

Effective date: 20181015

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602015016517

Country of ref document: DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative=s name: RENTSCH PARTNER AG, CH

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20190106

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country codes: LT, SE, FI, RS (effective 20180919); GR (effective 20181220); NO, BG (effective 20181219)

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country codes: HR, LV, AL (effective 20180919)

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1044709

Country of ref document: AT

Kind code of ref document: T

Effective date: 20180919

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country codes: RO, CZ, ES, PL, IT, AT, EE (effective 20180919); IS (effective 20190119)

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country codes: SM, SK (effective 20180919); PT (effective 20190119)

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

REG Reference to a national code

Ref country code: DE

Ref legal event code: R026

Ref document number: 602015016517

Country of ref document: DE

PLBI Opposition filed

Free format text: ORIGINAL CODE: 0009260

PLBP Opposition withdrawn

Free format text: ORIGINAL CODE: 0009264

PLAX Notice of opposition and request to file observation + time limit sent

Free format text: ORIGINAL CODE: EPIDOSNOBS2

26 Opposition filed

Opponent name: OTICON A/S / GN HEARING A/S

Effective date: 20190618

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI (effective 20180919); free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

PLBB Reply of patent proprietor to notice(s) of opposition received

Free format text: ORIGINAL CODE: EPIDOSNOBS3

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR (effective 20180919); free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU (effective 20190819); free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Ref country code: MC (effective 20180919); free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20190831

PLAB Opposition data, opponent's data or that of the opponent's representative modified

Free format text: ORIGINAL CODE: 0009299OPPO

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE (effective 20190819); free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

R26 Opposition filed (corrected)

Opponent name: OTICON A/S / GN HEARING A/S

Effective date: 20190618

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE (effective 20190831); free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

REG Reference to a national code

Ref country code: DE

Ref legal event code: R064

Ref document number: 602015016517

Country of ref document: DE

Ref country code: DE

Ref legal event code: R103

Ref document number: 602015016517

Country of ref document: DE

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL; payment date: 20200811; year of fee payment: 6

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country codes: DE (payment date 20200812); DK (payment date 20200814); FR (payment date 20200817); GB (payment date 20200813); year of fee payment: 6

RDAF Communication despatched that patent is revoked

Free format text: ORIGINAL CODE: EPIDOSNREV1

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: CH; payment date: 20200812; year of fee payment: 6

RDAG Patent revoked

Free format text: ORIGINAL CODE: 0009271

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: PATENT REVOKED

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: FI

Ref legal event code: MGE

27W Patent revoked

Effective date: 20200917

GBPR Gb: patent revoked under art. 102 of the ep convention designating the uk as contracting state

Effective date: 20200917

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU (effective 20150819); free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country codes: MT, CY (effective 20180919)

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK (effective 20180919); free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT