EP3930346A1 - Hearing aid comprising a device for tracking its own voice conversations

Hearing aid comprising a device for tracking its own voice conversations

Info

Publication number
EP3930346A1
EP3930346A1 (application EP20181325.0A)
Authority
EP
European Patent Office
Prior art keywords
hearing aid
voice
data
user
hearing
Prior art date
Legal status
Pending
Application number
EP20181325.0A
Other languages
German (de)
English (en)
Inventor
Karsten BONKE
Thomas Mortensen
Current Assignee
Oticon AS
Original Assignee
Oticon AS
Priority date
Filing date
Publication date
Application filed by Oticon AS
Priority to EP20181325.0A
Publication of EP3930346A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R 25/35: Hearing aids using translation techniques; H04R 25/356: Amplitude, e.g. amplitude shift or compression
    • H04R 25/55: Hearing aids using an external connection, either wireless or wired
    • H04R 2225/00: Details of deaf aids covered by H04R 25/00, not provided for in any of its subgroups
    • H04R 2225/021: Behind the ear [BTE] hearing aids; H04R 2225/0213: Constructional details of earhooks, e.g. shape, material
    • H04R 2225/0216: BTE hearing aids having a receiver in the ear mould
    • H04R 2225/31: Aspects of the use of accumulators in hearing aids, e.g. rechargeable batteries or fuel cells
    • H04R 2225/39: Aspects relating to automatic logging of sound environment parameters and the performance of the hearing aid during use, e.g. histogram logging, or of user selected programs or settings in the hearing aid, e.g. usage logging
    • H04R 2225/43: Signal processing in hearing aids to enhance the speech intelligibility
    • H04R 2225/55: Communication between hearing aids and external devices via a network for data exchange
    • H04R 25/30: Monitoring or testing of hearing aids, e.g. functioning, settings, battery power

Definitions

  • Hearing care is about restoring the ability to hear.
  • a dominant element of this restoration is to regain the ability to understand speech in various sound environments.
  • the ability to monitor, nudge, and guide the wearer to continuously challenge and improve hearing ability and social interaction is important to regain an active social lifestyle.
  • the proposed solution may track and estimate the activity level of the user and derive a score for a relative objective in a rehabilitation plan.
  • the proposed solution may be used to document treatment outcome for users frequently connected as well as users offline between consultations.
  • the hearing instruments may be configured to log the time a wearer is speaking and combine with data concerning the sound environment, such as SNR, or activity level of other identified speech sources.
  • the result may help a hearing care professional to prescribe specific targets to aid in the rehabilitation for a specific user (e.g. active participation in conversations, its frequency and/or duration).
  • the tracking of own voice activity and other talker activity can be combined with a ratio of pauses. From this ratio it can be derived if the user is engaging in a conversation rather than passively listening to for example the TV.
  • the ratio (or equivalent data) for a given time window may be stored in the instrument for periodic retrieval.
  • 'a window of variable time' is taken to mean an 'observation window' that may vary in time.
  • the data that are observed in the observation window may e.g. include voice activity detection data (e.g. own voice and other voice activity or no-activity).
  • An indication of the voice activity in the environment of a user may e.g. be provided by a ratio of the sum of time periods with voice activity and the total time period of the observation window (the total being the sum of time periods with voice activity and the sum of speech pauses).
  • Further sub-indicators may be provided by a) the ratio of the sum of time periods with own voice activity and the total time period of the observation window, and b) the ratio of the sum of time periods with other voice activity and the total time period of the observation window.
  • the pauses may be further classified as 'pauses before own voice' and 'pauses before other voice', which may additionally be used to evaluate the user's conversation participation pattern.
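As a concrete illustration of the indicators above, the following Python sketch computes the activity ratios and classifies pauses from per-frame detector flags. It is illustrative only: the frame representation, function name, and the exact pause classification rule are assumptions, not the patent's implementation.

```python
# Illustrative sketch (assumed representation): one boolean per frame of the
# observation window for own-voice and other-voice activity, respectively.

def activity_ratios(own_voice, other_voice):
    """Return the activity ratios and pause classification described above."""
    n = len(own_voice)
    assert n == len(other_voice) and n > 0
    any_voice = [ov or tv for ov, tv in zip(own_voice, other_voice)]
    ratios = {
        "voice": sum(any_voice) / n,    # voice activity vs. whole window
        "own": sum(own_voice) / n,      # sub-indicator a)
        "other": sum(other_voice) / n,  # sub-indicator b)
    }
    # Classify each pause by what follows it: 'pause before own voice'
    # or 'pause before other voice' (assumed interpretation).
    before_own = before_other = 0
    in_pause = False
    for i in range(n):
        if not any_voice[i]:
            in_pause = True
        elif in_pause:
            if own_voice[i]:
                before_own += 1
            else:
                before_other += 1
            in_pause = False
    ratios["pauses_before_own"] = before_own
    ratios["pauses_before_other"] = before_other
    return ratios

# Example: U = own voice, O = other voice, . = pause
own = [f == "U" for f in "UU..OO..UU"]
other = [f == "O" for f in "UU..OO..UU"]
print(activity_ratios(own, other))  # voice 0.6, own 0.4, other 0.2, one pause of each kind
```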
  • the ratio for one or more time windows may be transferred to a different apparatus which is capable of further processing the data and/or present the data in a user interface.
  • the apparatus or processing device may be constituted by or comprise a fitting system, a smartphone, or a remote control device for the hearing aid, etc.
  • US2006222194A1 deals with a hearing aid comprising a datalogger and with the learning from these data.
  • the hearing aid comprises an input unit, a signal processing unit, and a user interface for converting user interaction to a control signal thereby controlling a processing setting of the signal processing unit.
  • the hearing aid further comprises a memory unit comprising a control section storing a set of control parameters associated with the acoustic environment, and a datalogger section receiving data from the input unit, the signal processing unit, and the user interface.
  • the signal processing unit configures the setting according to the set of control parameters and comprises a learning controller adapted to adjust the set of control parameters according to the data in the data logging section.
  • the proposed solution enriches the result with the social activity and active participation of the wearer.
  • a hearing aid :
  • a hearing aid configured to be worn at or in an ear of a user.
  • the hearing aid comprises
  • the hearing aid may be configured to further log data concerning a sound environment, at least during said time periods of own voice activity.
  • the data concerning a sound environment may be logged with the same (or lower) frequency as the own voice activity is logged.
  • the hearing aid may comprise one or more detectors of the acoustic environment.
  • the hearing aid may be configured to receive data from one or more detectors of the acoustic environment located in other devices or systems, e.g. an external device, such as a smartphone, or a charging station or other auxiliary device in communication with the hearing aid.
  • the data concerning a sound environment may include a measure of sound quality, e.g. a signal to noise ratio (SNR).
  • the hearing aid may comprise a detector for providing a measure of sound quality of the electric input signal or a signal originating therefrom.
  • the hearing aid may comprise a detector for estimating an SNR of the at least one electric input signal, or a processed version thereof.
  • the hearing aid may comprise a level detector for estimating a current level of the at least one electric input signal or of a signal derived therefrom.
  • the data concerning a sound environment may include an activity level of other (identified) speech sources.
  • 'An activity level' of a sound source (an external or the user) may e.g. be a duration of activity in an absolute or relative time scale (e.g. in seconds or in a number of time units (absolute or arbitrary) relative to the number of time units of a total period of observation).
  • 'An activity level' may e.g. include a number of distinct events of activity (e.g. separated by a certain minimum time period) and a total duration of activity in an absolute or relative scale of the sound source in question.
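A minimal sketch of how such an 'activity level' (number of distinct events plus total duration, absolute and relative) could be derived from per-frame activity flags. The frame duration and the minimum gap separating distinct events are illustrative assumptions.

```python
FRAME_S = 0.5    # assumed frame duration in seconds
MIN_GAP_S = 2.0  # assumed minimum silence separating distinct events

def activity_level(active_frames):
    """active_frames: list of booleans, one per frame of the observation window."""
    events, duration_frames = 0, 0
    gap_frames = int(MIN_GAP_S / FRAME_S)
    silence_run = gap_frames  # start 'separated' so the first activity counts
    for a in active_frames:
        if a:
            if silence_run >= gap_frames:
                events += 1  # new distinct event after a sufficiently long gap
            silence_run = 0
            duration_frames += 1
        else:
            silence_run += 1
    total = len(active_frames)
    return {
        "events": events,
        "duration_s": duration_frames * FRAME_S,                     # absolute scale
        "duration_rel": duration_frames / total if total else 0.0,   # relative scale
    }

print(activity_level([True] * 4 + [False] * 6 + [True] * 2))  # 2 events, 3.0 s total
```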
  • the data may comprise a requested gain from a compressive amplification algorithm of the hearing aid.
  • the compressive amplification algorithm may be configured to compensate for the user's hearing impairment.
  • the compressive amplification algorithm may be configured to provide frequency and level dependent gain (amplification or attenuation) to the at least one electric input signal or to a signal derived therefrom.
  • the hearing aid may comprise a voice activity detector configured to detect whether or not, or with what probability, said at least one electric input signal, or a processed version thereof, comprises a human voice and to provide a voice control signal indicative thereof.
  • the voice activity detector may be configured to detect speech.
  • the voice activity detector may be configured to differentiate between the voice of the user wearing the hearing aid and other voices (e.g. using a level differentiation, and/or a trained algorithm, e.g. a neural network), in which case the voice activity detector may include the own voice detector.
  • the hearing aid may be configured to further log absolute or relative time periods of NO own voice activity.
  • the tracking of own voice activity can e.g. be combined with the logging of speech pauses, and/or the logging of total (absolute or relative) time elapsed in the observation window in question.
  • a ratio of time periods of own voice activity to speech pauses may be logged.
  • a ratio of time periods of voice activity to speech pauses may be logged.
  • a ratio of time periods of other voice activity (than own voice) to speech pauses may be logged. From this ratio it can be derived if the user is engaging in a conversation rather than passively listening to for example the TV.
  • the ratio for a given (observation) time window can be stored in the instrument for periodic retrieval.
  • the ratio for one or more windows can be transferred to a different apparatus (e.g. a smartphone, a similar processing device, or a fitting system) which is capable of further processing the data and/or present the data in a user interface.
  • the datalogger may be configured to log data in successive observation windows of variable time, e.g. of increasing length over time, but with a constant or decreasing number of logged data values of successive observation windows.
  • data can be logged over an extended time even with a limited storage capacity of a memory of the datalogger of the hearing aid, see e.g. FIG. 6 .
  • Data stored by the datalogger may e.g. be off-loaded during charging of a rechargeable battery of the hearing aid in a charging station, see e.g. FIG. 7.
  • the hearing aid may comprise a communication interface allowing data to be exchanged with another device or system.
  • the communication interface may be based on a cabled connection, e.g. comprising appropriate connectors, allowing easy connection (and dis-connection) of the hearing aid to/from the 'another device or system'.
  • the communication interface may be based on a wireless connection to the 'another device or system', e.g. via a network.
  • the hearing aid may comprise an output unit, and wherein the output unit comprises a number of electrodes of a cochlear implant type hearing aid or a vibrator of a bone conducting hearing aid, or a loudspeaker of an air conduction hearing aid, or a combination thereof.
  • the hearing aid may be adapted to provide a frequency dependent gain and/or a level dependent compression and/or a transposition (with or without frequency compression) of one or more frequency ranges to one or more other frequency ranges, e.g. to compensate for a hearing impairment of a user.
  • the hearing aid may comprise a signal processor for applying one or more processing algorithms to enhance the input signals and provide a processed output signal.
  • the hearing aid may comprise an output unit for providing a stimulus perceived by the user as an acoustic signal based on a processed electric signal.
  • the output unit may comprise a number of electrodes of a cochlear implant (for a CI type hearing aid) or a vibrator of a bone conducting hearing aid.
  • the output unit may comprise an output transducer.
  • the output transducer may comprise a receiver (loudspeaker) for providing the stimulus as an acoustic signal to the user (e.g. in an acoustic (air conduction based) hearing aid).
  • the output transducer may comprise a vibrator for providing the stimulus as mechanical vibration of a skull bone to the user (e.g. in a bone-attached or bone-anchored hearing aid).
  • the hearing aid may comprise an input unit for providing an electric input signal representing sound.
  • the input unit may comprise an input transducer, e.g. a microphone, for converting an input sound to an electric input signal.
  • the input unit may comprise a wireless receiver for receiving a wireless signal comprising or representing sound and for providing an electric input signal representing said sound.
  • the wireless receiver may e.g. be configured to receive an electromagnetic signal in the radio frequency range (3 kHz to 300 GHz).
  • the wireless receiver may e.g. be configured to receive an electromagnetic signal in a frequency range of light (e.g. infrared light 300 GHz to 430 THz, or visible light, e.g. 430 THz to 770 THz).
  • the hearing aid may comprise a directional system adapted to spatially filter sounds from the environment, and thereby enhance a target acoustic source among a multitude of acoustic sources in the local environment of the user wearing the hearing aid.
  • the directional system may be adapted to detect (such as adaptively detect) from which direction a particular part of the input signal originates. This can be achieved in various different ways as e.g. described in the prior art.
  • a microphone array beamformer is often used for spatially attenuating background noise sources. Many beamformer variants can be found in literature.
  • the minimum variance distortionless response (MVDR) beamformer is widely used in microphone array signal processing.
  • the MVDR beamformer keeps the signals from the target direction (also referred to as the look direction) unchanged, while attenuating sound signals from other directions maximally.
  • the generalized sidelobe canceller (GSC) structure is an equivalent representation of the MVDR beamformer offering computational and numerical advantages over a direct implementation in its original form.
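For reference, a minimal sketch of the standard textbook MVDR weight computation, w = Σ⁻¹d / (dᴴΣ⁻¹d), which keeps the look-direction signal undistorted while minimizing output power from other directions. This is the generic formulation, not an implementation from this disclosure; the steering vector and noise covariance below are toy values.

```python
import numpy as np

def mvdr_weights(noise_cov, steering):
    """noise_cov: (M, M) noise covariance matrix; steering: (M,) look-direction vector."""
    sigma_inv_d = np.linalg.solve(noise_cov, steering)  # Σ⁻¹ d
    return sigma_inv_d / (steering.conj() @ sigma_inv_d)

# Two-microphone toy example with uncorrelated noise.
d = np.array([1.0, 1.0j])  # hypothetical steering vector
R = np.eye(2)              # identity noise covariance
w = mvdr_weights(R, d)
print(w.conj() @ d)        # distortionless constraint: equals 1
```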
  • Wireless communication may be in the base band (audio frequency range, e.g. between 0 and 20 kHz).
  • communication between the hearing aid and the other device is based on some sort of modulation at frequencies above 100 kHz.
  • frequencies used to establish a communication link between the hearing aid and the other device are typically below 70 GHz, e.g. located in a range from 50 MHz to 70 GHz, e.g. above 300 MHz, e.g. in an ISM range above 300 MHz.
  • the wireless link may be based on a standardized or proprietary technology.
  • the wireless link may be based on Bluetooth technology (e.g. Bluetooth Low-Energy technology).
  • the hearing aid may be or form part of a portable (i.e. configured to be wearable) device, e.g. a device comprising a local energy source, e.g. a battery, e.g. a rechargeable battery.
  • the hearing aid may e.g. be a low weight, easily wearable, device, e.g. having a total weight less than 100 g, such as less than 20 g.
  • the hearing aid may comprise a forward or signal path between an input unit (e.g. an input transducer, such as a microphone or a microphone system and/or direct electric input (e.g. a wireless receiver)) and an output unit, e.g. an output transducer.
  • the signal processor may be located in the forward path.
  • the signal processor may be adapted to provide a frequency dependent gain according to a user's particular needs.
  • the hearing aid may comprise an analysis path comprising functional components for analyzing the input signal (e.g. determining a level, a modulation, a type of signal, an acoustic feedback estimate, etc.). Some or all signal processing of the analysis path and/or the signal path may be conducted in the frequency domain. Some or all signal processing of the analysis path and/or the signal path may be conducted in the time domain.
  • the hearing aid may be configured to operate in different modes, e.g. a normal mode and one or more specific modes, e.g. selectable by a user, or automatically selectable.
  • a mode of operation may be optimized to a specific acoustic situation or environment.
  • a mode of operation may include a low-power mode, where functionality of the hearing aid is reduced (e.g. to save power), e.g. to disable wireless communication, and/or to disable specific features of the hearing aid.
  • the hearing aid may comprise a number of detectors configured to provide status signals relating to a current physical environment of the hearing aid (e.g. the current acoustic environment), and/or to a current state of the user wearing the hearing aid, and/or to a current state or mode of operation of the hearing aid.
  • one or more detectors may form part of an external device in communication (e.g. wirelessly) with the hearing aid.
  • An external device may e.g. comprise another hearing aid, a remote control, an audio delivery device, a telephone (e.g. a smartphone), an external sensor, etc.
  • One or more of the number of detectors may operate on the full band signal (time domain).
  • One or more of the number of detectors may operate on band split signals ((time-) frequency domain), e.g. in a limited number of frequency bands.
  • the number of detectors may comprise a level detector for estimating a current level of a signal of the forward path.
  • the detector may be configured to decide whether the current level of a signal of the forward path is above or below a given (L-)threshold value.
  • the level detector operates on the full band signal (time domain).
  • the level detector operates on band split signals ((time-) frequency domain).
  • the hearing aid may comprise a voice activity detector (VAD) for estimating whether or not (or with what probability) an input signal comprises a voice signal (at a given point in time).
  • a voice signal may in the present context be taken to include a speech signal from a human being. It may also include other forms of utterances generated by the human speech system (e.g. singing).
  • the voice activity detector unit may be adapted to classify a current acoustic environment of the user as a VOICE or NO-VOICE environment. This has the advantage that time segments of the electric microphone signal comprising human utterances (e.g. speech) in the user's environment can be identified, and thus separated from time segments only (or mainly) comprising other sound sources (e.g. artificially generated noise).
  • the voice activity detector may be adapted to detect as a VOICE also the user's own voice. Alternatively, the voice activity detector may be adapted to exclude a user's own voice from the detection of a VOICE.
  • the hearing aid may comprise an own voice detector for estimating whether or not (or with what probability) a given input sound (e.g. a voice, e.g. speech) originates from the voice of the user of the system.
  • a microphone system of the hearing aid may be adapted to be able to differentiate between a user's own voice and another person's voice and possibly from NON-voice sounds.
  • the number of detectors may comprise a movement detector, e.g. an acceleration sensor.
  • the movement detector may be configured to detect movement of the user's facial muscles and/or bones, e.g. due to speech or chewing (e.g. jaw movement) and to provide a detector signal indicative thereof.
  • the hearing aid may comprise a classification unit configured to classify the current situation based on input signals from (at least some of) the detectors, and possibly other inputs as well.
  • a 'current situation' may be taken to be defined by one or more of these detector inputs.
  • the hearing aid may comprise a multi-level data storage system to distil conversation history across current, recent, and past conversations.
  • the data storage scheme provides that more data are stored for the current conversation and made available to other classifiers, while less data are stored for conversations that are no longer active.
  • data are aggregated and stored in memory bins representing shorter or longer time intervals.
  • the data may be represented by a single numeric counter or a ratio value, in place of the time-domain classifier result that is logged in an active conversation.
  • the hearing aid may be designed in a way that the available data storage and availability of means for data transport to other apparatus determine the degree of data aggregation. This dynamic aggregation may allow the hearing aid to store conversation tracking data for an arbitrary time period without sacrificing the detailed time-domain data for a specific number of conversation minutes, see e.g. FIG. 6 .
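A possible reading of this multi-level scheme, sketched in Python: full-resolution frames for the active conversation, one counter pair per recent conversation, and a single lifetime aggregate for older history. All sizes, names, and level boundaries are assumptions for illustration, not the patented design.

```python
from collections import deque

class ConversationLog:
    """Three storage levels: raw frames for the active conversation, (own, total)
    frame counts per recent conversation, and lifetime totals for older history."""

    def __init__(self, max_current_frames=6000, max_recent=50):
        self.current = deque(maxlen=max_current_frames)  # raw boolean frames
        self.recent = deque(maxlen=max_recent)           # (own, total) per conversation
        self.past = [0, 0]                               # [own, total] coarse aggregate

    def log_frame(self, own_voice_active):
        self.current.append(bool(own_voice_active))

    def end_conversation(self):
        # Conversation no longer active: collapse the detailed frames to counters.
        if self.current:
            if len(self.recent) == self.recent.maxlen:
                own, tot = self.recent.popleft()  # age the oldest into 'past'
                self.past[0] += own
                self.past[1] += tot
            self.recent.append((sum(self.current), len(self.current)))
            self.current.clear()

    def own_voice_ratio(self):
        own = self.past[0] + sum(o for o, _ in self.recent) + sum(self.current)
        tot = self.past[1] + sum(t for _, t in self.recent) + len(self.current)
        return own / tot if tot else 0.0
```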
  • the classification unit may be based on or comprise a neural network, e.g. a trained neural network.
  • the hearing aid may further comprise other relevant functionality for the application in question, e.g. compression, noise reduction, feedback control, etc.
  • the hearing aid may comprise a hearing instrument, e.g. a hearing instrument adapted for being located at the ear or fully or partially in the ear canal of a user, e.g. a headset, an earphone, an ear protection device or a combination thereof.
  • Use may be provided in a system comprising audio distribution.
  • Use may be provided in a system comprising one or more hearing aids (e.g. hearing instruments), headsets, ear phones, active ear protection systems, etc.
  • a method of operating a hearing aid :
  • a method of operating a hearing aid configured to be worn at or in an ear of a user is provided.
  • the method may comprise
  • the method may apply progressive abstraction of data over a period of time.
  • the hearing aid preserves detailed conversation data logging for a number of minutes, after which the data are aggregated into more abstract usable counters and ratios. This allows the hearing aid to track a high degree of data resolution if the user is connected to a connected apparatus, and still allows the hearing aid to maintain relevant data between visits to a clinic if the user is offline the entire time, see e.g. FIG. 6.
  • a method of extracting information about a hearing aid user's conversations :
  • a method of extracting information about a hearing aid user's social engagement in conversations is provided.
  • the method comprises
  • the user's engagement in conversations may e.g. be estimated by identifying a conversation in the combined data from an own voice detector and a general voice detector (or a dedicated 'not-own voice' detector).
  • a conversation is detected if a user's voice and another voice are detected, one voice following the other, without longer speech pauses between them (i.e. pauses are not larger than a threshold value Δt_PAUSE, e.g. 5-10 seconds), see e.g. FIG. 3, 4.
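A minimal sketch of this detection rule. The segment representation, function name, and the chosen threshold (picked within the 5-10 s range stated above) are assumptions for illustration.

```python
DT_PAUSE = 7.0  # assumed threshold within the 5-10 s range mentioned above

def detect_conversation(segments):
    """segments: chronological list of (speaker, start_s, end_s), where speaker
    is 'own' or 'other'. Returns True if any own<->other alternation occurs
    with an inter-segment pause of at most DT_PAUSE seconds."""
    for (spk1, _, end1), (spk2, start2, _) in zip(segments, segments[1:]):
        if spk1 != spk2 and (start2 - end1) <= DT_PAUSE:
            return True
    return False

# User speaks, other replies 2 s later: conversation.
print(detect_conversation([("own", 0.0, 4.0), ("other", 6.0, 9.0)]))          # True
# Other voice only (e.g. TV): no conversation.
print(detect_conversation([("other", 0.0, 60.0), ("other", 70.0, 120.0)]))    # False
```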
  • a computer readable medium or data carrier :
  • a tangible computer-readable medium storing a computer program comprising program code means (instructions) for causing a data processing system (a computer) to perform (carry out) at least some (such as a majority or all) of the (steps of the) method described above, in the 'detailed description of embodiments' and in the claims, when said computer program is executed on the data processing system is furthermore provided by the present application.
  • Such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
  • Other storage media include storage in DNA (e.g. in synthesized DNA strands). Combinations of the above should also be included within the scope of computer-readable media.
  • the computer program can also be transmitted via a transmission medium such as a wired or wireless link or a network, e.g. the Internet, and loaded into a data processing system for being executed at a location different from that of the tangible medium.
  • a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out (steps of) the method described above, in the 'detailed description of embodiments' and in the claims is furthermore provided by the present application.
  • a data processing system :
  • a data processing system comprising a processor and program code means for causing the processor to perform at least some (such as a majority or all) of the steps of the method described above, in the 'detailed description of embodiments' and in the claims is furthermore provided by the present application.
  • a hearing system :
  • a hearing system comprising a hearing aid as described above, in the 'detailed description of embodiments', and in the claims, AND an auxiliary device is moreover provided.
  • the hearing system may be adapted to establish a communication link between the hearing aid and the auxiliary device to provide that information (e.g. control and status signals, possibly audio signals) can be exchanged or forwarded from one to the other.
  • the auxiliary device may comprise a remote control, a smartphone, or other portable or wearable electronic device, such as a smartwatch or the like.
  • the auxiliary device may be constituted by or comprise a programming device (e.g. running a fitting software for adapting processing of the hearing aid to the needs, e.g. a hearing impairment, of the user of the hearing aid).
  • the auxiliary device may be constituted by or comprise a remote control for controlling functionality and operation of the hearing aid(s).
  • the function of a remote control may be implemented in a smartphone, the smartphone possibly running an APP allowing the functionality of the audio processing device to be controlled via the smartphone (the hearing aid(s) comprising an appropriate wireless interface to the smartphone, e.g. based on Bluetooth or some other standardized or proprietary scheme).
  • the auxiliary device may be constituted by or comprise another hearing aid.
  • the hearing system may comprise two hearing aids adapted to implement a binaural hearing system, e.g. a binaural hearing aid system.
  • the auxiliary device may e.g. be or comprise a programming device, e.g. implementing a fitting system of the hearing aid.
  • the auxiliary device may comprise a charging station comprising a memory (e.g. acting as an intermediate storage medium, e.g. of 'day-to-day data' from the datalogger, cf. e.g. FIG. 7 ).
  • the auxiliary device may comprise a communication interface allowing the (wired or wireless) communication link to the hearing aid to be established.
  • the communication interface(s) may comprise appropriate antenna and transceiver circuitry to implement a wireless link, e.g. based on Bluetooth or similar technology.
  • the auxiliary device may comprise a communication interface allowing a connection to a server on a network, e.g. the Internet ('in the cloud').
  • data from the datalogger received from the hearing aid may be relayed from the auxiliary device (e.g. a cellphone or a charging station for the hearing aid) to a server accessible for analysis of the data, e.g. by a fitting system for the hearing aid.
  • the hearing system may be configured to download data from said datalogger to said auxiliary device.
  • the auxiliary device may comprise a memory for storing data from the datalogger of the hearing aid.
  • the auxiliary device may comprise an analyzing unit for analyzing data stored in the datalogger of the hearing aid and/or stored in the memory of the auxiliary device originating from the datalogger of the hearing aid.
  • Data in the memory may originate from different time periods, e.g. time periods that together span more than one week, such as more than one month, such as more than 6 months.
  • the auxiliary device may be configured to extract changes over time of said data originating from the datalogger of the hearing aid. The changes over time may relate to the user's vocal activity, e.g. in connection with other persons' vocal activity (e.g. related to conversations vs. one-way speech).
  • the auxiliary device may comprise a user interface, allowing a user to interact with the auxiliary device, e.g. via a touch-sensitive display and/or a keyboard.
  • the user interface may be configured to allow a user to display results of an analysis of the data from the datalogger.
  • the user interface may e.g. allow a user (e.g. a hearing care professional) to access data from the datalogger originating from previous observation periods, thereby allowing a development or trend in user behavior to be extracted from the data.
  • a non-transitory application termed an APP :
  • the APP comprises executable instructions configured to be executed on an auxiliary device to implement a user interface for a hearing aid or a hearing system described above in the 'detailed description of embodiments', and in the claims.
  • the APP may be configured to run on a cellular phone, e.g. a smartphone, or on another portable (or stationary) electronic device allowing communication with said hearing aid or said hearing system (e.g. a charging station).
  • the APP may implement a Datalogging APP, from which a user may configure the datalogger.
  • the user may e.g. select the data that should be logged, e.g. own-voice data, other voice data, internal and external sensor data (e.g. from sensors or detectors related to an acoustic environment, and/or to a state of the user, e.g. a mental state).
  • the sensors or detectors that may be selected for logging together with the voice activity data may include a movement sensor, a sound quality detector, a detector of body signals, e.g. brainwaves (e.g. EEG), a PPG sensor, etc.
  • the APP may further allow the user to off-load logged data to another device or system, e.g. a smartphone or a charging station.
  • the APP may further allow the user to select a strategy or scheme for off-loading logged data to another device or system (e.g. among a number of predefined schemes).
  • a hearing aid refers to a device, which is adapted to improve, augment and/or protect the hearing capability of a user by receiving acoustic signals from the user's surroundings, generating corresponding audio signals, possibly modifying the audio signals and providing the possibly modified audio signals as audible signals to at least one of the user's ears.
  • Such audible signals may e.g. be provided in the form of acoustic signals radiated into the user's outer ears, acoustic signals transferred as mechanical vibrations to the user's inner ears through the bone structure of the user's head and/or through parts of the middle ear as well as electric signals transferred directly or indirectly to the cochlear nerve of the user.
  • the hearing aid may be configured to be worn in any known way, e.g. as a unit arranged behind the ear with a tube leading radiated acoustic signals into the ear canal or with an output transducer, e.g. a loudspeaker, arranged close to or in the ear canal, as a unit entirely or partly arranged in the pinna and/or in the ear canal, as a unit, e.g. a vibrator, attached to a fixture implanted into the skull bone, as an attachable, or entirely or partly implanted, unit, etc.
  • the hearing aid may comprise a single unit or several units communicating (e.g. acoustically, electrically or optically) with each other.
  • the loudspeaker may be arranged in a housing together with other components of the hearing aid, or it may be an external unit in itself (possibly in combination with a flexible guiding element, e.g. a dome-like element).
  • a hearing aid comprises an input transducer for receiving an acoustic signal from a user's surroundings and providing a corresponding input audio signal and/or a receiver for electronically (i.e. wired or wirelessly) receiving an input audio signal, a (typically configurable) signal processing circuit (e.g. a signal processor, e.g. comprising a configurable (programmable) processor, e.g. a digital signal processor) for processing the input audio signal and an output unit for providing an audible signal to the user in dependence on the processed audio signal.
  • the signal processor may be adapted to process the input signal in the time domain or in a number of frequency bands.
  • an amplifier and/or compressor may constitute the signal processing circuit.
  • the signal processing circuit typically comprises one or more (integrated or separate) memory elements for executing programs and/or for storing parameters used (or potentially used) in the processing and/or for storing information relevant for the function of the hearing aid and/or for storing information (e.g. processed information, e.g. provided by the signal processing circuit), e.g. for use in connection with an interface to a user and/or an interface to a programming device.
  • the output unit may comprise an output transducer, such as e.g. a loudspeaker for providing an air-borne acoustic signal or a vibrator for providing a structure-borne or liquid-borne acoustic signal.
  • the output unit may comprise one or more output electrodes for providing electric signals (e.g. to a multi-electrode array) for electrically stimulating the cochlear nerve (cochlear implant type hearing aid).
  • the vibrator may be adapted to provide a structure-borne acoustic signal transcutaneously or percutaneously to the skull bone.
  • the vibrator may be implanted in the middle ear and/or in the inner ear.
  • the vibrator may be adapted to provide a structure-borne acoustic signal to a middle-ear bone and/or to the cochlea.
  • the vibrator may be adapted to provide a liquid-borne acoustic signal to the cochlear liquid, e.g. through the oval window.
  • the output electrodes may be implanted in the cochlea or on the inside of the skull bone and may be adapted to provide the electric signals to the hair cells of the cochlea, to one or more hearing nerves, to the auditory brainstem, to the auditory midbrain, to the auditory cortex and/or to other parts of the cerebral cortex.
  • a hearing aid may be adapted to a particular user's needs, e.g. a hearing impairment.
  • a configurable signal processing circuit of the hearing aid may be adapted to apply a frequency and level dependent compressive amplification of an input signal.
  • a customized frequency and level dependent gain (amplification or compression) may be determined in a fitting process by a fitting system based on a user's hearing data, e.g. an audiogram, using a fitting rationale (e.g. adapted to speech).
  • the frequency and level dependent gain may e.g. be embodied in processing parameters, e.g. uploaded to the hearing aid via an interface to a programming device (fitting system), and used by a processing algorithm executed by the configurable signal processing circuit of the hearing aid.
  • a 'hearing system' refers to a system comprising one or two hearing aids.
  • a 'binaural hearing system' refers to a system comprising two hearing aids and being adapted to cooperatively provide audible signals to both of the user's ears.
  • Hearing systems or binaural hearing systems may further comprise one or more 'auxiliary devices', which communicate with the hearing aid(s) and affect and/or benefit from the function of the hearing aid(s).
  • Such auxiliary devices may include at least one of a remote control, a remote microphone, an audio gateway device, an entertainment device, e.g. a music player, a wireless communication device, e.g. a mobile phone (such as a smartphone) or a tablet, or another device.
  • Hearing aids, hearing systems or binaural hearing systems may e.g. be used for compensating for a hearing-impaired person's loss of hearing capability, augmenting or protecting a normal-hearing person's hearing capability and/or conveying electronic audio signals to a person.
  • Hearing aids or hearing systems may e.g. form part of or interact with public-address systems, active ear protection systems, handsfree telephone systems, car audio systems, entertainment (e.g. TV, music playing or karaoke) systems, teleconferencing systems, classroom amplification systems, etc.
  • Embodiments of the disclosure may e.g. be useful in applications such as hearing aids for compensation for a user's hearing impairment.
  • the electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated circuits (e.g. application specific), microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB) (e.g. flexible PCBs), and other suitable hardware configured to perform the various functionality described throughout this disclosure, e.g. sensors, e.g. for sensing and/or registering physical properties of the environment, the device, the user, etc.
  • 'Computer program' shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • the present application relates to the field of hearing aids, in particular to data logging.
  • FIG. 1 shows a first embodiment of a hearing aid comprising a datalogger according to the present disclosure.
  • FIG. 1 schematically illustrates a hearing aid (HA) configured to be worn at or in an ear of a user (or for being partially or fully implanted in the head at an ear of the user).
  • the hearing aid (HA) comprises an input unit (IU).
  • the input unit may e.g. comprise one or more input transducers, e.g. one or more microphones, configured to pick up sound (Acoustic input) from the environment of the hearing aid and to provide at least one electric input signal (IN) representing the sound.
  • the input unit (IU) may comprise an analogue to digital converter for converting an analogue signal to a digitized signal.
  • the input unit (IU) may further comprise an analysis filter bank for converting a (e.g. digitized) time domain signal to a time-frequency domain signal (e.g. represented as a multitude of frequency sub-band signals, each representing a frequency sub-range of the frequency range of operation of the hearing aid).
  • the hearing aid (HA) further comprises an own voice detector (OVD) configured to detect whether or not, or with what probability, the at least one electric input signal (IN), or a processed version thereof, comprises a voice from the user of the hearing aid, and to provide a user voice control signal (UVC) indicative thereof.
  • the hearing aid (HA) further comprises a datalogger (DLOG) for logging over time data related to the use of the hearing aid, including absolute or relative time periods of own voice activity in dependence of the own voice control signal.
  • the hearing aid may be configured to log parameters of the current acoustic environment, including the own voice control signal, over time according to a predefined or adaptively determined scheme.
  • the hearing aid may be configured to log parameters of the current acoustic environment with a specific log frequency, e.g. with a frequency larger than 0.1 Hz.
  • the logged data may be (temporarily) stored in a memory of the datalogger.
  • the hearing aid (HA) further comprises a processor (PRO) for applying one or more processing algorithms to the at least one electric input signal (IN).
  • the one or more processing algorithms may include one or more of a compressive amplification algorithm configured to compensate for a hearing impairment of the user, a noise reduction algorithm, a feedback control algorithm, a directional beamforming algorithm, etc.
  • the processor (PRO) provides a processed signal (OUT) representing sound (e.g. the sound picked up by the input unit (IU), and/or sound received from another device), which is fed to an output unit (OU).
  • the output unit is configured to provide stimuli perceivable as sound to the user based on the processed signal (OUT).
  • the output unit (OU) may comprise an output transducer, e.g. a loudspeaker for providing air-conducted sound, or a vibrator for providing bone-conducted sound.
  • the output unit (OU) may comprise a multi-electrode array for directly stimulating the cochlear nerve of an ear of the user.
  • the output unit may further comprise a synthesis filter bank in case the processed output signal (OUT) comprises a multitude of frequency sub-band signals (a time-frequency domain signal), and/or a digital to analogue converter for converting a digitized signal to an analogue signal according to the specific application.
  • the signal path from the input unit (IU) to the output unit (OU) via the processor (PRO) defines a forward path of the hearing aid (for processing the input sound to an output signal perceivable as sound to the user).
  • the hearing aid further comprises a communication interface (IF) allowing data to be exchanged with another device or system.
  • the communication interface may be based on near-field (e.g. inductive) communication or on far-field communication (e.g. based on Bluetooth or similar technologies).
  • FIG. 2 shows a second embodiment of a hearing aid comprising a datalogger according to the present disclosure.
  • the embodiment of a hearing aid (HA) in FIG. 2 comprises the same elements as the embodiment described in connection with FIG. 1 .
  • the embodiment of FIG. 2 comprises further detectors (DET) including e.g. an SNR estimator and/or a level estimator to monitor the acoustic environment.
  • the embodiment of FIG. 2 comprises separate own voice (OVD) and voice activity detectors (VAD) providing respective indicators OVC and VAC regarding the presence of the user's voice ('own voice') and other voices, respectively. Other voices may include or exclude the user's voice as considered practical in the specific application in question.
  • the hearing aid may thereby be configured to log the voice activity (e.g. a level of activity) of the user as well as of other persons in the environment of the user.
  • the hearing aid (e.g. the detector unit (DET)) may comprise an estimator of signal quality, e.g. of SNR, of the at least one electric input signal (IN) or of a signal derived therefrom.
  • the hearing aid may comprise an estimator of an ambient noise level, which may be estimated using the level detector and available voice activity detector(s), e.g. by making a noise estimate during speech pauses as determined by the voice indicator(s) (OVC, VAC).
  • a crude SNR may then be estimated as Level(voice)/Level(noise).
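A hedged sketch of such a crude SNR estimator: smooth the input level separately during voice activity and during speech pauses (as flagged by the voice indicators), then form the ratio. The smoothing constant, class names, and dB conversion are illustrative assumptions.

```python
import math

ALPHA = 0.95  # assumed exponential smoothing factor

class CrudeSnrEstimator:
    def __init__(self):
        self.level_voice = 1e-8  # smoothed signal level during voice activity
        self.level_noise = 1e-8  # smoothed level during speech pauses

    def update(self, frame_level, voice_active):
        """frame_level: e.g. RMS of the current input frame.
        voice_active: combined voice indicator (OVC/VAC) for the frame."""
        if voice_active:
            self.level_voice = ALPHA * self.level_voice + (1 - ALPHA) * frame_level
        else:
            self.level_noise = ALPHA * self.level_noise + (1 - ALPHA) * frame_level

    def snr_db(self):
        # Crude SNR = Level(voice) / Level(noise), expressed in dB.
        return 20 * math.log10(self.level_voice / self.level_noise)
```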
  • the hearing aid may be configured to log the conditions for engaging in a conversation.
  • the hearing aid (HA) may be configured to log data representing a currently requested gain from a compressive amplification algorithm of the hearing aid (cf. signal GRQ from the processor (PRO) to the datalogger (DLOG) in FIG. 2 ).
  • the compressive amplification algorithm may be configured to provide frequency and level dependent gain (amplification or attenuation) to the at least one electric input signal or to a signal derived therefrom.
  • the requested gain, or changes to the requested gain, reflect properties of the current acoustic environment of the user.
  • the hearing aid may be configured to further log absolute or relative time periods of NO own voice activity.
  • the datalogger comprises or interfaces to a timing unit (cf. unit TIME in FIG. 2 ) providing an absolute time or a relative time elapsed, e.g. since the last power up of the hearing aid (the latter may be relatively easily determined by an appropriate counter and knowledge of the relevant clock frequency of the hearing device).
  • the logged data for a given time window (e.g. from power on of the hearing aid to power off, e.g. corresponding to a single day of normal operation) or for several time windows, e.g. corresponding to a larger period of time, e.g. a week or a month, or the like, can be stored in a memory of the hearing aid.
  • the logged data (DATA) can e.g. - via the communication interface (IF) - be transferred to a different apparatus (e.g. a smartphone, a similar processing device, or a fitting system) which is capable of analyzing, possibly further processing, and/or presenting the data in a user interface.
  • the hearing aid may e.g. be configured to off-load logged data to a device or server (e.g. in the cloud) according to a specific or adaptive scheme, e.g. in dependence of a current amount of logged data (or rest-capacity of a memory), or of a measure of a time elapsed.
  • An absolute timing (e.g. a time of day) may e.g. be obtained from specific timing-circuitry, e.g. included in the hearing aid, e.g. in communication with a time standard (e.g. the DCF77 in Frankfurt), or from another device (e.g. a smartphone or similar device, e.g. a watch) or from a network, e.g. including from a server.
  • The logging of data related to the user's (active) participation in conversations is illustrated in FIG. 3 and 4.
  • FIG. 3 shows a first time sequence reflecting a conversation of the user of the hearing aid with another person as detected by an own voice detector and a voice activity detector.
  • FIG. 3 shows values of different voice indicators (here control signals UVC (representing the user's voice) and OVC (representing other voice(s))) versus time (Time) for a time segment of an electric input signal of the hearing aid.
  • FIG. 3 shows an output of a voice activity detector that is capable of differentiating a user's voice from other voices in an environment of the user wearing the hearing aid.
  • voice activity is indicated by the respective control signal (UVC or OVC) being 1 or 0 (activity could also or alternatively be indicated by a speech presence probability (SPP) being above or below a threshold, respectively).
  • time periods of user voice and other persons' voice are indicated by different filling.
  • An analysis of the combination of indicators (UVC and OVC, respectively) of the presence or absence of user voice and other persons' voice may reveal a possible conversation with participation of the user. A conversation involving the user may be identified by a sequential (alternating) occurrence of user voice (UVC) and other voice (OVC) indicators over a time period.
  • a criterion regarding the distance Δt_User-Other in time between the user voice indicator (UVC) shifting from active to inactive and the other person's voice indicator (OVC) shifting from inactive to active (or vice versa) may be applied.
  • In FIG. 3, Δt_User-Other = t_o,1 - t_u,2 and t_o,2 - t_u,3, respectively.
  • Such a criterion may e.g. be Δt_User-Other ≤ 2 s.
  • a slight overlap may be accepted, and a further criterion may e.g. be Δt_User-Other ≥ -2 s (thereby accepting a small period of 'double-talk').
  • a further criterion regarding the duration of each single period of active voice of the user (and/or the other person(s)) may be imposed, e.g. a minimum duration (see the sketch below).
  • the minimum duration may e.g. be 5 s.
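Taken together, the gap and duration criteria above could be checked as follows. This is an illustrative sketch; the segment representation, names, and the combination of criteria are assumptions.

```python
DT_MAX_S = 2.0       # max pause between alternating turns (Δt_User-Other ≤ 2 s)
DT_OVERLAP_S = -2.0  # max accepted 'double-talk' overlap (Δt_User-Other ≥ -2 s)
MIN_DURATION_S = 5.0 # assumed minimum duration of each active-voice period

def is_valid_turn(seg_a, seg_b):
    """seg_a, seg_b: (start_s, end_s) of two consecutive voice segments from
    different talkers, seg_a first. Returns True if the pair satisfies the
    gap and minimum-duration criteria described above."""
    dt = seg_b[0] - seg_a[1]  # Δt_User-Other
    long_enough = (seg_a[1] - seg_a[0] >= MIN_DURATION_S and
                   seg_b[1] - seg_b[0] >= MIN_DURATION_S)
    return DT_OVERLAP_S <= dt <= DT_MAX_S and long_enough

# User speaks 0-6 s, other replies 7-13 s: 1 s gap, both segments >= 5 s.
print(is_valid_turn((0.0, 6.0), (7.0, 13.0)))  # True
```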
  • FIG. 4 shows a second time sequence reflecting a varying acoustic environment of the user of the hearing aid, including sub-sequences reflecting a varying degree of speech-participation by the user.
  • FIG. 4 schematically illustrates a time window with time dependent values of indicators of the user's voice (UVC) and other persons' voice (VAC) (an 'active indication' of the respective UVC and VAC indicators is shown by different fillings, as in FIG. 3, bottom).
  • the time window comprises two time periods that indicate a user in conversation with another person, two time periods that indicate silence (or no significant voice activity), and one time period of another person's voice (without user participation, e.g. reflecting another person talking without the user replying, or passive listening, e.g. to the TV).
  • the time window of FIG. 4 has a range from t_1 to t_6, i.e. spans a time period of duration Δt_w = t_6 - t_1.
  • the time window of FIG. 4 comprises in consecutive order: (a 1st period of) 'conversation', (a 1st period of) 'silence', (a 1st period of) 'one way speech', (a 2nd period of) 'silence', and (a 2nd period of) 'conversation'.
  • the individual time periods of each acoustic event may e.g. be estimated based on the logged data, either in the hearing aid or in another device or system to which the data are transferred.
  • the data logged over time (cf. time windows as illustrated in FIG. 3, 4, in practice comprising more acoustic events and representing longer time periods, e.g. days or weeks) and their subsequent analysis may allow extraction of information regarding the user's (voiced) social activity, e.g. in dependence of the acoustic environment (noisy environments may result in decreased activity), or in dependence of the time of day (a decrease with time of day, or with time from power-on of the hearing aid, e.g. reflecting some sort of cognitive fatigue).
  • the analysis may result in changes being made to the processing of the hearing aid (e.g. increased noise reduction and/or more directionality in noisy environments).
  • the logged data may e.g. be used to extract information about the complexity (and length) of conversations engaged in by the user and in particular to changes in such parameters.
  • the repeated logging over time of own voice activity, other voice activity, input signal level (e.g. low, medium, high), noise level and/or signal-to-noise ratio may e.g. allow such information to be extracted.
  • the logged data may e.g. be up-loaded (off-loaded) to another device or server with a predefined frequency, e.g. every 5 minutes, every hour, or once a day (e.g. as part of a power-off procedure).
  • the hearing aid may be configured to take specific measures in case the intended (planned) off-loading of the logged parameters (to empty the memory and make room for new data) cannot be performed, e.g. due to lack of a communication link, lack of power of the hearing aid, lack of availability of the receiving device or system, etc.
  • Such specific measures may be to minimize the amount of data (and thus be able to cover a longer time window) by averaging values of the parameters (P_1, ..., P_Q) over time.
  • the parameters may be averaged over different time periods (e.g. so that voice detection data, in particular own voice detection data, are prioritized over other parameters, e.g. level or SNR, which may be assumed to vary more slowly than the dynamic events of a conversation), as in the sketch below.
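A minimal sketch of such prioritized averaging. All parameter names and averaging factors are assumptions: slowly varying parameters (level, SNR) are averaged over longer periods than the voice-detection data, reducing the memory footprint while preserving conversational dynamics.

```python
def downsample(values, factor):
    """Average consecutive groups of `factor` values."""
    return [sum(values[i:i + factor]) / len(values[i:i + factor])
            for i in range(0, len(values), factor)]

def compact_log(log):
    """log: dict mapping parameter name -> list of logged values.
    Voice detection keeps finer resolution than level/SNR (assumed factors)."""
    factors = {"own_voice": 2, "other_voice": 2, "level": 4, "snr": 4}
    return {name: downsample(vals, factors.get(name, 4))
            for name, vals in log.items()}

log = {"own_voice": [1, 1, 0, 0, 1, 0, 0, 0],
       "snr": [10, 12, 11, 9, 8, 7, 9, 10]}
print(compact_log(log))  # own_voice halved in length, snr quartered
```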
  • FIG. 5 shows an embodiment of a hearing system comprising a hearing aid and a programming device according to the present disclosure.
  • the hearing aid (HA) is in communication with a programming device (PD, e.g. a fitting system or other processing device, e.g. a smartphone).
  • the communication is e.g. via a direct link (LINK) or via a network.
  • the programming device (PD) comprises a communication interface (IF) allowing it to establish a communication link to the hearing aid and to receive data from, and transmit data to, the hearing aid.
  • the programming device (PD) may e.g. comprise a processing unit (COMP), a memory (MEM) and an analyser (ANA).
  • the processing unit may comprise a digital signal processor configured to run fitting software of the hearing aid, e.g. to adapt processing parameters of the processor (PRO) of the hearing aid to the needs of a particular user (cf. the double-arrowed line between the processor (PRO) and the communication interface (IF) of the hearing aid (HD)).
  • the programming device (PD) further comprises a user interface coupled to the processing unit (COMP), the memory (MEM) and the analyser (ANA).
  • the user interface comprises a visual display (DISP) and a keyboard (KEYB), allowing data to be displayed (e.g. the logged data) and user inputs to be entered.
  • the programming device may (via the memory) e.g. have access to logged data from several time windows, e.g. representing observations over a time span of weeks or months.
  • the programming device may have access to corresponding data from the available detectors in a time series spanning the mentioned period of weeks or months, e.g. including voice activities of the user and of other persons in the environment of the user, in a time resolution that allows changes in the user's social vocal activity to be identified.
  • a schematic comparison of logged data for a particular user for two different time periods is shown.
  • a development in the user's active participation in conversations is (schematically) indicated, from less in time period TP#1 to more in time period TP#2. This may be a result of changed parameter settings of the hearing aid (or of the fitting of another (improved) hearing aid model) between time periods TP#1 and TP#2, or it may be the result of a deliberate effort of the user to be more active (or both).
  • the results of the analysis may be inputs to a discussion with the user about his or her satisfaction with the hearing aid, and/or to the changing of parameter settings, fitting of a new hearing aid with improved features, etc.
  • Important learnings from the data are possible changes (over time) in the length and complexity of the user's conversations with other people, which can be taken as an indication of improved social engagement (decreased self-isolation) (cf. the illustrative sketch below).
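By way of a hedged illustration of how logged voice indicators (cf. FIG. 4) might be condensed into the participation measures compared for TP#1 and TP#2, consider the following sketch; the window length, the labels, and the function names are assumptions:

```python
# Sketch only: classify logged own-voice (uvc) and other-voice (vac)
# indicator streams into the acoustic events of FIG. 4 and derive a simple
# per-period participation measure (cf. TP#1 vs TP#2 in FIG. 5).
def classify_windows(uvc, vac, win=50):
    """Label consecutive windows of two equal-length boolean sequences:
    both voices active -> 'conversation'; one voice -> 'one way speech';
    neither -> 'silence'."""
    labels = []
    for i in range(0, len(uvc), win):
        u, v = any(uvc[i:i + win]), any(vac[i:i + win])
        if u and v:
            labels.append("conversation")
        elif u or v:
            labels.append("one way speech")  # only one party talking
        else:
            labels.append("silence")
    return labels

def participation(labels):
    """Fraction of non-silent windows in which the user takes active part;
    an increase from one time period to the next may indicate improved
    social engagement (decreased self-isolation)."""
    voiced = [lb for lb in labels if lb != "silence"]
    return voiced.count("conversation") / len(voiced) if voiced else 0.0
```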
  • FIG. 6 schematically illustrates an example of data aggregation according to the present disclosure.
  • FIG. 6 shows values of averaged parameters ('Parameters averaged over Δt', normalized scale), e.g. voice activity detection, over time ('Time [s]').
  • Each 'vertical box' represents a data container (DataC).
  • each data container (DataC) holds a value (e.g. an average value) of one or more 'conversation parameters' intended to be logged by the hearing aid (or by an external device connected to the hearing aid).
  • a conversation parameter may e.g. be a ratio of time periods with voice activity (e.g. own voice activity) to time periods of speech pauses.
  • FIG. 6 shows a multitude of observation windows of variable duration in time, here t1, t2, ..., tn, tn+1, ....
  • Each data container (DataC) of a given observation window tn has a common width Δtn representing the time range that the data of that container represent, e.g. a single value sampled in the time range Δtn or an average of values sampled in the time range Δtn.
  • Δtn, representing the time range of the data containers of observation window n, is indicated to be smaller than or equal to A, B, N, and INF for observation windows t1, t2, tn, and tn+1, respectively. It may be assumed that A ≤ B ≤ N ≤ INF.
  • the observation windows may e.g. be of increasing duration in time (t1, t2, ...).
  • the duration in time may e.g. increase with increasing n, e.g. for n larger than a first threshold value nth1.
  • the duration in time of the observation windows may be different for different hearing aid models or styles (e.g. dependent on processor clock frequency, memory, processing algorithms, etc.).
  • the duration in time of the observation windows may e.g. vary from t1 being of the order of milliseconds to durations of the order of minutes or larger.
  • Each observation window (t1, t2, ...) contains a number NDC,n of data containers (DataC). All observation windows (t1, t2, ...) may contain the same number NDC of data containers (DataC).
  • the observation windows may comprise different numbers NDC,n of data containers, e.g. decreasing with increasing n, e.g. for n larger than a second threshold value nth2.
  • the first and second threshold values of n (nth1, nth2) may be equal or different.
  • each data container (DataC, irrespective of its width in time Δtn) occupies the same space in the memory (because it holds the same number of data values).
  • the storage frequency, i.e. the rate at which successive data containers (DataC) are stored, may be reduced from the first to the second observation window (e.g. by a factor of 5); the storage frequency is further reduced in the third observation window (e.g. by another factor of 5), etc.
  • the reduction of the storage frequency can be repeated an arbitrary number of times.
  • the reduction of the storage frequency can be terminated after a number of observation windows, after which the storage frequency is kept constant.
  • the strategy for successively reducing the storage frequency can be controlled by a storage controller, e.g. in dependence of one or more of a memory size, a battery status of the hearing aid, an estimated time to the next possible off-loading of logged data, etc. (an illustrative sketch of such an aggregation scheme is given after the off-load conditions below).
  • the logged data may e.g. be off-loaded to an external device (e.g. via an APP or directly, e.g. automatically, when the hearing aid is connected to the external device), e.g. to a memory of a portable device, e.g. a smartphone, or to a fitting system of the hearing aid.
  • a reason for applying such a storage strategy is that it may be difficult to predict the time between data off-loads.
  • a successful data off-load may be dependent on connectivity conditions at a given time (e.g. on whether the data receiver (e.g. a smartphone or a fitting system) is within reach of the hearing aid).
  • a successful data off-load may be dependent on the hearing aid having sufficient power to establish a link to the receiving device or system, etc.
  • a successful data off-load may be dependent on the receiving device or system being ready to receive data from the hearing aid (there may be other tasks that have higher priorities than the reception of logged data from the hearing aid).
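The data-aggregation scheme of FIG. 6 might, under stated assumptions (a fixed number of containers per window, a constant reduction factor, termination after a few windows), be sketched as follows; the class layout and all numeric defaults are illustrative, not from the disclosure:

```python
# Illustrative sketch of FIG. 6: data containers (DataC) whose time width
# grows for successive observation windows, so a fixed memory can span an
# unpredictable time until the next off-load.
from statistics import mean

class ObservationLog:
    def __init__(self, containers_per_window=10, dt0=1.0, factor=5,
                 max_windows=3):
        self.n_dc = containers_per_window  # N_DC per observation window
        self.dt = dt0                      # current container width Dt_n [s]
        self.factor = factor               # storage-frequency reduction
        self.max_windows = max_windows     # stop reducing after this many
        self.window = 1
        self._buf = []                     # samples for the current DataC
        self.containers = []               # stored (width, value) pairs

    def add_sample(self, value, dt_sample=0.1):
        self._buf.append(value)
        if len(self._buf) * dt_sample >= self.dt:
            # One DataC holds an average of the values sampled in Dt_n.
            self.containers.append((self.dt, mean(self._buf)))
            self._buf.clear()
            # After filling a window, reduce the storage frequency
            # (i.e. widen the containers), until max_windows is reached.
            if (len(self.containers) % self.n_dc == 0
                    and self.window < self.max_windows):
                self.window += 1
                self.dt *= self.factor
```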
  • FIG. 7 schematically illustrates a hearing aid system according to the present disclosure, wherein an external processor (PRO) and memory (MEM) are built into a charging station (CHAS) for a hearing aid (HA1) or a pair of hearing aids (HA1, HA2).
  • the charging station (CHAS) can thereby be used to off-load data (LOGD) from a datalogger (cf. DLOG in FIG. 1, 2 ) of the hearing aid(s).
  • the charging station comprises a memory (MEM) for receiving data from the datalogger.
  • the charging station comprises (antenna (ANT) and) transceiver circuitry (WLIF) for establishing a communication link (WL) to the hearing aid(s).
  • the charging station may e.g. comprise one or more sensors for classifying the environment around the charging station, e.g. a microphone or another sensor, e.g. for estimating a level of background noise.
  • the sensor data may be added to the logged data (LOGD) while the hearing aids are located in the charging station (and/or as long as the charging station (CHAS) and the hearing aids (HA1, HA2) are in communication via the communication link (WL), e.g. as long as the distance D between them is smaller than a maximum transmission/reception range of the link (WL)).
  • the charging station may e.g. comprise an absolute clock, allowing absolute time stamps to be added to the logged data when the hearing aids are located in the charging station.
  • the processor (PRO) of the charging station may have a larger processing power than a processor of the wearable device.
  • the processor may be configured to analyze the logged data from the hearing aid(s).
  • the charging station may be located on a support (Support), e.g. a table, in an appropriate place with a view to being accessible to the hearing aids when the user moves around.
  • the charging station may be a pocket-size, portable, device comprising an interface (PSIF), e.g. including a connector, to an electricity network, and/or a local (e.g. rechargeable) battery (BAT) for charging a battery or batteries of the hearing aids (HA1, HA2).
  • the battery of the charging station is assumed to have a significantly larger capacity than a battery of the hearing aid.
  • the charging station may further comprise an interface (DIF) to a data network.
  • the interface is configured to establish a (here wireless) connection to the data network (cf. link WLDL, e.g. WiFi), e.g. to provide access to servers, e.g. a fitting system, on the Internet (cloud computing). Thereby the off-loaded data may be uploaded to the fitting system via the data network (cf. the illustrative sketch below).
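The off-load flow via the charging station could, as a sketch under assumptions (all class, method, and field names are hypothetical; the disclosure only requires the wireless link WL and the data-network interface WLDL), look like this:

```python
# Hypothetical sketch of the FIG. 7 off-load flow: receive logged data
# (LOGD) from a hearing aid, annotate them with an absolute time stamp and
# local sensor data, store them, and forward them when a network is up.
import time
from dataclasses import dataclass, field

@dataclass
class ChargingStation:
    memory: list = field(default_factory=list)  # MEM of the station (CHAS)

    def offload(self, logd, ambient_noise_db, network_up):
        record = dict(logd)
        record["abs_time"] = time.time()        # absolute clock of CHAS
        record["ambient_noise_db"] = ambient_noise_db  # local sensor data
        self.memory.append(record)
        if network_up:                          # WLDL, e.g. WiFi, reachable
            return self.upload(record)
        return False                            # keep data until next try

    def upload(self, record):
        # Placeholder: a real station would push to a fitting system/server.
        return True

# Example: one off-load with the network temporarily unavailable.
station = ChargingStation()
station.offload({"uvc_ratio": 0.4}, ambient_noise_db=35.0, network_up=False)
```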
  • FIG. 8 shows an embodiment of a hearing aid (HA) according to the present disclosure comprising a BTE-part located behind an ear of a user and an ITE-part located in an ear canal of the user, and an auxiliary device (AD), in communication with the hearing aid, comprising a user interface (UI).
  • FIG. 8 illustrates an exemplary hearing aid (HA) formed as a receiver in the ear (RITE) type hearing aid comprising a BTE-part ( BTE ) adapted for being located behind pinna and a part (ITE) comprising an output transducer (OT, e.g. a loudspeaker/receiver) adapted for being located in an ear canal (Ear canal) of the user (e.g. exemplifying a hearing aid (HA) as shown in FIG. 1, 2 ).
  • the BTE-part (BTE) and the ITE-part (ITE) are connected (e.g. electrically connected) via a cable comprising a multitude of conductors.
  • the BTE part (BTE) comprises two input transducers (here microphones) (M BTE1 , M BTE2 ) each for providing an electric input audio signal representative of an input sound signal (S BTE ) from the environment (in the scenario of FIG. 8 , from sound source S, e.g. a communication partner).
  • the hearing aid (HA) of FIG. 8 further comprises two wireless transceivers (WLR 1 , WLR 2 ) for receiving and/or transmitting signals (e.g. comprising audio and/or information, e.g. logged data according to the present disclosure).
  • the hearing aid (HA) further comprises a substrate (SUB) whereon a number of electronic components are mounted, functionally partitioned according to the application in question (analogue, digital, passive components, etc.), but including a configurable digital signal processor (DSP), a front-end chip (FE), and a memory unit (MEM) coupled to each other and to input and output units via electrical conductors Wx.
  • the mentioned functional units may be partitioned in circuits and components according to the application in question (e.g. with a view to size, power consumption, analogue vs digital processing, etc.), e.g. integrated in one or more integrated circuits, or as a combination of one or more integrated circuits and one or more separate electronic components.
  • the configurable signal processor provides an enhanced audio signal (cf. signal OUT in FIG. 1, 2 ), which is intended to be presented to a user.
  • the front-end integrated circuit (FE) is adapted to provide an interface between the configurable signal processor (DSP) and the input and output transducers, etc., and typically comprises interfaces between analogue and digital signals.
  • the input and output transducers may be individual separate components, or integrated (e.g. MEMS-based) with other electronic circuitry.
  • the ITE part comprises an output unit in the form of a loudspeaker (receiver) (SPK) for converting the electric signal (OUT) to an acoustic signal (providing, or contributing to, the acoustic signal SED at the ear drum (Ear drum)).
  • the ITE-part further comprises an input unit comprising an input transducer (e.g. a microphone) (M ITE ) for providing an electric input audio signal representative of an input sound signal S ITE from the environment at or in the ear canal.
  • the hearing aid may comprise only the BTE-microphones (M BTE1 , M BTE2 ).
  • the hearing aid may comprise an input unit located elsewhere than at the ear canal in combination with one or more input units located in the BTE-part and/or the ITE-part.
  • the ITE-part further comprises a guiding element, e.g. a dome, (DO) for guiding and positioning the ITE-part in the ear canal of the user.
  • the hearing aid (HA) exemplified in FIG. 8 is a portable device and further comprises a battery (BAT) for energizing electronic components of the BTE- and ITE-parts.
  • the hearing aid (HA) may be identical to the hearing aid(s) illustrated in FIG. 7 .
  • the hearing aid may comprise a directional microphone system (e.g. a beamformer filter) adapted to enhance a target acoustic source among a multitude of acoustic sources in the local environment of the user wearing the hearing aid.
  • the memory unit may form part of the datalogger and comprise logged data according to the present disclosure.
  • the hearing aid of FIG. 8 may constitute or form part of a binaural hearing aid system according to the present disclosure.
  • the hearing aid (HA) may comprise a user interface UI, e.g. as shown in the bottom part of FIG. 8 implemented in an auxiliary device (AD), e.g. a remote control, e.g. implemented as an APP in a smartphone or other portable (or stationary) electronic device (e.g. a charging station).
  • the screen of the user interface (UI) illustrates a Datalogging APP.
  • the user may configure the datalogger via the APP.
  • the user may e.g. select the data that should be logged, e.g. own-voice data, other-voice data, and internal and external sensor data (termed HA-sensors and External sensors, respectively, in the exemplified screen of FIG. 8); a hypothetical configuration sketch is given at the end of this figure description.
  • Own-voice, Other voice, and HA-sensors have been selected (as indicated by the filled square symbols (■)).
  • the user may further off-load logged data to another device or system, e.g. to a fitting system, a Smartphone or to a Charging station (see e.g. FIG. 7 ).
  • connection to the smartphone is selected (as indicated by the filled square symbol (■)).
  • Unselected options are indicated by open square symbols (□).
  • the auxiliary device (AD) and the hearing aid (HA) are adapted to allow communication of data representative of the currently selected direction (if deviating from a predetermined direction (already stored in the hearing aid)) to the hearing aid via a, e.g. wireless, communication link (cf. dashed arrow WL2 in FIG. 8 ).
  • the communication link WL2 may e.g. be based on far field communication, e.g. Bluetooth or Bluetooth Low Energy (or similar technology), implemented by appropriate antenna and transceiver circuitry in the hearing aid (HA) and the auxiliary device (AD), indicated by transceiver unit WLR 2 in the hearing aid.
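The APP selections could be represented by a simple configuration structure; the following is purely hypothetical, as the disclosure does not define a data format:

```python
# Hypothetical encoding of the Datalogging APP screen of FIG. 8
# (filled squares = selected, open squares = unselected).
datalog_config = {
    "log": {
        "own_voice": True,         # Own-voice: selected
        "other_voice": True,       # Other voice: selected
        "ha_sensors": True,        # HA-sensors: selected
        "external_sensors": False  # External sensors: unselected
    },
    "offload_to": "smartphone",    # alternatives: fitting system, charging station
}
```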
  • the terms “connected” or “coupled” as used herein may include wirelessly connected or coupled.
  • the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Circuit For Audible Band Transducer (AREA)
EP20181325.0A 2020-06-22 2020-06-22 Prothèse auditive comprenant un dispositif de suivi de ses propres conversations vocales Pending EP3930346A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20181325.0A EP3930346A1 (fr) 2020-06-22 2020-06-22 Prothèse auditive comprenant un dispositif de suivi de ses propres conversations vocales

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP20181325.0A EP3930346A1 (fr) 2020-06-22 2020-06-22 Prothèse auditive comprenant un dispositif de suivi de ses propres conversations vocales

Publications (1)

Publication Number Publication Date
EP3930346A1 true EP3930346A1 (fr) 2021-12-29

Family

ID=71120042

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20181325.0A Pending EP3930346A1 (fr) 2020-06-22 2020-06-22 Prothèse auditive comprenant un dispositif de suivi de ses propres conversations vocales

Country Status (1)

Country Link
EP (1) EP3930346A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060222194A1 (en) 2005-03-29 2006-10-05 Oticon A/S Hearing aid for recording data and learning therefrom
US8948428B2 (en) * 2006-09-05 2015-02-03 Gn Resound A/S Hearing aid with histogram based sound environment classification
US20160249144A1 (en) * 2015-02-24 2016-08-25 Sivantos Pte. Ltd. Method for ascertaining wearer-specific use data for a hearing aid, method for adapting hearing aid settings of a hearing aid, hearing aid system and setting unit for a hearing aid system
EP3641345A1 (fr) * 2018-10-16 2020-04-22 Sivantos Pte. Ltd. Procédé de fonctionnement d'un instrument auditif et système auditif comprenant un instrument auditif

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4258689A1 (fr) 2022-04-07 2023-10-11 Oticon A/s Prothèse auditive comprenant une unité de notification adaptative
EP4340395A1 (fr) 2022-09-13 2024-03-20 Oticon A/s Prothèse auditive comprenant une interface de commande vocale
DE102023202367A1 (de) 2023-03-16 2024-09-19 Sivantos Pte. Ltd. Verfahren zum Betrieb eines Hörgerätes, Hörgerät und Computerprogrammprodukt

Similar Documents

Publication Publication Date Title
US10966034B2 (en) Method of operating a hearing device and a hearing device providing speech enhancement based on an algorithm optimized with a speech intelligibility prediction algorithm
US20220201409A1 (en) Hearing aid device for hands free communication
US11510019B2 (en) Hearing aid system for estimating acoustic transfer functions
EP3930346A1 (fr) Prothèse auditive comprenant un dispositif de suivi de ses propres conversations vocales
US20170256269A1 (en) Monaural intrusive speech intelligibility predictor unit, a hearing aid and a binaural hearing aid system
US10631107B2 (en) Hearing device comprising adaptive sound source frequency lowering
US12058493B2 (en) Hearing device comprising an own voice processor
US11863938B2 (en) Hearing aid determining turn-taking
EP4057644A1 (fr) Aide auditive déterminant les interlocuteurs d'intérêt
EP3525489A1 (fr) Procédé de montage d'un dispositif auditif selon les des besoins d'un utilisateur, dispositif de programmation et système d'écoute
US11589173B2 (en) Hearing aid comprising a record and replay function
EP3934278A1 (fr) Prothèse auditive comprenant un traitement binaural et un système d'aide auditive binaurale
EP2876902A1 (fr) Dispositif d'aide auditive réglable
US20240348992A1 (en) Hearing aid with cognitive adaptation and methods thereof
US20220406328A1 (en) Hearing device comprising an adaptive filter bank

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

B565 Issuance of search results under rule 164(2) epc

Effective date: 20201126

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220629

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20240122