EP3930346A1 - A hearing aid comprising an own voice conversation tracker


Info

Publication number
EP3930346A1
Authority
EP
European Patent Office
Prior art keywords
hearing aid
voice
data
user
hearing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20181325.0A
Other languages
German (de)
French (fr)
Inventor
Karsten BONKE
Thomas Mortensen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oticon AS
Original Assignee
Oticon AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oticon AS filed Critical Oticon AS
Priority to EP20181325.0A priority Critical patent/EP3930346A1/en
Publication of EP3930346A1 publication Critical patent/EP3930346A1/en
Pending legal-status Critical Current

Classifications

    • H04R 25/356: hearing aids using translation techniques; amplitude, e.g. amplitude shift or compression
    • H04R 25/55: hearing aids using an external connection, either wireless or wired
    • H04R 2225/0213: behind-the-ear (BTE) hearing aids; constructional details of earhooks, e.g. shape, material
    • H04R 2225/0216: BTE hearing aids having a receiver in the ear mould
    • H04R 2225/31: use of accumulators in hearing aids, e.g. rechargeable batteries or fuel cells
    • H04R 2225/39: automatic logging of sound environment parameters and of the performance of the hearing aid during use, e.g. histogram logging, or of user-selected programs or settings
    • H04R 2225/43: signal processing in hearing aids to enhance speech intelligibility
    • H04R 2225/55: communication between hearing aids and external devices via a network for data exchange
    • H04R 25/30: monitoring or testing of hearing aids, e.g. functioning, settings, battery power

Definitions

  • Hearing care is about restoring the ability to hear.
  • a dominant element of this restoration is to regain the ability to understand speech in various sound environments.
  • the ability to monitor, nudge, and guide the wearer to continuously challenge and improve hearing ability and social interaction is important to regain an active social lifestyle.
  • the proposed solution may track and estimate the activity level of the user and derive a score relative to an objective in a rehabilitation plan.
  • the proposed solution may be used to document treatment outcome for users frequently connected as well as users offline between consultations.
  • the hearing instruments may be configured to log the time a wearer is speaking and to combine it with data concerning the sound environment, such as SNR, or the activity level of other identified speech sources.
  • the result may help a hearing care professional to prescribe specific targets to aid in the rehabilitation for a specific user (e.g. active participation in conversations, its frequency and/or duration).
  • the tracking of own voice activity and other talker activity can be combined with a ratio of pauses. From this ratio it can be derived if the user is engaging in a conversation rather than passively listening to for example the TV.
  • the ratio (or equivalent data) for a given time window may be stored in the instrument for periodic retrieval.
  • 'a window of variable time' is taken to mean an 'observation window' that may vary in time.
  • the data that are observed in the observation window may e.g. include voice activity detection data (e.g. own voice and other voice activity or no-activity).
  • An indication of the voice activity in the environment of a user may e.g. be provided by a ratio of the sum of time periods with voice activity and the total time period of the observation window (the total being the sum of time periods with voice activity and the sum of speech pauses).
  • Further sub-indicators may be provided by a) the ratio of the sum of time periods with own voice activity to the total time period of the observation window, and b) the ratio of the sum of time periods with other voice activity to the total time period of the observation window.
  • the pauses may be further classified as 'pauses before own voice' and 'pauses before other voice', which may additionally be used to evaluate the user's conversation participation pattern; a computational sketch of these ratios is given below.
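  • As an illustration of how such ratios could be computed, the following minimal Python sketch derives the overall, own-voice and other-voice activity ratios from labelled segments. The segment layout, function name and example data are invented for the example, not taken from the disclosure:

```python
# Minimal sketch (not from the patent text) of the activity ratios described
# above. Segment labels, the function name and the example data are invented.
def activity_ratios(segments, window_duration_s):
    """segments: list of (label, start_s, end_s), label in {'own', 'other', 'pause'};
    window_duration_s: total duration of the observation window in seconds."""
    totals = {'own': 0.0, 'other': 0.0, 'pause': 0.0}
    for label, start, end in segments:
        totals[label] += end - start
    return {
        'voice_ratio': (totals['own'] + totals['other']) / window_duration_s,
        'own_voice_ratio': totals['own'] / window_duration_s,
        'other_voice_ratio': totals['other'] / window_duration_s,
    }

example = [('own', 0, 4), ('pause', 4, 5), ('other', 5, 9), ('pause', 9, 12)]
print(activity_ratios(example, window_duration_s=12.0))
# voice_ratio ~ 0.67, own_voice_ratio ~ 0.33, other_voice_ratio ~ 0.33
```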
  • the ratio for one or more time windows may be transferred to a different apparatus capable of further processing the data and/or presenting the data in a user interface.
  • the apparatus or processing device may be constituted by or comprise a fitting system, a smartphone, or a remote control device for the hearing aid, etc.
  • US2006222194A1 deals with a hearing aid comprising a datalogger and with the learning from these data.
  • the hearing aid comprises an input unit, a signal processing unit, and a user interface for converting user interaction to a control signal thereby controlling a processing setting of the signal processing unit.
  • the hearing aid further comprises a memory unit comprising a control section storing a set of control parameters associated with the acoustic environment, and a datalogger section receiving data from the input unit, the signal processing unit, and the user interface.
  • the signal processing unit configures the setting according to the set of control parameters and comprises a learning controller adapted to adjust the set of control parameters according to the data in the data logging section.
  • the proposed solution enriches the result with the social activity and active participation of the wearer.
  • a hearing aid:
  • a hearing aid configured to be worn at or in an ear of a user.
  • the hearing aid comprises an input unit providing at least one electric input signal representing sound, an own voice detector providing an own voice control signal, and a datalogger for logging over time data related to the use of the hearing aid, including absolute or relative time periods of own voice activity in dependence of the own voice control signal.
  • the hearing aid may be configured to further log data concerning a sound environment, at least during said time periods of own voice activity.
  • the data concerning a sound environment may be logged with the same (or lower) frequency as the own voice activity is logged.
  • the hearing aid may comprise one or more detectors of the acoustic environment.
  • the hearing aid may be configured to receive data from one or more detectors of the acoustic environment located in other devices or systems, e.g. an external device, such as a smartphone, or a charging station or other auxiliary device in communication with the hearing aid.
  • the data concerning a sound environment may include a measure of sound quality, e.g. a signal to noise ratio (SNR).
  • the hearing aid may comprise a detector for providing a measure of sound quality of the electric input signal or a signal originating therefrom.
  • the hearing aid may comprise a detector for estimating an SNR of the at least one electric input signal, or a processed version thereof.
  • the hearing aid may comprise a level detector for estimating a current level of the at least one electric input signal or of a signal derived therefrom.
  • the data concerning a sound environment may include an activity level of other (identified) speech sources.
  • 'An activity level' of a sound source (an external or the user) may e.g. be a duration of activity in an absolute or relative time scale (e.g. in seconds or in a number of time units (absolute or arbitrary) relative to the number of time units of a total period of observation).
  • 'An activity level' may e.g. include a number of distinct events of activity (e.g. separated by a certain minimum time period) and a total duration of activity in an absolute or relative scale of the sound source in question.
  • the data may comprise a requested gain from a compressive amplification algorithm of the hearing aid.
  • the compressive amplification algorithm may be configured to compensate for the user's hearing impairment.
  • the compressive amplification algorithm may be configured to provide frequency and level dependent gain (amplification or attenuation) to the at least one electric input signal or to a signal derived therefrom.
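  • For illustration only, a frequency- and level-dependent compressive gain rule of the kind referred to above can be sketched as follows; the knee point, compression ratio and per-band base gains are invented example values, not a fitting rationale prescribed by the disclosure:

```python
# Hedged sketch of a frequency- and level-dependent compressive gain rule.
# Knee point, compression ratio and per-band base gains are invented.
def requested_gain_db(input_level_db, base_gain_db, knee_db=50.0, ratio=2.0):
    """Linear amplification below the knee point; above it the gain is reduced
    so output level grows by only 1/ratio dB per input dB (simple WDRC)."""
    if input_level_db <= knee_db:
        return base_gain_db
    return base_gain_db - (input_level_db - knee_db) * (1.0 - 1.0 / ratio)

base_gains_db = {250: 10.0, 1000: 20.0, 4000: 30.0}   # per-band gains (invented)
print({f_hz: requested_gain_db(65.0, g) for f_hz, g in base_gains_db.items()})
# at a 65 dB input level: {250: 2.5, 1000: 12.5, 4000: 22.5}
```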
  • the hearing aid may comprise a voice activity detector configured to detect whether or not, or with what probability, said at least one electric input signal, or a processed version thereof, comprises a human voice and to provide a voice control signal indicative thereof.
  • the voice activity detector may be configured to detect speech.
  • the voice activity detector may be configured to differentiate between the voice of the user wearing the hearing aid and other voices (e.g. using a level differentiation, and/or a trained algorithm, e.g. a neural network), in which case the voice activity detector may include the own voice detector.
  • the hearing aid may be configured to further log absolute or relative time periods of NO own voice activity.
  • the tracking of own voice activity can e.g. be combined with the logging of speech pauses, and/or the logging of total (absolute or relative) time elapsed in the observation window in question.
  • a ratio of time periods of own voice activity to speech pauses may be logged.
  • a ratio of time periods of voice activity to speech pauses may be logged.
  • a ratio of time periods of other voice activity (than own voice) to speech pauses may be logged. From this ratio it can be derived if the user is engaging in a conversation rather than passively listening to for example the TV.
  • the ratio for a given (observation) time window can be stored in the instrument for periodic retrieval.
  • the ratio for one or more windows can be transferred to a different apparatus (e.g. a smartphone, a similar processing device, or a fitting system) which is capable of further processing the data and/or present the data in a user interface.
  • the datalogger may be configured to log data in successive observation windows of variable time, e.g. of increasing length over time, but with a constant or decreasing number of logged data values of successive observation windows.
  • data can be logged over an extended time even with a limited storage capacity of a memory of the datalogger of the hearing aid, see e.g. FIG. 6 .
  • Data stored by the datalogger may e.g. be off-loaded during charging of a rechargeable battery of the hearing aid in a charging station, see e.g. FIG. 7.
  • the hearing aid may comprise a communication interface allowing data to be exchanged with another device or system.
  • the communication interface may be based on a cabled connection, e.g. comprising appropriate connectors, allowing easy connection (and disconnection) of the hearing aid to/from the 'another device or system'.
  • the communication interface may be based on a wireless connection to the 'another device or system', e.g. via a network.
  • the hearing aid may comprise an output unit, and wherein the output unit comprises a number of electrodes of a cochlear implant type hearing aid or a vibrator of a bone conducting hearing aid, or a loudspeaker of an air conduction hearing aid, or a combination thereof.
  • the hearing aid may be adapted to provide a frequency dependent gain and/or a level dependent compression and/or a transposition (with or without frequency compression) of one or more frequency ranges to one or more other frequency ranges, e.g. to compensate for a hearing impairment of a user.
  • the hearing aid may comprise a signal processor for applying one or more processing algorithms to enhance the input signals and provide a processed output signal.
  • the hearing aid may comprise an output unit for providing a stimulus perceived by the user as an acoustic signal based on a processed electric signal.
  • the output unit may comprise a number of electrodes of a cochlear implant (for a CI type hearing aid) or a vibrator of a bone conducting hearing aid.
  • the output unit may comprise an output transducer.
  • the output transducer may comprise a receiver (loudspeaker) for providing the stimulus as an acoustic signal to the user (e.g. in an acoustic (air conduction based) hearing aid).
  • the output transducer may comprise a vibrator for providing the stimulus as mechanical vibration of a skull bone to the user (e.g. in a bone-attached or bone-anchored hearing aid).
  • the hearing aid may comprise an input unit for providing an electric input signal representing sound.
  • the input unit may comprise an input transducer, e.g. a microphone, for converting an input sound to an electric input signal.
  • the input unit may comprise a wireless receiver for receiving a wireless signal comprising or representing sound and for providing an electric input signal representing said sound.
  • the wireless receiver may e.g. be configured to receive an electromagnetic signal in the radio frequency range (3 kHz to 300 GHz).
  • the wireless receiver may e.g. be configured to receive an electromagnetic signal in a frequency range of light (e.g. infrared light 300 GHz to 430 THz, or visible light, e.g. 430 THz to 770 THz).
  • the hearing aid may comprise a directional system adapted to spatially filter sounds from the environment, and thereby enhance a target acoustic source among a multitude of acoustic sources in the local environment of the user wearing the hearing aid.
  • the directional system may be adapted to detect (such as adaptively detect) from which direction a particular part of the input signal originates. This can be achieved in various ways, as e.g. described in the prior art.
  • a microphone array beamformer is often used for spatially attenuating background noise sources. Many beamformer variants can be found in literature.
  • the minimum variance distortionless response (MVDR) beamformer is widely used in microphone array signal processing.
  • the MVDR beamformer keeps the signals from the target direction (also referred to as the look direction) unchanged, while attenuating sound signals from other directions maximally.
  • the generalized sidelobe canceller (GSC) structure is an equivalent representation of the MVDR beamformer offering computational and numerical advantages over a direct implementation in its original form.
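  • The MVDR beamformer mentioned above has the closed form w = R^{-1} d / (d^H R^{-1} d), where R is the noise covariance matrix and d is the look-direction steering vector. A minimal numpy sketch, with an invented two-microphone example, follows; it is an illustration of the standard formula, not an implementation from the disclosure:

```python
# Minimal numpy sketch of MVDR weights w = R^{-1} d / (d^H R^{-1} d):
# the look direction is passed undistorted while output noise power is
# minimized. The two-microphone toy data are invented.
import numpy as np

def mvdr_weights(noise_cov, steering):
    """noise_cov: (M, M) noise covariance matrix R; steering: (M,) vector d."""
    rinv_d = np.linalg.solve(noise_cov, steering)
    return rinv_d / (steering.conj() @ rinv_d)

R = np.eye(2, dtype=complex)                  # uncorrelated noise (toy case)
d = np.array([1.0, np.exp(-1j * 0.3)])        # illustrative inter-mic delay
w = mvdr_weights(R, d)
print(abs(w.conj() @ d))                      # ~1.0: distortionless response
```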
  • Wireless communication may be in the base band (audio frequency range, e.g. between 0 and 20 kHz).
  • communication between the hearing aid and the other device is based on some sort of modulation at frequencies above 100 kHz.
  • the frequencies used to establish a communication link between the hearing aid and the other device are typically below 70 GHz, e.g. located in a range from 50 MHz to 70 GHz, e.g. above 300 MHz, e.g. in an ISM range above 300 MHz.
  • the wireless link may be based on a standardized or proprietary technology.
  • the wireless link may be based on Bluetooth technology (e.g. Bluetooth Low-Energy technology).
  • the hearing aid may be or form part of a portable (i.e. configured to be wearable) device, e.g. a device comprising a local energy source, e.g. a battery, e.g. a rechargeable battery.
  • the hearing aid may e.g. be a low weight, easily wearable, device, e.g. having a total weight less than 100 g, such as less than 20 g.
  • the hearing aid may comprise a forward or signal path between an input unit (e.g. an input transducer, such as a microphone or a microphone system and/or direct electric input (e.g. a wireless receiver)) and an output unit, e.g. an output transducer.
  • the signal processor may be located in the forward path.
  • the signal processor may be adapted to provide a frequency dependent gain according to a user's particular needs.
  • the hearing aid may comprise an analysis path comprising functional components for analyzing the input signal (e.g. determining a level, a modulation, a type of signal, an acoustic feedback estimate, etc.). Some or all signal processing of the analysis path and/or the signal path may be conducted in the frequency domain. Some or all signal processing of the analysis path and/or the signal path may be conducted in the time domain.
  • the hearing aid may be configured to operate in different modes, e.g. a normal mode and one or more specific modes, e.g. selectable by a user, or automatically selectable.
  • a mode of operation may be optimized to a specific acoustic situation or environment.
  • a mode of operation may include a low-power mode, where functionality of the hearing aid is reduced (e.g. to save power), e.g. to disable wireless communication, and/or to disable specific features of the hearing aid.
  • the hearing aid may comprise a number of detectors configured to provide status signals relating to a current physical environment of the hearing aid (e.g. the current acoustic environment), and/or to a current state of the user wearing the hearing aid, and/or to a current state or mode of operation of the hearing aid.
  • one or more detectors may form part of an external device in communication (e.g. wirelessly) with the hearing aid.
  • An external device may e.g. comprise another hearing aid, a remote control, an audio delivery device, a telephone (e.g. a smartphone), an external sensor, etc.
  • One or more of the number of detectors may operate on the full band signal (time domain).
  • One or more of the number of detectors may operate on band split signals ((time-) frequency domain), e.g. in a limited number of frequency bands.
  • the number of detectors may comprise a level detector for estimating a current level of a signal of the forward path.
  • the detector may be configured to decide whether the current level of a signal of the forward path is above or below a given (L-)threshold value.
  • the level detector operates on the full band signal (time domain).
  • the level detector operates on band split signals ((time-) frequency domain).
  • the hearing aid may comprise a voice activity detector (VAD) for estimating whether or not (or with what probability) an input signal comprises a voice signal (at a given point in time).
  • a voice signal may in the present context be taken to include a speech signal from a human being. It may also include other forms of utterances generated by the human speech system (e.g. singing).
  • the voice activity detector unit may be adapted to classify a current acoustic environment of the user as a VOICE or NO-VOICE environment. This has the advantage that time segments of the electric microphone signal comprising human utterances (e.g. speech) in the user's environment can be identified, and thus separated from time segments only (or mainly) comprising other sound sources (e.g. artificially generated noise).
  • the voice activity detector may be adapted to detect as a VOICE also the user's own voice. Alternatively, the voice activity detector may be adapted to exclude a user's own voice from the detection of a VOICE.
  • the hearing aid may comprise an own voice detector for estimating whether or not (or with what probability) a given input sound (e.g. a voice, e.g. speech) originates from the voice of the user of the system.
  • a microphone system of the hearing aid may be adapted to be able to differentiate between a user's own voice and another person's voice and possibly from NON-voice sounds.
  • the number of detectors may comprise a movement detector, e.g. an acceleration sensor.
  • the movement detector may be configured to detect movement of the user's facial muscles and/or bones, e.g. due to speech or chewing (e.g. jaw movement) and to provide a detector signal indicative thereof.
  • the hearing aid may comprise a classification unit configured to classify the current situation based on input signals from (at least some of) the detectors, and possibly other inputs as well.
  • a 'current situation' may be taken to be defined by one or more of the current physical environment (e.g. the current acoustic environment), the current state of the user, and the current state or mode of operation of the hearing aid.
  • the hearing aid may comprise a multi-level data storage system to distil conversation history across current, recent, and past conversations.
  • the data storage scheme provides that more data are stored for the current conversation and made available to other classifiers, while less data are stored for conversations that are no longer active.
  • data are aggregated and stored in memory bins representing shorter or longer time intervals.
  • the data may be represented by a single numeric counter or a ratio value, in place of the time-domain classifier result that is logged in an active conversation.
  • the hearing aid may be designed such that the available data storage and the availability of means for data transport to other apparatus determine the degree of data aggregation. This dynamic aggregation may allow the hearing aid to store conversation tracking data for an arbitrary time period without sacrificing the detailed time-domain data for a specific number of conversation minutes, see e.g. FIG. 6 and the sketch below.
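  • A minimal sketch of such dynamic aggregation follows, assuming invented level capacities and a factor-of-four coarsening (the patent does not prescribe these values): recent samples are kept at full resolution, and when a level overflows, its oldest entries are averaged into one value pushed to the next, coarser level.

```python
# Hedged sketch of multi-level storage with progressive aggregation.
# Capacities and the factor-of-4 coarsening are invented for illustration.
class MultiLevelLog:
    def __init__(self, capacities=(60, 24, 12), group=4):
        self.levels = [[] for _ in capacities]
        self.capacities = capacities
        self.group = group                      # samples merged per step

    def push(self, value, level=0):
        self.levels[level].append(value)
        if len(self.levels[level]) > self.capacities[level]:
            oldest = self.levels[level][:self.group]
            del self.levels[level][:self.group]
            if level + 1 < len(self.levels):
                # older data survive only as an aggregate (here: the mean)
                self.push(sum(oldest) / len(oldest), level + 1)

log = MultiLevelLog()
for own_voice_flag in [1, 0, 1, 1] * 50:        # 200 one-second samples
    log.push(own_voice_flag)
print([len(lvl) for lvl in log.levels])         # recent fine, older coarsened
```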
  • the classification unit may be based on or comprise a neural network, e.g. a trained neural network.
  • the hearing aid may further comprise other relevant functionality for the application in question, e.g. compression, noise reduction, feedback control, etc.
  • the hearing aid may comprise a hearing instrument, e.g. a hearing instrument adapted for being located at the ear or fully or partially in the ear canal of a user, e.g. a headset, an earphone, an ear protection device or a combination thereof.
  • Use may be provided in a system comprising audio distribution.
  • Use may be provided in a system comprising one or more hearing aids (e.g. hearing instruments), headsets, ear phones, active ear protection systems, etc.
  • a method of operating a hearing aid:
  • a method of operating a hearing aid configured to be worn at or in an ear of a user is provided.
  • the method may comprise providing at least one electric input signal representing sound, detecting whether or not, or with what probability, the electric input signal comprises the user's own voice, and logging over time absolute or relative time periods of own voice activity.
  • the method may apply progressive abstraction of data over a period of time.
  • the hearing aid preserves detailed conversation data logging for a number of minutes, after which the data are aggregated into more abstract usable counters and ratios. This allows the hearing aid to track a high degree of data resolution when the user is connected to a connected apparatus, and still allows the hearing aid to maintain relevant data between visits to a clinic if the user is offline the entire time, see e.g. FIG. 6.
  • a method of extracting information about a hearing aid user's conversations:
  • a method of extracting information about a hearing aid user's social engagement in conversations is provided.
  • the method comprises logging own voice activity and other voice activity over time, and identifying conversations in the combined data.
  • the user's engagement in conversations may e.g. be estimated by identifying a conversation in the combined data from an own voice detector and a general voice detector (or a dedicated 'not-own voice' detector).
  • a conversation is detected if the user's voice and another voice are detected alternating, one voice following the other without longer speech pauses between them (i.e. pauses no larger than a threshold value Δt_PAUSE, e.g. 5-10 seconds), see e.g. FIG. 3, 4 and the sketch below.
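  • The following sketch illustrates this detection rule on labelled voice segments; the data layout is invented, and the chosen pause threshold is merely one value within the 5-10 s range mentioned above:

```python
# Illustrative sketch (data layout invented) of the detection rule above:
# flag a conversation when own-voice and other-voice segments alternate with
# gaps no longer than DT_PAUSE seconds.
DT_PAUSE = 7.5   # within the 5-10 s range suggested above

def is_conversation(segments):
    """segments: time-ordered list of (speaker, start_s, end_s) with
    speaker in {'own', 'other'}."""
    for (spk_a, _, end_a), (spk_b, start_b, _) in zip(segments, segments[1:]):
        if spk_a != spk_b and (start_b - end_a) <= DT_PAUSE:
            return True
    return False

print(is_conversation([('own', 0, 6), ('other', 8, 15)]))     # True
print(is_conversation([('other', 0, 60), ('other', 61, 99)])) # False: one-way
```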
  • a computer readable medium or data carrier:
  • a tangible computer-readable medium storing a computer program comprising program code means (instructions) for causing a data processing system (a computer) to perform (carry out) at least some (such as a majority or all) of the (steps of the) method described above, in the 'detailed description of embodiments' and in the claims, when said computer program is executed on the data processing system is furthermore provided by the present application.
  • Such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
  • Other storage media include storage in DNA (e.g. in synthesized DNA strands). Combinations of the above should also be included within the scope of computer-readable media.
  • the computer program can also be transmitted via a transmission medium such as a wired or wireless link or a network, e.g. the Internet, and loaded into a data processing system for being executed at a location different from that of the tangible medium.
  • a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out (steps of) the method described above, in the 'detailed description of embodiments' and in the claims is furthermore provided by the present application.
  • a data processing system:
  • a data processing system comprising a processor and program code means for causing the processor to perform at least some (such as a majority or all) of the steps of the method described above, in the 'detailed description of embodiments' and in the claims is furthermore provided by the present application.
  • a hearing system:
  • a hearing system comprising a hearing aid as described above, in the 'detailed description of embodiments', and in the claims, and an auxiliary device is moreover provided.
  • the hearing system may be adapted to establish a communication link between the hearing aid and the auxiliary device to provide that information (e.g. control and status signals, possibly audio signals) can be exchanged or forwarded from one to the other.
  • information e.g. control and status signals, possibly audio signals
  • the auxiliary device may comprise a remote control, a smartphone, or other portable or wearable electronic device, such as a smartwatch or the like.
  • the auxiliary device may be constituted by or comprise a programming device (e.g. running fitting software for adapting the processing of the hearing aid to the needs, e.g. a hearing impairment, of the user of the hearing aid).
  • the auxiliary device may be constituted by or comprise a remote control for controlling functionality and operation of the hearing aid(s).
  • the function of a remote control may be implemented in a smartphone, the smartphone possibly running an APP allowing the user to control the functionality of the audio processing device via the smartphone (the hearing aid(s) comprising an appropriate wireless interface to the smartphone, e.g. based on Bluetooth or some other standardized or proprietary scheme).
  • the auxiliary device may be constituted by or comprise another hearing aid.
  • the hearing system may comprise two hearing aids adapted to implement a binaural hearing system, e.g. a binaural hearing aid system.
  • the auxiliary device may e.g. be or comprise a programming device, e.g. implementing a fitting system of the hearing aid.
  • the auxiliary device may comprise a charging station comprising a memory (e.g. acting as an intermediate storage medium, e.g. of 'day-to-day data' from the datalogger, cf. e.g. FIG. 7 ).
  • the auxiliary device may comprise a communication interface allowing the (wired or wireless) communication link to the hearing aid to be established.
  • the communication interface(s) may comprise appropriate antenna and transceiver circuitry to implement a wireless link, e.g. based on Bluetooth or similar technology.
  • the auxiliary device may comprise a communication interface allowing a connection to a server on a network, e.g. the Internet ('in the cloud').
  • data from the datalogger received from the hearing aid may be relayed from the auxiliary device (e.g. a cellphone or a charging station for the hearing aid) to a server accessible for analysis of the data, e.g. by a fitting system for the hearing aid.
  • the hearing system may be configured to download data from said datalogger to said auxiliary device.
  • the auxiliary device may comprise a memory for storing data from the datalogger of the hearing aid.
  • the auxiliary device may comprise an analyzing unit for analyzing data stored in the datalogger or the hearing aid and/or stored in the memory of the auxiliary device originating from the datalogger of the hearing aid.
  • Data in the memory may originate from different time periods, e.g. time periods that together span more than one week, such as more than one month, such as more than 6 months.
  • the auxiliary device may be configured to extract changes over time of said data originating from the datalogger of the hearing aid. The changes over time may relate to the user's vocal activity, e.g. in connection with other persons' vocal activity (e.g. related to conversations vs. one-way speech or passive listening).
  • the auxiliary device may comprise a user interface, allowing a user to interact with the auxiliary device, e.g. via a touch-sensitive display and/or a keyboard.
  • the user interface may be configured to allow a user to display results of an analysis of the data from the datalogger.
  • the user interface may e.g. allow a user (e.g. a hearing care professional) to access data from the datalogger originating from previous observation periods, thereby allowing a development or trend in user behavior to be extracted from the data, as sketched below.
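  • As an illustration of such trend extraction, the sketch below fits a least-squares slope to an invented series of weekly conversation ratios; the data values and variable names are invented, not taken from the disclosure:

```python
# Illustrative sketch of trend extraction from logged observation periods:
# a least-squares slope over an invented series of weekly conversation ratios.
ratios = [0.08, 0.10, 0.14, 0.17]       # fraction of time in conversation/week
n = len(ratios)
mean_x = (n - 1) / 2.0
mean_y = sum(ratios) / n
slope = sum((i - mean_x) * (r - mean_y) for i, r in enumerate(ratios)) \
        / sum((i - mean_x) ** 2 for i in range(n))
print(f"trend: {slope:+.3f} per week")  # positive slope: rising participation
```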
  • a non-transitory application termed an APP:
  • the APP comprises executable instructions configured to be executed on an auxiliary device to implement a user interface for a hearing aid or a hearing system described above in the 'detailed description of embodiments', and in the claims.
  • the APP may be configured to run on a cellular phone, e.g. a smartphone, or on another portable (or stationary) electronic device allowing communication with said hearing aid or said hearing system (e.g. a charging station).
  • the APP may implement a Datalogging APP , from which a user may configure the datalogger.
  • the user may e.g. select the data that should be logged, e.g. Own-voice data, Other voice data, internal and external sensor data (e.g. sensors or detectors related to an acoustic environment, and/or to a state of the user, e.g. a mental state).
  • the sensors or detectors that may be selected for logging together with the voice activity data may include a movement sensor, a sound quality detector, a detector of body signals, e.g. brainwaves (e.g. EEG), a PPG sensor, etc.
  • the APP may further allow the user to off-load logged data to another device or system, e.g. a smartphone or a charging station.
  • the APP may further allow the user to select a strategy or scheme for off-loading logged data to another device or system (e.g. among a number of predefined schemes).
  • a hearing aid e.g. a hearing instrument
  • a hearing aid refers to a device, which is adapted to improve, augment and/or protect the hearing capability of a user by receiving acoustic signals from the user's surroundings, generating corresponding audio signals, possibly modifying the audio signals and providing the possibly modified audio signals as audible signals to at least one of the user's ears.
  • Such audible signals may e.g. be provided in the form of acoustic signals radiated into the user's outer ears, acoustic signals transferred as mechanical vibrations to the user's inner ears through the bone structure of the user's head and/or through parts of the middle ear as well as electric signals transferred directly or indirectly to the cochlear nerve of the user.
  • the hearing aid may be configured to be worn in any known way, e.g. as a unit arranged behind the ear with a tube leading radiated acoustic signals into the ear canal or with an output transducer, e.g. a loudspeaker, arranged close to or in the ear canal, as a unit entirely or partly arranged in the pinna and/or in the ear canal, as a unit, e.g. a vibrator, attached to a fixture implanted into the skull bone, as an attachable, or entirely or partly implanted, unit, etc.
  • the hearing aid may comprise a single unit or several units communicating (e.g. acoustically, electrically or optically) with each other.
  • the loudspeaker may be arranged in a housing together with other components of the hearing aid, or it may be an external unit in itself (possibly in combination with a flexible guiding element, e.g. a dome-like element).
  • a hearing aid comprises an input transducer for receiving an acoustic signal from a user's surroundings and providing a corresponding input audio signal and/or a receiver for electronically (i.e. wired or wirelessly) receiving an input audio signal, a (typically configurable) signal processing circuit (e.g. a signal processor, e.g. comprising a configurable (programmable) processor, e.g. a digital signal processor) for processing the input audio signal and an output unit for providing an audible signal to the user in dependence on the processed audio signal.
  • the signal processor may be adapted to process the input signal in the time domain or in a number of frequency bands.
  • an amplifier and/or compressor may constitute the signal processing circuit.
  • the signal processing circuit typically comprises one or more (integrated or separate) memory elements for executing programs and/or for storing parameters used (or potentially used) in the processing and/or for storing information relevant for the function of the hearing aid and/or for storing information (e.g. processed information, e.g. provided by the signal processing circuit), e.g. for use in connection with an interface to a user and/or an interface to a programming device.
  • the output unit may comprise an output transducer, such as e.g. a loudspeaker for providing an air-borne acoustic signal or a vibrator for providing a structure-borne or liquid-borne acoustic signal.
  • the output unit may comprise one or more output electrodes for providing electric signals (e.g. to a multi-electrode array) for electrically stimulating the cochlear nerve (cochlear implant type hearing aid).
  • the vibrator may be adapted to provide a structure-borne acoustic signal transcutaneously or percutaneously to the skull bone.
  • the vibrator may be implanted in the middle ear and/or in the inner ear.
  • the vibrator may be adapted to provide a structure-borne acoustic signal to a middle-ear bone and/or to the cochlea.
  • the vibrator may be adapted to provide a liquid-borne acoustic signal to the cochlear liquid, e.g. through the oval window.
  • the output electrodes may be implanted in the cochlea or on the inside of the skull bone and may be adapted to provide the electric signals to the hair cells of the cochlea, to one or more hearing nerves, to the auditory brainstem, to the auditory midbrain, to the auditory cortex and/or to other parts of the cerebral cortex.
  • a hearing aid may be adapted to a particular user's needs, e.g. a hearing impairment.
  • a configurable signal processing circuit of the hearing aid may be adapted to apply a frequency and level dependent compressive amplification of an input signal.
  • a customized frequency and level dependent gain (amplification or compression) may be determined in a fitting process by a fitting system based on a user's hearing data, e.g. an audiogram, using a fitting rationale (e.g. adapted to speech).
  • the frequency and level dependent gain may e.g. be embodied in processing parameters, e.g. uploaded to the hearing aid via an interface to a programming device (fitting system), and used by a processing algorithm executed by the configurable signal processing circuit of the hearing aid.
  • a 'hearing system' refers to a system comprising one or two hearing aids.
  • a 'binaural hearing system' refers to a system comprising two hearing aids and being adapted to cooperatively provide audible signals to both of the user's ears.
  • Hearing systems or binaural hearing systems may further comprise one or more 'auxiliary devices', which communicate with the hearing aid(s) and affect and/or benefit from the function of the hearing aid(s).
  • Such auxiliary devices may include at least one of a remote control, a remote microphone, an audio gateway device, an entertainment device, e.g. a music player, a wireless communication device, e.g. a mobile phone (such as a smartphone), a tablet, or another device.
  • Hearing aids, hearing systems or binaural hearing systems may e.g. be used for compensating for a hearing-impaired person's loss of hearing capability, augmenting or protecting a normal-hearing person's hearing capability and/or conveying electronic audio signals to a person.
  • Hearing aids or hearing systems may e.g. form part of or interact with public-address systems, active ear protection systems, handsfree telephone systems, car audio systems, entertainment (e.g. TV, music playing or karaoke) systems, teleconferencing systems, classroom amplification systems, etc.
  • Embodiments of the disclosure may e.g. be useful in applications such as hearing aids for compensation for a user's hearing impairment.
  • the electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated circuits (e.g. application specific), microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB) (e.g. flexible PCBs), and other suitable hardware configured to perform the various functionality described throughout this disclosure, e.g. sensors, e.g. for sensing and/or registering physical properties of the environment, the device, the user, etc.
  • Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • the present application relates to the field of hearing aids, in particular to data logging.
  • FIG. 1 shows a first embodiment of a hearing aid comprising a datalogger according to the present disclosure.
  • FIG. 1 schematically illustrates a hearing aid (HA) configured to be worn at or in an ear of a user (or for being partially or fully implanted in the head at an ear of the user).
  • the hearing aid (HA) comprises an input unit (IU).
  • the input unit may e.g. comprise one or more input transducers, e.g. one or more microphones, configured to pick up sound (Acoustic input) from the environment of the hearing aid and to provide at least one electric input signal (IN) representing the sound.
  • the input unit (IU) may comprise an analogue to digital converter for converting an analogue signal (e.g. an analogue microphone signal) to a digitized signal.
  • the input unit (IU) may further comprise an analysis filter bank for converting a (e.g. digitized) time domain signal to a time-frequency domain signal (e.g. represented as a multitude of frequency sub-band signals, each representing a frequency sub-range of the frequency range of operation of the hearing aid).
  • the hearing aid (HA) further comprises an own voice detector (OVD) configured to detect whether or not, or with what probability, the at least one electric input signal (IN), or a processed version thereof, comprises a voice from the user of the hearing aid, and to provide a user voice control signal (UVC) indicative thereof.
  • the hearing aid (HA) further comprises a datalogger (DLOG) for logging over time data related to the use of the hearing aid, including absolute or relative time periods of own voice activity in dependence of the own voice control signal.
  • the hearing aid may be configured to log parameters of the current acoustic environment, including the own voice control signal, over time according to a predefined or adaptively determined scheme.
  • the hearing aid may be configured to log parameters of the current acoustic environment with a specific log frequency, e.g. with a frequency larger than 0.1 Hz.
  • the logged data may be (temporarily) stored in a memory of the datalogger.
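  • A minimal sketch of such periodic logging follows; the 1 Hz log period (above the 0.1 Hz minimum mentioned above), the names and the simulated detector values are invented for illustration:

```python
# Hedged sketch of periodic logging of the own-voice flag and a level value.
# Names and simulated detector outputs are invented.
import random

LOG_PERIOD_S = 1.0
log_memory = []                                 # stand-in for datalogger memory

def log_once(t_s, own_voice_flag, level_db):
    log_memory.append({'t_s': t_s, 'own_voice': own_voice_flag,
                       'level_db': level_db})

for t in range(10):                             # stand-in for a 1 Hz timer loop
    log_once(t * LOG_PERIOD_S,
             own_voice_flag=random.random() > 0.5,   # simulated OVD output
             level_db=60 + 10 * random.random())     # simulated level detector
print(len(log_memory), "entries logged")
```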
  • the hearing aid (HA) further comprises a processor (PRO) for applying one or more processing algorithms to the at least one electric input signal (IN).
  • the one or more processing algorithms may include one or more of a compressive amplification algorithm configured to compensate for a hearing impairment of the user, a noise reduction algorithm, a feedback control algorithm, a directional beamforming algorithm, etc.
  • the processor (PRO) provides a processed signal (OUT) representing sound (e.g. the sound picked up by the input unit (IU), and/or sound received from another device), which is fed to an output unit (OU).
  • the output unit is configured to provide stimuli perceivable as sound to the user based on the processed signal (OUT).
  • the output unit (OU) may comprise an output transducer, e.g. a loudspeaker for providing air-conducted sound, or a vibrator for providing bone-conducted sound.
  • the output unit (OU) may comprise a multi-electrode array for directly stimulating the cochlear nerve of an ear of the user.
  • the output unit may further comprise a synthesis filter bank in case the processed output signal (OUT) comprises a multitude of frequency sub-band signals (a time-frequency domain signal), and/or a digital to analogue converter for converting a digitized signal to an analogue signal, according to the specific application.
  • the signal path from the input unit (IU) to the output unit (OU) via the processor (PRO) defines a forward path of the hearing aid (for processing the input sound to an output signal perceivable as sound to the user).
  • the hearing aid further comprises a communication interface (IF), e.g. a wired or wireless interface, allowing data to be exchanged with another device or system.
  • the communication interface may be based on near-field (e.g. inductive) communication or on far-field communication (e.g. based on Bluetooth or similar technologies).
  • FIG. 2 shows a second embodiment of a hearing aid comprising a datalogger according to the present disclosure.
  • the embodiment of a hearing aid (HA) in FIG. 2 comprises the same elements as the embodiment described in connection with FIG. 1 .
  • the embodiment of FIG. 2 comprises further detectors (DET) including e.g. an SNR estimator and/or a level estimator to monitor the acoustic environment.
  • the embodiment of FIG. 2 comprises separate own voice (OVD) and voice activity detectors (VAD) providing respective indicators OVC and VAC regarding the presence of the user's voice ('own voice') and other voices, respectively. Other voices may include or exclude the user's voice as considered practical in the specific application in question.
  • the hearing aid may thereby be configured to log the voice activity (e.g. a level of activity) of the user as well as of other persons in the environment of the user.
  • the hearing aid (e.g. the detector unit (DET)) may comprise an estimator of signal quality, e.g. of SNR, of the at least one electric input signal (IN) or of a signal derived therefrom.
  • the hearing aid may comprise an estimator of an ambient noise level, which may be estimated using the level detector and available voice activity detector(s), e.g. by making a noise estimate during speech pauses as determined by the voice indicator(s) (OVC, VAC).
  • a crude SNR may then be estimated as Level(voice)/Level(noise), which is a level difference when the levels are expressed in dB, see the sketch below.
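  • A sketch of this crude estimate: the mean frame level during voice activity minus the mean level during speech pauses (a linear-domain ratio corresponds to a difference in dB). The frame levels, voice flags and function name are invented for the example:

```python
# Sketch of the crude SNR estimate above; frame data are invented.
def crude_snr_db(frame_levels_db, voice_flags):
    voiced = [l for l, v in zip(frame_levels_db, voice_flags) if v]
    pauses = [l for l, v in zip(frame_levels_db, voice_flags) if not v]
    if not voiced or not pauses:
        return None                      # cannot estimate without both classes
    return sum(voiced) / len(voiced) - sum(pauses) / len(pauses)

levels_db = [72, 74, 55, 73, 54, 56]     # per-frame input levels
flags = [1, 1, 0, 1, 0, 0]               # from the voice indicators (OVC, VAC)
print(crude_snr_db(levels_db, flags))    # 73.0 - 55.0 = 18.0 dB
```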
  • the hearing aid may be configured to log the conditions for engaging in a conversation.
  • the hearing aid (HA) may be configured to log data representing a currently requested gain from a compressive amplification algorithm of the hearing aid (cf. signal GRQ from the processor (PRO) to the datalogger (DLOG) in FIG. 2 ).
  • the compressive amplification algorithm may be configured to provide frequency and level dependent gain (amplification or attenuation) to the at least one electric input signal or to a signal derived therefrom.
  • the requested gain, or changes to the requested gain, reflect properties of the current acoustic environment of the user.
  • the hearing aid may be configured to further log absolute or relative time periods of NO own voice activity.
  • the datalogger comprises or interfaces to a timing unit (cf. unit TIME in FIG. 2 ) providing an absolute time or a relative time elapsed, e.g. since the last power up of the hearing aid (the latter may be relatively easily determined by an appropriate counter and knowledge of the relevant clock frequency of the hearing device).
  • the logged data for a given time window (e.g. from power on of the hearing aid to power off, e.g. corresponding to a single day of normal operation) or for several time windows, e.g. corresponding to a larger period of time, e.g. a week or a month, or the like, can be stored in a memory of the hearing aid.
  • the logged data (DATA) can e.g. - via the communication interface (IF) - be transferred to a different apparatus (e.g. a smartphone, a similar processing device, or a fitting system) capable of analyzing and/or further processing the data and/or of presenting the data in a user interface.
  • the hearing aid may e.g. off-load the logged data to another device or server (e.g. in the cloud) according to a specific or adaptive scheme, e.g. in dependence of a current amount of logged data (or the remaining capacity of a memory), or of a measure of time elapsed.
  • An absolute timing (e.g. a time of day) may e.g. be obtained from specific timing-circuitry, e.g. included in the hearing aid, e.g. in communication with a time standard (e.g. the DCF77 in Frankfurt), or from another device (e.g. a smartphone or similar device, e.g. a watch) or from a network, e.g. including from a server.
  • The logging of data related to the user's (active) participation in conversations is illustrated in FIG. 3 and 4.
  • FIG. 3 shows a first time sequence reflecting a conversation of the user of the hearing aid with another person as detected by an own voice detector and a voice activity detector.
  • FIG. 3 shows values of different voice indicators (here control signals UVC (representing the user's voice) and OVC (representing other voice(s))) versus time (Time) for a time segment of an electric input signal of the hearing aid.
  • FIG. 3 shows an output of a voice activity detector that is capable of differentiating a user's voice from other voices in an environment of the user wearing the hearing aid.
  • voice activity is indicated by the respective control signal, UVC or OVC, being 1 or 0 (it could also or alternatively be indicated by a speech presence probability (SPP) being above or below a threshold, respectively).
  • time periods of user voice and other persons' voice are indicated by different filling.
  • An analysis of the combination of the indicators (UVC and OVC, respectively) of the presence or absence of the user's voice and other persons' voice may reveal a possible conversation with participation of the user. A conversation involving the user may be identified by a sequential (alternating) occurrence of user voice (UVC) and other voice (OVC) indications over a time period.
  • a criterion regarding the distance Δt_User-Other in time between the user voice indicator (UVC) shifting from active to inactive and the other person's voice indicator (OVC) shifting from inactive to active (or vice versa) may be applied.
  • in FIG. 3, Δt_User-Other equals t_o,1 - t_u,2 and t_o,2 - t_u,3, respectively.
  • such a criterion may e.g. be Δt_User-Other ≤ 2 s.
  • a slight overlap may be accepted, and a further criterion may e.g. be Δt_User-Other ≥ -2 s (thereby accepting a small period of 'double-talk').
  • a further criterion regarding the duration of each single period of active voice of the user (and/or of the other person(s)) may be imposed, e.g. a minimum duration; the minimum duration may e.g. be 5 s.
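  • The turn-taking criteria above can be illustrated for a single pair of turns as follows; the function name and example turns are invented, while the thresholds (a gap or overlap of at most 2 s, a minimum turn duration of 5 s) follow the values suggested above:

```python
# Illustrative check of the turn-taking criteria above for one pair of turns;
# thresholds follow the values suggested above, the turns are invented.
MAX_GAP_S, MAX_OVERLAP_S, MIN_TURN_S = 2.0, 2.0, 5.0

def valid_turn_pair(user_turn, other_turn):
    """Each turn is (start_s, end_s); dt is the other speaker's start minus
    the user's end, so a negative dt is an overlap ('double-talk')."""
    (u_start, u_end), (o_start, o_end) = user_turn, other_turn
    dt_user_other = o_start - u_end
    return (-MAX_OVERLAP_S <= dt_user_other <= MAX_GAP_S
            and (u_end - u_start) >= MIN_TURN_S
            and (o_end - o_start) >= MIN_TURN_S)

print(valid_turn_pair((0.0, 6.0), (5.5, 12.0)))  # True: 0.5 s overlap accepted
print(valid_turn_pair((0.0, 3.0), (4.0, 12.0)))  # False: user turn too short
```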
  • FIG. 4 shows a second time sequence reflecting a varying acoustic environment of the user of the hearing aid, including sub-sequences reflecting a varying degree of speech-participation by the user.
  • FIG. 4 schematically illustrates a time window with time dependent values of indicators of the user's voice (UVC) and other persons' voice (VAC) (an 'active indication' of the respective UVC and VAC indicators is shown by different fillings, as in FIG. 3, bottom).
  • the time window comprises two time periods that indicate a user in conversation with another person, two time periods that indicate silence (or no significant voice activity), and one time period of another person's voice without user participation, e.g. reflecting another person talking without the user replying, e.g. a radio or TV.
  • the time window of FIG. 4 has a range from t_1 to t_6, i.e. spans a time period of duration Δt_w = t_6 - t_1.
  • the time window of FIG. 4 comprises in consecutive order: (a 1st period of) 'conversation', (a 1st period of) 'silence', (a 1st period of) 'one way speech', (a 2nd period of) 'silence', and (a 2nd period of) 'conversation'.
  • the individual time periods of each acoustic event may e.g. be estimated based on the logged data, either in the hearing aid or in another device or system to which the data are transferred.
  • the data logged over time (cf. the time windows illustrated in FIG. 3, 4, in practice comprising more acoustic events and representing longer time periods, e.g. days or weeks) and their subsequent analysis may allow extraction of information regarding the user's (voiced) social activity, e.g. in dependence of the acoustic environment (noisy environments may result in decreased activity), or in dependence of the time of day (a decrease with time of day, or with time from power-on of the hearing aid, e.g. reflecting some sort of cognitive fatigue).
  • the analysis may result in changes being made to the processing of the hearing aid (e.g. increased noise reduction and/or more directionality in noisy environments).
  • the logged data may e.g. be used to extract information about the complexity (and length) of conversations engaged in by the user and in particular to changes in such parameters.
  • the repeated logging over time of own voice activity, other voice activity, input signal level (e.g. low, medium, high), noise level and/or signal-to-noise ratio may e.g. allow such information to be extracted.
  • the logged data may e.g. be up-loaded (off-loaded) to another device or server with a predefined frequency, e.g. 5 minutes, or every hour, or once a day (e.g. as part of a power-off procedure).
  • the hearing aid may be configured to take specific measures in case the intended (planned) off-loading of the logged parameters (to empty the memory and make room for new data) cannot be performed, e.g. due to lack of a communication link, lack of power of the hearing aid, lack of availability of the receiving device or system, etc.
  • Such specific measures may be to minimize the amount of data (and thus be able to cover a longer time window) by averaging values of the parameters (P_1, ..., P_Q) over time.
  • the parameters may be averaged over different time periods (e.g. so that voice detection data (e.g. in particular own voice detection data) are prioritized over other parameters, e.g. level, or SNR, which may be assumed to vary more slowly, than the dynamic events of a conversation).
  • FIG. 5 shows an embodiment of a hearing system comprising a hearing aid and a programming device according to the present disclosure.
  • the hearing aid (HA) is in communication with a programming device (PD, e.g. a fitting system or other processing device, e.g. a smartphone).
  • the communication is e.g. via a direct link (LINK) or via a network.
  • the programming device (PD) comprises a communication interface (IF) allowing it to establish a communication link to the hearing aid and to receive data from and transmit data to the hearing aid.
  • the programming device (PD) may e.g. comprise a processing unit (COMP), a memory (MEM) and an analyser (ANA).
  • the processing unit may comprise a digital signal processor configured to run fitting software of the hearing aid, e.g. to adapt processing parameters of the processor (PRO) of the hearing aid to the needs of a particular user (cf. double-arrowed line between the processor (PRO) and the communication interface (IF) of the hearing aid (HA)).
  • the programming device (PD) further comprises a user interface coupled to the processing unit (COMP), the memory (MEM) and the analyser (ANA).
  • the user interface comprises a visual display (DISP) and a keyboard (KEYB), allowing data (e.g. logged data, or results of their analysis) to be displayed and inputs to be entered.
  • the programming device may (via the memory) e.g. have access to logged data from several time windows, e.g. representing observations over a time span of weeks or months.
  • the programming device may have access to corresponding data from the available detectors in a time series spanning the mentioned period of weeks or months, e.g. including voice activities of the user and of other persons in the environment of the user, in a time resolution that allows changes in the user's social vocal activity to be identified.
  • a schematic comparison of logged data for a particular user for two different time periods is shown.
  • a development in the user's active participation in conversations is (schematically) indicated from less in time period TP#1 to more in time period TP#2. This may be a result of changed parameter settings of the hearing aid (or the fitting of another (improved) hearing aid model) between time period TP#1 and TP#2, or it may be the result of a deliberate effort of the user to be more active (or both).
  • the results of the analysis may be inputs to a discussion with the user about his or her satisfaction with the hearing aid, and/or to the changing of parameter settings, fitting of a new hearing aid with improved features, etc.
  • Important learnings from the data are possible changes (over time) in the length and complexity of the user's conversations with other people, which can be taken as an indication of improved social engagement (decreased self-isolation).
  • FIG. 6 schematically illustrates an example of data aggregation according to the present disclosure.
  • FIG. 6 shows values of averaged parameters ('Parameters averaged over Δt', normalized scale), e.g. voice activity detection, over time ('Time [s]').
  • Each 'vertical box' represents a data container (DataC).
  • Each data container (DataC) holds a value (e.g. an average value) of one or more 'conversation parameters' intended to be logged by the hearing aid (or an external device connected to the hearing aid).
  • a conversation parameter may e.g. be a ratio of time periods with voice activity (e.g. own voice activity) to time periods of speech pauses.
  • FIG. 6 shows a multitude of observation windows of variable duration in time, here t1, t2, ..., tn, tn+1, ....
  • Each data container (DataC) of a given observation window tn has a common width Δtn representing the time range that the data of that data container represent, e.g. a single value sampled in the time range Δtn or an average of values sampled in the time range Δtn.
  • Δtn, representing the time range of the data containers of observation window n, is indicated to be smaller than or equal to A, B, N, and INF for observation windows t1, t2, tn, and tn+1, respectively. It may be assumed that A < B < N < INF.
  • the observation windows may e.g. be of increasing duration in time (t1, t2, ...).
  • the duration in time may e.g. increase with increasing n, e.g. for n larger than a first threshold value nth1.
  • the duration in time of the observation windows may be different for different hearing aid models or styles (e.g. dependent on processor clock frequency, memory, processing algorithms, etc.).
  • the duration in time of the observation windows may e.g. vary from t1 being of the order of milliseconds to later windows being of the order of minutes or larger.
  • Each observation window (t1, t2, ...) contains a number NDC,n of data containers (DataC). All observation windows (t1, t2, ...) may contain the same number NDC of data containers (DataC).
  • the observation windows may comprise different numbers NDC,n of data containers, e.g. decreasing with increasing n, e.g. for n larger than a second threshold value nth2.
  • the first and second threshold values of n (nth1, nth2) may be equal or different.
  • each data container (DataC, irrespective of its width in time Δtn) occupies the same space in the memory (because it holds the same number of data values).
  • the storage frequency may e.g. be reduced from the first to the second observation window (e.g. by a factor of 5), so that each successive data container covers a correspondingly longer time range.
  • the storage frequency is further reduced in the third observation window (e.g. by another factor of 5), etc.
  • the reduction of the storage frequency can be repeated an arbitrary number of times.
  • the reduction of the storage frequency can be terminated after a number of observation windows, after which the storage frequency is kept constant.
  • the strategy for successively reducing the storage frequency can be controlled by a storage controller, e.g. in dependence of one or more of a memory size, a battery status of the hearing aid, an estimated time to the next possible off-loading of logged data, etc.
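  • A minimal sketch of such a storage strategy is given below (hypothetical Python; the base period, factor and number of reductions are illustrative, not prescribed by the disclosure):

```python
def storage_periods(base_period_s=1.0, factor=5, n_windows=5, max_reductions=2):
    """Storage period per observation window: the storage frequency is reduced
    by `factor` from one window to the next until `max_reductions` reductions
    have been applied, after which it is kept constant."""
    return [base_period_s * factor ** min(n, max_reductions) for n in range(n_windows)]

print(storage_periods())  # [1.0, 5.0, 25.0, 25.0, 25.0]
```

A storage controller could select these parameters in dependence of memory size, battery status, and the estimated time to the next off-load, as described above.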
  • the logged data may e.g. be off-loaded to an external device (e.g. via an APP or directly, e.g. automatically, when the hearing aid is connected to the external device), e.g. to a memory of a portable device, e.g. a smartphone, or to a fitting system of the hearing aid.
  • a reason for applying such a storage strategy is that it may be difficult to predict the time between data off-loads.
  • a successful data off-load may be dependent on connectivity conditions at a given time (e.g. whether the data receiver (e.g. a smartphone or a fitting system) is within reach of the hearing aid).
  • a successful data off-load may be dependent on the hearing aid having sufficient power to establish a link to the receiving device or system, etc.
  • a successful data off-load may be dependent on the receiving device or system being ready to receive data from the hearing aid (there may be other tasks that have higher priorities than the reception of logged data from the hearing aid).
  • FIG. 7 schematically illustrates a hearing aid system according to the present disclosure, wherein an external processor (PRO) and memory (MEM) are built into a charging station (CHAS) for a hearing aid (HA1) or a pair of hearing aids (HA1, HA2).
  • the charging station (CHAS) can thereby be used to off-load data (LOGD) from a datalogger (cf. DLOG in FIG. 1, 2 ) of the hearing aid(s).
  • the charging station comprises a memory (MEM) for receiving data from the datalogger.
  • the charging station comprises (antenna (ANT) and) transceiver circuitry (WLIF) for establishing a communication link (WL) to the hearing aid(s).
  • the charging station may e.g. comprise one or more sensors for classifying the environment around the charging station, e.g. a microphone or other sensor, e.g. for detection of background noise.
  • the sensor data may be added to the logged data (LOGD) while the hearing aids are located in the charging station (and/or as long as the charging station (CHAS) and the hearing aids (HA1, HA2) are in communication via the communication link (WL), e.g. as long as the distance D between them is smaller than a maximum transmission/reception range of the link (WL)).
  • the charging station may e.g. comprise an absolute clock, the time of which can be added to the logged data when the hearing aids are located in the charging station.
  • the processor (PRO) of the charging station may have a larger processing power than a processor of the wearable device.
  • the processor may be configured to analyze the logged data from the hearing aid(s).
  • the charging station may be located on a support (Support), e.g. a table, in an appropriate place with a view to being accessible to the hearing aids when the user moves around.
  • the charging station may be a pocket-size, portable, device comprising an interface (PSIF), e.g. including a connector, to an electricity network, and/or a local (e.g. rechargeable) battery (BAT) for charging a battery or batteries of the hearing aids (HA1, HA2).
  • the battery of the charging station is assumed to have a significantly larger capacity than a battery of the hearing aid.
  • the charging station may further comprise an interface (DIF) to a data network.
  • the interface is configured to establish a (here wireless) connection to the data network (cf. link WLDL, e.g. WiFi), e.g. to provide access to servers, e.g. a fitting system, on the Internet (cloud computing). Thereby the off-loaded data may be uploaded to the fitting system via the data network.
  • FIG. 8 shows an embodiment of a hearing aid (HA) according to the present disclosure comprising a BTE-part located behind an ear of a user and an ITE-part located in an ear canal of the user, and an auxiliary device (AD) in communication with the hearing aid, comprising a user interface (UI).
  • FIG. 8 illustrates an exemplary hearing aid (HA) formed as a receiver in the ear (RITE) type hearing aid comprising a BTE-part ( BTE ) adapted for being located behind pinna and a part (ITE) comprising an output transducer (OT, e.g. a loudspeaker/receiver) adapted for being located in an ear canal (Ear canal) of the user (e.g. exemplifying a hearing aid (HA) as shown in FIG. 1, 2 ).
  • the BTE-part (BTE) and the ITE-part (ITE) are connected (e.g. electrically connected, e.g. via a cable comprising a multitude of conductors, e.g.
  • the BTE-part (BTE) comprises two input transducers (here microphones) (MBTE1, MBTE2), each providing an electric input audio signal representative of an input sound signal (SBTE) from the environment (in the scenario of FIG. 8, from sound source S, e.g. a communication partner).
  • the hearing aid (HA) of FIG. 8 further comprises two wireless transceivers (WLR1, WLR2) for receiving and/or transmitting signals (e.g. comprising audio and/or information, e.g. logged data according to the present disclosure).
  • the hearing aid (HA) further comprises a substrate (SUB) whereon a number of electronic components are mounted, functionally partitioned according to the application in question (analogue, digital, passive components, etc.), but including a configurable digital signal processor (DSP), a front-end chip (FE), and a memory unit (MEM) coupled to each other and to input and output units via electrical conductors Wx.
  • the mentioned functional units may be partitioned in circuits and components according to the application in question (e.g. with a view to size, power consumption, analogue vs digital processing, etc.), e.g. integrated in one or more integrated circuits, or as a combination of one or more integrated circuits and one or more separate electronic components (e.g. inductor, capacitor, etc.).
  • the configurable signal processor provides an enhanced audio signal (cf. signal OUT in FIG. 1, 2 ), which is intended to be presented to a user.
  • the front-end integrated circuit (FE) is adapted to provide an interface between the configurable signal processor (DSP) and the input and output transducers, etc., and typically comprises interfaces between analogue and digital signals.
  • the input and output transducers may be individual separate components, or integrated (e.g. MEMS-based) with other electronic circuitry.
  • the ITE-part comprises an output unit in the form of a loudspeaker (receiver) (SPK) for converting the electric signal (OUT) to an acoustic signal (providing, or contributing to, acoustic signal SED at the ear drum (Ear drum)).
  • the ITE-part further comprises an input unit comprising an input transducer (e.g. a microphone) (MITE) for providing an electric input audio signal representative of an input sound signal SITE from the environment at or in the ear canal.
  • the hearing aid may comprise only the BTE-microphones (M BTE1 , M BTE2 ).
  • the hearing aid may comprise an input unit located elsewhere than at the ear canal in combination with one or more input units located in the BTE-part and/or the ITE-part.
  • the ITE-part further comprises a guiding element, e.g. a dome, (DO) for guiding and positioning the ITE-part in the ear canal of the user.
  • the hearing aid (HA) exemplified in FIG. 8 is a portable device and further comprises a battery (BAT) for energizing electronic components of the BTE- and ITE-parts.
  • the hearing aid (HA) may be identical to the hearing aid(s) illustrated in FIG. 7 .
  • the hearing aid may comprise a directional microphone system (e.g. a beamformer filter) adapted to enhance a target acoustic source among a multitude of acoustic sources in the local environment of the user wearing the hearing aid.
  • the memory unit may form part of the datalogger and comprise logged data according to the present disclosure.
  • the hearing aid of FIG. 8 may constitute or form part of a binaural hearing aid system according to the present disclosure.
  • the hearing aid (HA) may comprise a user interface UI, e.g. as shown in the bottom part of FIG. 8 implemented in an auxiliary device (AD), e.g. a remote control, e.g. implemented as an APP in a smartphone or other portable (or stationary) electronic device (e.g. a charging station).
  • the screen of the user interface (UI) illustrates a Datalogging APP.
  • the user may configure the datalogger via the APP.
  • the user may e.g. select the data that should be logged, e.g. Own-voice data, Other voice data, internal and external sensor data (termed HA-sensors and External sensors, respectively, in the exemplified screen of FIG. 8 ).
  • Own-voice, Other voice, and HA-sensors have been selected (as indicated by the filled square symbols (■)).
  • the user may further off-load logged data to another device or system, e.g. to a fitting system, a Smartphone or to a Charging station (see e.g. FIG. 7 ).
  • connection to the smartphone is selected (as indicated by the filled square symbol (■)).
  • Unselected options are indicated by open square symbols (□).
  • the auxiliary device (AD) and the hearing aid (HA) are adapted to allow communication of data representative of the currently selected direction (if deviating from a predetermined direction (already stored in the hearing aid)) to the hearing aid via a, e.g. wireless, communication link (cf. dashed arrow WL2 in FIG. 8 ).
  • the communication link WL2 may e.g. be based on far field communication, e.g. Bluetooth or Bluetooth Low Energy (or similar technology), implemented by appropriate antenna and transceiver circuitry in the hearing aid (HA) and the auxiliary device (AD), indicated by transceiver unit WLR 2 in the hearing aid.
  • The terms “connected” or “coupled” as used herein may include wirelessly connected or coupled.
  • the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

A hearing aid is configured to be worn at or in an ear of a user, and comprises a) an input unit comprising at least one input transducer, e.g. a microphone, for picking up sound from the environment of the hearing aid and configured to provide at least one electric input signal representing said sound; b) an own voice detector configured to detect whether or not, or with what probability, said at least one electric input signal, or a processed version thereof, comprises a voice from the user of the hearing aid, and to provide an own voice control signal indicative thereof; and c) a datalogger for logging data related to the use of said hearing aid over time. The hearing aid is configured to log absolute or relative time periods of own voice activity in dependence of said own voice control signal in the datalogger. A method of operating a hearing aid is further disclosed.

Description

    SUMMARY
  • Hearing care is about restoring the ability to hear. A dominant element of this restoration is to regain the ability to understand speech in various sound environments.
  • Another less explored element of rehabilitation is the importance of daily use of the hearing aids. Currently we can track if the instruments are being worn, but not if they are actively used to re-engage the wearer in conversation.
  • The ability to monitor, nudge, and guide the wearer to continuously challenge and improve hearing ability and social interaction is important to regain an active social lifestyle.
  • The proposed solution may track and estimate the activity level of the user and derive a score for a relative objective in a rehabilitation plan. The proposed solution may be used to document treatment outcome for users frequently connected as well as users offline between consultations.
  • Using an own voice detector in the hearing aids, the hearing instruments may be configured to log the time a wearer is speaking and combine it with data concerning the sound environment, such as SNR, or the activity level of other identified speech sources.
  • The result may help a hearing care professional to prescribe specific targets to aid in the rehabilitation for a specific user (e.g. active participation in conversations, its frequency and/or duration).
  • By observing a window of variable time, the tracking of own voice activity and other talker activity can be combined with a ratio of pauses. From this ratio it can be derived if the user is engaging in a conversation rather than passively listening to for example the TV. The ratio (or equivalent data) for a given time window may be stored in the instrument for periodic retrieval.
  • In the present context, 'a window of variable time' is taken to mean an 'observation window' that may vary in time. The data that are observed in the observation window may e.g. include voice activity detection data (e.g. own voice and other voice activity or no-activity). An indication of the voice activity in the environment of a user may e.g. be provided by a ratio of the sum of time periods with voice activity and the total time period of the observation window (the total being the sum of time periods with voice activity and the sum of speech pauses). Further sub-indicators may be provided by a) the ratio of the sum of time periods with own voice activity and the total time period of the observation window, and b) the ratio of the sum of time periods with other voice activity and the total time period of the observation window. The pauses may be further classified as 'pauses before own voice' and 'pauses before other voice', which may additionally be used to evaluate the user's conversation participation pattern.
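  • As a hedged illustration (hypothetical Python; names and durations are illustrative), the indicators defined above can be computed from the summed durations within one observation window as follows:

```python
def activity_ratios(own_s, other_s, pause_s):
    """Voice-activity indicators for one observation window, from summed
    durations in seconds (total = voice activity + speech pauses)."""
    total = own_s + other_s + pause_s
    return {
        "voice_ratio": (own_s + other_s) / total,
        "own_voice_ratio": own_s / total,
        "other_voice_ratio": other_s / total,
    }

# Example: 90 s own voice, 150 s other voice, 60 s pauses in a 5 minute window.
print(activity_ratios(90, 150, 60))
# {'voice_ratio': 0.8, 'own_voice_ratio': 0.3, 'other_voice_ratio': 0.5}
```

The pause classification mentioned above ('pauses before own voice' vs 'pauses before other voice') could be added by keeping two separate pause sums.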
  • The ratio for one or more time windows may be transferred to a different apparatus which is capable of further processing the data and/or presenting the data in a user interface. Such apparatus or processing device may be constituted by or comprise a fitting system, or a smartphone or a remote control device for the hearing aid, etc.
  • US2006222194A1 deals with a hearing aid comprising a datalogger and with the learning from these data. The hearing aid comprises an input unit, a signal processing unit, and a user interface for converting user interaction to a control signal thereby controlling a processing setting of the signal processing unit. The hearing aid further comprises a memory unit comprising a control section storing a set of control parameters associated with the acoustic environment, and a datalogger section receiving data from the input unit, the signal processing unit, and the user interface. The signal processing unit configures the setting according to the set of control parameters and comprises a learning controller adapted to adjust the set of control parameters according to the data in the data logging section.
  • Compared to previous wearing time tracking, the proposed solution enriches the result with the social activity and active participation of the wearer.
  • A hearing aid:
  • In an aspect of the present application, a hearing aid configured to be worn at or in an ear of a user is provided. The hearing aid comprises
    • An input unit comprising at least one input transducer, e.g. a microphone, for picking up sound from the environment of the hearing aid and configured to provide at least one electric input signal representing said sound;
    • An own voice detector configured to detect whether or not, or with what probability, said at least one electric input signal, or a processed version thereof, comprises a voice from the user of the hearing aid, and to provide an own voice control signal indicative thereof; and
    • A datalogger for logging data related to the use of said hearing aid over time.
    The hearing aid may be configured to log data - in said datalogger - representative of absolute or relative time periods of own voice activity in dependence of said own voice control signal.
  • Thereby an improved hearing aid may be provided.
  • The hearing aid may be configured to further log data concerning a sound environment, at least during said time periods of own voice activity. The data concerning a sound environment may be logged with the same (or lower) frequency as the own voice activity is logged. The hearing aid may comprise one or more detectors of the acoustic environment. The hearing aid may be configured to receive data from one or more detectors of the acoustic environment located in other devices or systems, e.g. an external device, such as a smartphone, or a charging station or other auxiliary device in communication with the hearing aid.
  • The data concerning a sound environment may include a measure of sound quality, e.g. a signal to noise ratio (SNR). The hearing aid may comprise a detector for providing a measure of sound quality of the electric input signal or a signal originating therefrom. The hearing aid may comprise a detector for estimating an SNR of the at least one electric input signal, or a processed version thereof. The hearing aid may comprise a level detector for estimating a current level of the at least one electric input signal or of a signal derived therefrom.
  • The data concerning a sound environment may include an activity level of other (identified) speech sources. 'An activity level' of a sound source (an external or the user) may e.g. be a duration of activity in an absolute or relative time scale (e.g. in seconds or in a number of time units (absolute or arbitrary) relative to the number of time units of a total period of observation). 'An activity level' may e.g. include a number of distinct events of activity (e.g. separated by a certain minimum time period) and a total duration of activity in an absolute or relative scale of the sound source in question.
  • The data may comprise a requested gain from a compressive amplification algorithm of the hearing aid. The compressive amplification algorithm may be configured to compensate for the user's hearing impairment. The compressive amplification algorithm may be configured to provide frequency and level dependent gain (amplification or attenuation) to the at least one electric input signal or to a signal derived therefrom.
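  • To make the logged 'requested gain' concrete, the following sketch (hypothetical Python; knee point, compression ratio and gain are illustrative fitting parameters, not values prescribed by the disclosure) shows a simple level dependent compressive gain of the kind such an algorithm might request per frequency band:

```python
def compressive_gain_db(input_level_db, knee_db=50.0, ratio=2.0, linear_gain_db=20.0):
    """Level dependent gain: constant below the knee point, compressive above it.

    Above the knee, `ratio` dB of input growth yields 1 dB of output growth,
    so the requested gain decreases with level. One parameter set per
    frequency band gives the frequency dependence."""
    if input_level_db <= knee_db:
        return linear_gain_db
    return linear_gain_db - (input_level_db - knee_db) * (1.0 - 1.0 / ratio)

print(compressive_gain_db(50.0))  # 20.0 dB (linear region)
print(compressive_gain_db(70.0))  # 10.0 dB (2:1 compression above the knee)
```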
  • The hearing aid may comprise a voice activity detector configured to detect whether or not, or with what probability, said at least one electric input signal, or a processed version thereof, comprises a human voice and to provide a voice control signal indicative thereof. The voice activity detector may be configured to detect speech. The voice activity detector may be configured to differentiate between the voice of the user wearing the hearing aid and other voices (e.g. using a level differentiation, and/or a trained algorithm, e.g. a neural network), in which case the voice activity detector may include the own voice detector. The voice activity detector may, however, also be configured NOT to differentiate between the voice of the user wearing the hearing aid and other voices. In such case, time periods where a voice other than the user's voice is present may be determined from a (e.g. logic) combination of the own voice control signals and the (other) voice control signals, e.g. OTHER VOICE = VOICE AND (NOT OWN VOICE).
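  • The logic combination above is directly expressible in code (a trivial sketch; a probabilistic variant would combine detector probabilities instead of booleans):

```python
def other_voice(voice_detected: bool, own_voice_detected: bool) -> bool:
    """OTHER VOICE = VOICE AND (NOT OWN VOICE), per the combination above."""
    return voice_detected and not own_voice_detected

assert other_voice(True, False) is True    # a voice that is not the user's
assert other_voice(True, True) is False    # the user's own voice
assert other_voice(False, False) is False  # no voice at all
```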
  • The hearing aid may be configured to further log absolute or relative time periods of NO own voice activity. The tracking of own voice activity can e.g. be combined with the logging of speech pauses, and/or the logging of total (absolute or relative) time elapsed in the observation window in question. A ratio of time periods of own voice activity to speech pauses may be logged. A ratio of time periods of voice activity to speech pauses may be logged. A ratio of time periods of other voice activity (than own voice) to speech pauses may be logged. From this ratio it can be derived if the user is engaging in a conversation rather than passively listening to for example the TV. The ratio for a given (observation) time window can be stored in the instrument for periodic retrieval. The ratio for one or more windows can be transferred to a different apparatus (e.g. a smartphone, a similar processing device, or a fitting system) which is capable of further processing the data and/or present the data in a user interface.
  • The datalogger may be configured to log data in successive observation windows of variable time, e.g. of increasing length over time, but with a constant or decreasing number of logged data values of successive observation windows. By observing a time window, e.g. an observation window of variable time, e.g. an observation window of increasing length over time (but with a constant (or decreasing) number of logged data values of successive windows), data can be logged over an extended time even with a limited storage capacity of a memory of the datalogger of the hearing aid, see e.g. FIG. 6. This may be advantageous in cases where the time between opportunities for off-loading data from the datalogger is unknown. Data stored by the datalogger may e.g. be off-loaded during charging of a rechargeable battery of the hearing aid in a charging station, see e.g. FIG. 7.
  • The hearing aid may comprise a communication interface allowing data to be exchanged with another device or system. The communication interface may be based on a cabled connection, e.g. comprising appropriate connectors, allowing easy connection (and dis-connection) of the hearing aid to/from the 'another device or system'. The communication interface may be based on a wireless connection to the 'another device or system', e.g. via a network.
  • The hearing aid may comprise an output unit, wherein the output unit comprises a number of electrodes of a cochlear implant type hearing aid, or a vibrator of a bone conducting hearing aid, or a loudspeaker of an air conduction hearing aid, or a combination thereof.
  • The hearing aid may be adapted to provide a frequency dependent gain and/or a level dependent compression and/or a transposition (with or without frequency compression) of one or more frequency ranges to one or more other frequency ranges, e.g. to compensate for a hearing impairment of a user. The hearing aid may comprise a signal processor for applying one or more processing algorithms to enhance the input signals and providing a processed output signal.
  • The hearing aid may comprise an output unit for providing a stimulus perceived by the user as an acoustic signal based on a processed electric signal. The output unit may comprise a number of electrodes of a cochlear implant (for a CI type hearing aid) or a vibrator of a bone conducting hearing aid. The output unit may comprise an output transducer. The output transducer may comprise a receiver (loudspeaker) for providing the stimulus as an acoustic signal to the user (e.g. in an acoustic (air conduction based) hearing aid). The output transducer may comprise a vibrator for providing the stimulus as mechanical vibration of a skull bone to the user (e.g. in a bone-attached or bone-anchored hearing aid).
  • The hearing aid may comprise an input unit for providing an electric input signal representing sound. The input unit may comprise an input transducer, e.g. a microphone, for converting an input sound to an electric input signal. The input unit may comprise a wireless receiver for receiving a wireless signal comprising or representing sound and for providing an electric input signal representing said sound. The wireless receiver may e.g. be configured to receive an electromagnetic signal in the radio frequency range (3 kHz to 300 GHz). The wireless receiver may e.g. be configured to receive an electromagnetic signal in a frequency range of light (e.g. infrared light 300 GHz to 430 THz, or visible light, e.g. 430 THz to 770 THz).
  • The hearing aid may comprise a directional system adapted to spatially filter sounds from the environment, and thereby enhance a target acoustic source among a multitude of acoustic sources in the local environment of the user wearing the hearing aid. The directional system may be adapted to detect (such as adaptively detect) from which direction a particular part of the input signal originates. This can be achieved in various different ways as e.g. described in the prior art. In hearing aids, a microphone array beamformer is often used for spatially attenuating background noise sources. Many beamformer variants can be found in literature. The minimum variance distortionless response (MVDR) beamformer is widely used in microphone array signal processing. Ideally the MVDR beamformer keeps the signals from the target direction (also referred to as the look direction) unchanged, while attenuating sound signals from other directions maximally. The generalized sidelobe canceller (GSC) structure is an equivalent representation of the MVDR beamformer offering computational and numerical advantages over a direct implementation in its original form.
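  • For reference, the MVDR beamformer weights referred to above are commonly written as follows (standard notation from the beamforming literature, not specific to this disclosure):

```latex
\mathbf{w}_{\mathrm{MVDR}} = \frac{\mathbf{R}_v^{-1}\,\mathbf{d}}{\mathbf{d}^{H}\,\mathbf{R}_v^{-1}\,\mathbf{d}}
```

where R_v is the noise covariance matrix and d is the look vector (steering vector) towards the target direction; the constraint w^H d = 1 keeps the target direction distortionless while the noise variance is minimized.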
  • Communication between the hearing aid and other devices or systems may be wired or wireless. Wireless communication may e.g. be in the base band (audio frequency range, e.g. between 0 and 20 kHz). Preferably, communication between the hearing aid and the other device is based on some sort of modulation at frequencies above 100 kHz. Preferably, frequencies used to establish a communication link between the hearing aid and the other device are below 70 GHz, e.g. located in a range from 50 MHz to 70 GHz, e.g. above 300 MHz, e.g. in an ISM range above 300 MHz, e.g. in the 900 MHz range or in the 2.4 GHz range or in the 5.8 GHz range or in the 60 GHz range (ISM=Industrial, Scientific and Medical, such standardized ranges being e.g. defined by the International Telecommunication Union, ITU). The wireless link may be based on a standardized or proprietary technology. The wireless link may be based on Bluetooth technology (e.g. Bluetooth Low-Energy technology).
  • The hearing aid may be or form part of a portable (i.e. configured to be wearable) device, e.g. a device comprising a local energy source, e.g. a battery, e.g. a rechargeable battery. The hearing aid may e.g. be a low weight, easily wearable, device, e.g. having a total weight less than 100 g, such as less than 20 g.
  • The hearing aid may comprise a forward or signal path between an input unit (e.g. an input transducer, such as a microphone or a microphone system and/or direct electric input (e.g. a wireless receiver)) and an output unit, e.g. an output transducer. The signal processor may be located in the forward path. The signal processor may be adapted to provide a frequency dependent gain according to a user's particular needs. The hearing aid may comprise an analysis path comprising functional components for analyzing the input signal (e.g. determining a level, a modulation, a type of signal, an acoustic feedback estimate, etc.). Some or all signal processing of the analysis path and/or the signal path may be conducted in the frequency domain. Some or all signal processing of the analysis path and/or the signal path may be conducted in the time domain.
  • The hearing aid may be configured to operate in different modes, e.g. a normal mode and one or more specific modes, e.g. selectable by a user, or automatically selectable. A mode of operation may be optimized to a specific acoustic situation or environment. A mode of operation may include a low-power mode, where functionality of the hearing aid is reduced (e.g. to save power), e.g. to disable wireless communication, and/or to disable specific features of the hearing aid.
  • The hearing aid may comprise a number of detectors configured to provide status signals relating to a current physical environment of the hearing aid (e.g. the current acoustic environment), and/or to a current state of the user wearing the hearing aid, and/or to a current state or mode of operation of the hearing aid. Alternatively or additionally, one or more detectors may form part of an external device in communication (e.g. wirelessly) with the hearing aid. An external device may e.g. comprise another hearing aid, a remote control, an audio delivery device, a telephone (e.g. a smartphone), an external sensor, etc.
  • One or more of the number of detectors may operate on the full band signal (time domain). One or more of the number of detectors may operate on band split signals ((time-) frequency domain), e.g. in a limited number of frequency bands.
  • The number of detectors may comprise a level detector for estimating a current level of a signal of the forward path. The detector may be configured to decide whether the current level of a signal of the forward path is above or below a given (L-)threshold value. The level detector may operate on the full band signal (time domain) or on band split signals ((time-) frequency domain).
  • The hearing aid may comprise a voice activity detector (VAD) for estimating whether or not (or with what probability) an input signal comprises a voice signal (at a given point in time).
  • A voice signal may in the present context be taken to include a speech signal from a human being. It may also include other forms of utterances generated by the human speech system (e.g. singing). The voice activity detector unit may be adapted to classify a current acoustic environment of the user as a VOICE or NO-VOICE environment. This has the advantage that time segments of the electric microphone signal comprising human utterances (e.g. speech) in the user's environment can be identified, and thus separated from time segments only (or mainly) comprising other sound sources (e.g. artificially generated noise). The voice activity detector may be adapted to detect as a VOICE also the user's own voice. Alternatively, the voice activity detector may be adapted to exclude a user's own voice from the detection of a VOICE.
  • The hearing aid may comprise an own voice detector for estimating whether or not (or with what probability) a given input sound (e.g. a voice, e.g. speech) originates from the voice of the user of the system. A microphone system of the hearing aid may be adapted to be able to differentiate between a user's own voice and another person's voice and possibly from NON-voice sounds.
  • The number of detectors may comprise a movement detector, e.g. an acceleration sensor. The movement detector may be configured to detect movement of the user's facial muscles and/or bones, e.g. due to speech or chewing (e.g. jaw movement) and to provide a detector signal indicative thereof.
  • The hearing aid may comprise a classification unit configured to classify the current situation based on input signals from (at least some of) the detectors, and possibly other inputs as well. In the present context 'a current situation' may be taken to be defined by one or more of
    a) the physical environment (e.g. including the current electromagnetic environment, e.g. the occurrence of electromagnetic signals (e.g. comprising audio and/or control signals) intended or not intended for reception by the hearing aid, or other properties of the current environment than acoustic);
    b) the current acoustic situation (input level, feedback, etc.);
    c) the current mode or state of the user (movement, temperature, cognitive load, etc.); and
    d) the current mode or state of the hearing aid (program selected, time elapsed since last user interaction, etc.) and/or of another device in communication with the hearing aid.
  • The hearing aid may comprise a multi-level data storage system to distil conversation history across current, recent, and past conversations. The data storage scheme comprises that more data are stored for the current conversation and made available for other classifiers, and less data are stored for conversations that are no longer active. E.g. data are aggregated and stored in memory bins representing shorter or longer time intervals. For each storage bin, the data may be represented by a single numeric counter or a ratio value, in place of the time-domain classifier result that is logged in an active conversation. The hearing aid may be designed in a way that the available data storage and availability of means for data transport to other apparatus determine the degree of data aggregation. This dynamic aggregation may allow the hearing aid to store conversation tracking data for an arbitrary time period without sacrificing the detailed time-domain data for a specific number of conversation minutes, see e.g. FIG. 6.
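  • A minimal sketch of such aggregation into a storage bin is given below (hypothetical Python; the frame labels and the choice of counter/ratio are illustrative):

```python
def aggregate_conversation(frames):
    """Collapse the per-frame classifier results of a conversation that is no
    longer active into one compact memory bin: a counter plus a ratio value."""
    n = len(frames)
    own = sum(1 for f in frames if f == "own")
    return {"n_frames": n, "own_voice_ratio": own / n if n else 0.0}

# Detailed time-domain log kept while the conversation is active...
active_log = ["own", "other", "own", "pause", "other", "own"]
# ...replaced by one small bin once the conversation is no longer active.
print(aggregate_conversation(active_log))  # {'n_frames': 6, 'own_voice_ratio': 0.5}
```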
  • The classification unit may be based on or comprise a neural network, e.g. a trained neural network.
  • The hearing aid may further comprise other relevant functionality for the application in question, e.g. compression, noise reduction, feedback control, etc.
  • The hearing aid may comprise a hearing instrument, e.g. a hearing instrument adapted for being located at the ear or fully or partially in the ear canal of a user, e.g. a headset, an earphone, an ear protection device or a combination thereof.
  • Use:
  • In an aspect, use of a hearing aid as described above, in the 'detailed description of embodiments' and in the claims, is moreover provided. Use may be provided in a system comprising audio distribution. Use may be provided in a system comprising one or more hearing aids (e.g. hearing instruments), headsets, ear phones, active ear protection systems, etc.
  • A method of operating a hearing aid:
  • In an aspect, a method of operating a hearing aid configured to be worn at or in an ear of a user is provided. The method may comprise
    • providing at least one electric input signal representing sound;
    • detecting whether or not, or with what probability, said at least one electric input signal, or a processed version thereof, comprises a voice from the user of the hearing aid, and providing an own voice control signal indicative thereof;
    • logging data related to the use of said hearing aid over time.
    The logging of data may comprise logging absolute or relative time periods of own voice activity in dependence of said own voice control signal.
  • It is intended that some or all of the structural features of the device described above, in the 'detailed description of embodiments' or in the claims can be combined with embodiments of the method, when appropriately substituted by a corresponding process and vice versa. Embodiments of the method have the same advantages as the corresponding devices.
  • The method may apply progressive abstraction of data over a period of time. The hearing aid preserves detailed conversation data logging for a number of minutes, after which the data will be aggregated into more abstract usable counters and ratios. This allows the hearing aid to track data at a high resolution while the user is connected to a companion apparatus, and still allows the hearing aid to maintain relevant data between visits to a clinic if the user is offline the entire time, see e.g. FIG. 6.
  • A method of extracting information about a hearing aid user's conversations:
  • In an aspect, a method of extracting information about a hearing aid user's social engagement in conversations is provided.
  • The method comprises
    • Logging data in a hearing aid worn by the user over an extended period of time (e.g. weeks or months);
    • Wherein the logged data includes data representative of
      • ∘ the user's own voice activity over time, and
      • ∘ a general voice activity in an environment of the user over time;
    • Analyzing the logged data with a view to the user's own voice activity and the voice activity in the environment, to estimate the user's engagement in conversations.
  • The user's engagement in conversations may e.g. be estimated by identifying a conversation in the combined data from an own voice detector and a general voice detector (or a dedicated 'not-own voice' detector). A conversation is detected if the user's voice and another voice are detected, one voice following the other, without longer speech pauses between them (i.e. pauses not larger than a threshold value ΔtPAUSE, e.g. 5-10 seconds), see e.g. FIG. 3, 4. A sketch of this rule is given below.
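  • The following is a minimal sketch of the detection rule (hypothetical Python; the 7.5 s default merely picks a value inside the 5-10 s range mentioned above):

```python
def is_conversation(turns, pause_threshold_s=7.5):
    """Detect a conversation per the rule above: own voice and another voice
    follow each other without a pause exceeding pause_threshold_s.

    `turns` is a time-ordered list of (speaker, start_s, end_s),
    with speaker in {"own", "other"}."""
    for (spk_a, _, end_a), (spk_b, start_b, _) in zip(turns, turns[1:]):
        if spk_a != spk_b and (start_b - end_a) <= pause_threshold_s:
            return True
    return False

print(is_conversation([("own", 0.0, 4.0), ("other", 5.0, 9.0)]))  # True (1 s gap)
print(is_conversation([("other", 0.0, 30.0)]))                    # False (one-way speech)
```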
  • A computer readable medium or data carrier:
  • In an aspect, a tangible computer-readable medium (a data carrier) storing a computer program comprising program code means (instructions) for causing a data processing system (a computer) to perform (carry out) at least some (such as a majority or all) of the (steps of the) method described above, in the 'detailed description of embodiments' and in the claims, when said computer program is executed on the data processing system is furthermore provided by the present application.
  • By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Other storage media include storage in DNA (e.g. in synthesized DNA strands). Combinations of the above should also be included within the scope of computer-readable media. In addition to being stored on a tangible medium, the computer program can also be transmitted via a transmission medium such as a wired or wireless link or a network, e.g. the Internet, and loaded into a data processing system for being executed at a location different from that of the tangible medium.
  • A computer program:
  • A computer program (product) comprising instructions which, when the program is executed by a computer, cause the computer to carry out (steps of) the method described above, in the 'detailed description of embodiments' and in the claims is furthermore provided by the present application.
  • A data processing system:
  • In an aspect, a data processing system comprising a processor and program code means for causing the processor to perform at least some (such as a majority or all) of the steps of the method described above, in the 'detailed description of embodiments' and in the claims is furthermore provided by the present application.
  • A hearing system:
  • In a further aspect, a hearing system comprising a hearing aid as described above, in the 'detailed description of embodiments', and in the claims, AND an auxiliary device is moreover provided.
  • The hearing system may be adapted to establish a communication link between the hearing aid and the auxiliary device to provide that information (e.g. control and status signals, possibly audio signals) can be exchanged or forwarded from one to the other.
  • The auxiliary device may comprise a remote control, a smartphone, or other portable or wearable electronic device, such as a smartwatch or the like.
  • The auxiliary device may be constituted by or comprise a programming device (e.g. running a fitting software for adapting processing of the hearing aid to the needs, e.g. a hearing impairment, of the user of the hearing aid).
  • The auxiliary device may be constituted by or comprise a remote control for controlling functionality and operation of the hearing aid(s). The function of a remote control may be implemented in a smartphone, the smartphone possibly running an APP allowing to control the functionality of the audio processing device via the smartphone (the hearing aid(s) comprising an appropriate wireless interface to the smartphone, e.g. based on Bluetooth or some other standardized or proprietary scheme).
  • The auxiliary device may be constituted by or comprise another hearing aid. The hearing system may comprise two hearing aids adapted to implement a binaural hearing system, e.g. a binaural hearing aid system.
  • The auxiliary device may e.g. be or comprise a programming device, e.g. implementing a fitting system of the hearing aid. The auxiliary device may comprise a charging station comprising a memory (e.g. acting as an intermediate storage medium, e.g. of 'day-to-day data' from the datalogger, cf. e.g. FIG. 7). The auxiliary device may comprise a communication interface allowing the (wired or wireless) communication link to the hearing aid to be established. The communication interface(s) may comprise appropriate antenna and transceiver circuitry to implement a wireless link, e.g. based on Bluetooth or similar technology. The auxiliary device may comprise a communication interface allowing a connection to a server on a network, e.g. the Internet, e.g. 'in the cloud', to be established. Thereby data from the datalogger received from the hearing aid may be relayed from the auxiliary device (e.g. a cellphone or a charging station for the hearing aid) to a server accessible for analysis of the data, e.g. by a fitting system for the hearing aid.
  • The hearing system may be configured to download data from said datalogger to said auxiliary device. The auxiliary device may comprise a memory for storing data from the datalogger of the hearing aid. The auxiliary device may comprise an analyzing unit for analyzing data stored in the datalogger of the hearing aid and/or stored in the memory of the auxiliary device originating from the datalogger of the hearing aid. Data in the memory may originate from different time periods, e.g. time periods that together span more than one week, such as more than one month, such as more than 6 months. The auxiliary device may be configured to extract changes over time of said data originating from the datalogger of the hearing aid. The changes over time may relate to the user's vocal activity, e.g. in connection with other persons' vocal activity (e.g. related to conversations vs. passive listening). The auxiliary device may comprise a user interface, allowing a user to interact with the auxiliary device, e.g. via a touch sensitive display and/or a keyboard. The user interface may be configured to allow a user to display results of an analysis of the data from the datalogger. The user interface may e.g. allow a user (e.g. a hearing care professional) to access data from the datalogger originating from previous observation periods, thereby allowing a development or trend in user behavior to be extracted from the data.
  • An APP:
  • In a further aspect, a non-transitory application, termed an APP, is furthermore provided by the present disclosure. The APP comprises executable instructions configured to be executed on an auxiliary device to implement a user interface for a hearing aid or a hearing system described above in the 'detailed description of embodiments', and in the claims. The APP may be configured to run on a cellular phone, e.g. a smartphone, or on another portable (or stationary) electronic device allowing communication with said hearing aid or said hearing system (e.g. a charging station).
  • The APP may implement a Datalogging APP, from which a user may configure the datalogger. The user may e.g. select the data that should be logged, e.g. Own-voice data, Other voice data, internal and external sensor data (e.g. sensors or detectors related to an acoustic environment, and/or to a state of the user, e.g. a mental state). The sensors or detectors that may be selected for logging together with the voice activity data may include a movement sensor, a sound quality detector, a detector of body signals, e.g. brainwaves (e.g. EEG), a PPG sensor, etc. The APP may further allow the user to off-load logged data to another device or system, e.g. to a fitting system, to a smartphone or to a charging station, etc. (see e.g. FIG. 7). The APP may further allow the user to select a strategy or scheme for off-loading logged data to another device or system (e.g. among a number of predefined schemes).
  • Definitions:
  • In the present context, a hearing aid, e.g. a hearing instrument, refers to a device, which is adapted to improve, augment and/or protect the hearing capability of a user by receiving acoustic signals from the user's surroundings, generating corresponding audio signals, possibly modifying the audio signals and providing the possibly modified audio signals as audible signals to at least one of the user's ears. Such audible signals may e.g. be provided in the form of acoustic signals radiated into the user's outer ears, acoustic signals transferred as mechanical vibrations to the user's inner ears through the bone structure of the user's head and/or through parts of the middle ear as well as electric signals transferred directly or indirectly to the cochlear nerve of the user.
  • The hearing aid may be configured to be worn in any known way, e.g. as a unit arranged behind the ear with a tube leading radiated acoustic signals into the ear canal or with an output transducer, e.g. a loudspeaker, arranged close to or in the ear canal, as a unit entirely or partly arranged in the pinna and/or in the ear canal, as a unit, e.g. a vibrator, attached to a fixture implanted into the skull bone, as an attachable, or entirely or partly implanted, unit, etc. The hearing aid may comprise a single unit or several units communicating (e.g. acoustically, electrically or optically) with each other. The loudspeaker may be arranged in a housing together with other components of the hearing aid, or it may be an external unit in itself (possibly in combination with a flexible guiding element, e.g. a dome-like element).
  • More generally, a hearing aid comprises an input transducer for receiving an acoustic signal from a user's surroundings and providing a corresponding input audio signal and/or a receiver for electronically (i.e. wired or wirelessly) receiving an input audio signal, a (typically configurable) signal processing circuit (e.g. a signal processor, e.g. comprising a configurable (programmable) processor, e.g. a digital signal processor) for processing the input audio signal and an output unit for providing an audible signal to the user in dependence on the processed audio signal. The signal processor may be adapted to process the input signal in the time domain or in a number of frequency bands. In some hearing aids, an amplifier and/or compressor may constitute the signal processing circuit. The signal processing circuit typically comprises one or more (integrated or separate) memory elements for executing programs and/or for storing parameters used (or potentially used) in the processing and/or for storing information relevant for the function of the hearing aid and/or for storing information (e.g. processed information, e.g. provided by the signal processing circuit), e.g. for use in connection with an interface to a user and/or an interface to a programming device. In some hearing aids, the output unit may comprise an output transducer, such as e.g. a loudspeaker for providing an air-borne acoustic signal or a vibrator for providing a structure-borne or liquid-borne acoustic signal. In some hearing aids, the output unit may comprise one or more output electrodes for providing electric signals (e.g. to a multi-electrode array) for electrically stimulating the cochlear nerve (cochlear implant type hearing aid).
  • In some hearing aids, the vibrator may be adapted to provide a structure-borne acoustic signal transcutaneously or percutaneously to the skull bone. In some hearing aids, the vibrator may be implanted in the middle ear and/or in the inner ear. In some hearing aids, the vibrator may be adapted to provide a structure-borne acoustic signal to a middle-ear bone and/or to the cochlea. In some hearing aids, the vibrator may be adapted to provide a liquid-borne acoustic signal to the cochlear liquid, e.g. through the oval window. In some hearing aids, the output electrodes may be implanted in the cochlea or on the inside of the skull bone and may be adapted to provide the electric signals to the hair cells of the cochlea, to one or more hearing nerves, to the auditory brainstem, to the auditory midbrain, to the auditory cortex and/or to other parts of the cerebral cortex.
  • A hearing aid may be adapted to a particular user's needs, e.g. a hearing impairment. A configurable signal processing circuit of the hearing aid may be adapted to apply a frequency and level dependent compressive amplification of an input signal. A customized frequency and level dependent gain (amplification or compression) may be determined in a fitting process by a fitting system based on a user's hearing data, e.g. an audiogram, using a fitting rationale (e.g. adapted to speech). The frequency and level dependent gain may e.g. be embodied in processing parameters, e.g. uploaded to the hearing aid via an interface to a programming device (fitting system), and used by a processing algorithm executed by the configurable signal processing circuit of the hearing aid.
  • A 'hearing system' refers to a system comprising one or two hearing aids, and a 'binaural hearing system' refers to a system comprising two hearing aids and being adapted to cooperatively provide audible signals to both of the user's ears. Hearing systems or binaural hearing systems may further comprise one or more 'auxiliary devices', which communicate with the hearing aid(s) and affect and/or benefit from the function of the hearing aid(s). Such auxiliary devices may include at least one of a remote control, a remote microphone, an audio gateway device, an entertainment device, e.g. a music player, a wireless communication device, e.g. a mobile phone (such as a smartphone) or a tablet or another device, e.g. comprising a graphical interface. Hearing aids, hearing systems or binaural hearing systems may e.g. be used for compensating for a hearing-impaired person's loss of hearing capability, augmenting or protecting a normal-hearing person's hearing capability and/or conveying electronic audio signals to a person. Hearing aids or hearing systems may e.g. form part of or interact with public-address systems, active ear protection systems, handsfree telephone systems, car audio systems, entertainment (e.g. TV, music playing or karaoke) systems, teleconferencing systems, classroom amplification systems, etc.
  • Embodiments of the disclosure may e.g. be useful in applications such as hearing aids for compensation for a user's hearing impairment.
  • BRIEF DESCRIPTION OF DRAWINGS
  • The aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity, and they just show details to improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effects will be apparent from and elucidated with reference to the illustrations described hereinafter in which:
    • FIG. 1 shows a first embodiment of a hearing aid comprising a datalogger according to the present disclosure,
    • FIG. 2 shows a second embodiment of a hearing aid comprising a datalogger according to the present disclosure,
    • FIG. 3 shows a first time sequence reflecting a conversation of the user of the hearing aid with another person as detected by an own voice detector and a voice activity detector,
    • FIG. 4 shows a second time sequence reflecting a varying acoustic environment of the user of the hearing aid, including sub-sequences reflecting a varying degree of speech-participation by the user,
    • FIG. 5 shows an embodiment of a hearing system comprising a hearing aid and a programming device according to the present disclosure,
    • FIG. 6 schematically illustrates an example of data aggregation according to the present disclosure,
    • FIG. 7 schematically illustrates a hearing aid system according to the present disclosure, wherein an external processor and memory is built into a charging station for a hearing aid or a pair of hearing aids, which can be used to off-load data from a datalogger of the hearing aid(s), and
    • FIG. 8 shows an embodiment of a hearing aid according to the present disclosure comprising a BTE-part located behind an ear of a user and an ITE part located in an ear canal of the user, and an auxiliary device comprising a user interface.
  • Further scope of applicability of the present disclosure will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the disclosure, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the following detailed description.
  • DETAILED DESCRIPTION OF EMBODIMENTS
  • The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as "elements"). Depending upon particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer program, or any combination thereof.
  • The electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated circuits (e.g. application specific), microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB) (e.g. flexible PCBs), and other suitable hardware configured to perform the various functionality described throughout this disclosure, e.g. sensors, e.g. for sensing and/or registering physical properties of the environment, the device, the user, etc. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • The present application relates to the field of hearing aids, in particular to data logging.
  • FIG. 1 shows a first embodiment of a hearing aid comprising a datalogger according to the present disclosure. FIG. 1 schematically illustrates a hearing aid (HA) configured to be worn at or in an ear of a user (or for being partially or fully implanted in the head at an ear of the user). The hearing aid (HA) comprises an input unit (IU). The input unit may e.g. comprise one or more input transducers, e.g. one or more microphones, configured to pick up sound (Acoustic input) from the environment of the hearing aid and to provide at least one electric input signal (IN) representing the sound. The input unit (IU) may comprise an analogue to digital converter for converting an analogue signal to a digitized signal (e.g. with a specific sampling frequency, e.g. fs = 20 kHz). The input unit (IU) may further comprise an analysis filter bank for converting a (e.g. digitized) time domain signal to a time-frequency domain signal (e.g. represented as a multitude of frequency sub-band signals, each representing a frequency sub-range of the frequency range of operation of the hearing aid). The hearing aid (HA) further comprises an own voice detector (OVD) configured to detect whether or not, or with what probability, the at least one electric input signal (IN), or a processed version thereof, comprises a voice from the user of the hearing aid, and to provide a user voice control signal (UVC) indicative thereof. The hearing aid (HA) further comprises a datalogger (DLOG) for logging, over time, data related to the use of the hearing aid, including absolute or relative time periods of own voice activity in dependence of the own voice control signal. The hearing aid may be configured to log parameters of the current acoustic environment, including the own voice control signal, over time according to a predefined or adaptively determined scheme. The hearing aid may be configured to log parameters of the current acoustic environment with a specific log frequency, e.g. with a frequency larger than 0.1 Hz. The logged data may be (temporarily) stored in a memory of the datalogger. The hearing aid (HA) further comprises a processor (PRO) for applying one or more processing algorithms to the at least one electric input signal (IN). The one or more processing algorithms may include one or more of a compressive amplification algorithm configured to compensate for a hearing impairment of the user, a noise reduction algorithm, a feedback control algorithm, a directional beamforming algorithm, etc. The processor (PRO) provides a processed signal (OUT) representing sound (e.g. the sound picked up by the input unit (IU), and/or sound received from another device), which is fed to an output unit (OU). The output unit is configured to provide stimuli perceivable as sound to the user based on the processed signal (OUT). The output unit (OU) may comprise an output transducer, e.g. a loudspeaker for providing air-conducted sound, or a vibrator for providing bone-conducted sound. The output unit (OU) may comprise a multi-electrode array for directly stimulating the cochlear nerve of an ear of the user. The output unit may further comprise a synthesis filter bank in case the processed output signal (OUT) comprises a multitude of frequency sub-band signals (a time-frequency domain signal) and/or a digital to analogue converter for converting a digitized signal to an analogue signal according to the specific application.
The signal path from the input unit (IU) to the output unit (OU) via the processor (PRO) defines a forward path of the hearing aid (for processing the input sound to an output signal perceivable as sound to the user). The hearing aid further comprises a communication interface (IF), e.g. comprising an appropriate connector or antenna and transceiver circuitry, allowing data to be exchanged between the hearing aid and another device or system, e.g. via a network. The communication interface (IF) may be based on near-field (e.g. inductive) communication or on far-field communication (e.g. based on Bluetooth or similar technologies).
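A minimal sketch of such a datalogger (cf. DLOG of FIG. 1) is given below: at each log tick it stores the own voice control signal (UVC) together with a relative timestamp, from which absolute or relative periods of own-voice activity can later be reconstructed. The bounded capacity, record layout and method names are illustrative assumptions, not the actual implementation of the disclosure.

```python
from collections import deque
import time

class DataLogger:
    """Sketch of a datalogger storing (relative time, UVC) records."""

    def __init__(self, capacity: int = 10_000):
        self._records = deque(maxlen=capacity)  # bounded memory, as in a hearing aid
        self._t0 = time.monotonic()             # relative time since power-up

    def log(self, uvc: int) -> None:
        """Store one sample of the own voice control signal (0 or 1)."""
        self._records.append((time.monotonic() - self._t0, uvc))

    def own_voice_time(self) -> float:
        """Total logged time (s) with own voice active, from sample intervals."""
        total, prev_t, prev_uvc = 0.0, None, 0
        for t, uvc in self._records:
            if prev_t is not None and prev_uvc:
                total += t - prev_t
            prev_t, prev_uvc = t, uvc
        return total
```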
  • FIG. 2 shows a second embodiment of a hearing aid comprising a datalogger according to the present disclosure. The embodiment of a hearing aid (HA) in FIG. 2 comprises the same elements as the embodiment described in connection with FIG. 1. In addition, the embodiment of FIG. 2 comprises further detectors (DET), including e.g. an SNR estimator and/or a level estimator to monitor the acoustic environment. The embodiment of FIG. 2 comprises separate own voice (OVD) and voice activity (VAD) detectors providing respective indicators OVC and VAC regarding the presence of the user's voice ('own voice') and other voices, respectively. Other voices may include or exclude the user's voice as considered practical in the specific application in question. As mentioned, the user's voice may (if necessary) be excluded from the other-voice indication by appropriate combination of the two indicators (OVC, VAC). The hearing aid may thereby be configured to log the voice activity (e.g. a level of activity) of the user as well as of other persons in the environment of the user. The hearing aid may e.g. be configured to log data concerning a sound environment (e.g. level, SNR) at least during specific time periods, e.g. periods of (a certain) own voice activity (e.g. OVC = 1, or ≥ 50%), and/or triggered by specific events, e.g. during changes of an acoustic environment. The detector unit (DET) of the embodiment of FIG. 2 may comprise a level estimator for estimating a current level of the at least one electric input signal (IN) or of a signal derived therefrom. The hearing aid (e.g. the detector unit (DET)) may comprise an estimator of signal quality, e.g. of SNR, of the at least one electric input signal (IN) or of a signal derived therefrom. The hearing aid may comprise an estimator of an ambient noise level, which may be estimated using the level detector and the available voice activity detector(s), e.g. by making a noise estimate during speech pauses as determined by the voice indicator(s) (OVC, VAC). A crude SNR may then be estimated as Level(voice)/Level(noise), the mentioned levels being determined during speech (e.g. Signal level = Level(VAC = 1)) and speech pauses (e.g. Noise level = Level(VAC = 0)), respectively. By monitoring the acoustic environment (including a noise level), the hearing aid may be configured to log the conditions for engaging in a conversation.
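The crude SNR estimate just described could be sketched as follows, averaging the level detector output separately during speech (VAC = 1) and speech pauses (VAC = 0); since the levels are in dB, the ratio becomes a difference. The smoothing constant and the toy input frames are assumptions for illustration.

```python
def update_snr(level_db: float, vac: int, state: dict, alpha: float = 0.05) -> float:
    """Update running speech/noise level estimates and return a crude SNR (dB)."""
    key = "speech" if vac else "noise"
    old = state.get(key, level_db)
    state[key] = (1.0 - alpha) * old + alpha * level_db  # leaky average per class
    return state.get("speech", level_db) - state.get("noise", level_db)

state: dict = {}
for level, vac in [(65, 1), (50, 0), (67, 1), (52, 0)]:  # toy (level dB, VAC) frames
    snr_db = update_snr(level, vac, state)
print(f"crude SNR estimate: {snr_db:.1f} dB")
```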
  • The hearing aid (HA) may be configured to log data representing a currently requested gain from a compressive amplification algorithm of the hearing aid (cf. signal GRQ from the processor (PRO) to the datalogger (DLOG) in FIG. 2). The compressive amplification algorithm may be configured to provide frequency and level dependent gain (amplification or attenuation) to the at least one electric input signal or to a signal derived therefrom. The requested gain, or changes to the requested gain, reflects properties of the current acoustic environment of the user.
  • The hearing aid may be configured to further log absolute or relative time periods of NO own voice activity. In the embodiment of FIG. 2, the datalogger comprises or interfaces to a timing unit (cf. unit TIME in FIG. 2) providing an absolute time or a relative time elapsed, e.g. since the last power up of the hearing aid (the latter may be relatively easily determined by an appropriate counter and knowledge of the relevant clock frequency of the hearing device). By observing a specific time window, the tracking of own voice activity as the sum of time segments wherein the own voice indicator is high (e.g. = 1, user voice detected) and of no own voice activity as the sum of time segments wherein the own voice indicator is low (e.g. = 0, no user voice detected) can be combined, e.g. to define a ratio of vocal activity to vocal pauses of the user (or of vocal activity to total time elapsed). From this ratio, a degree of active user engagement in conversations and a degree of passive listening (e.g. to the TV) can be estimated.
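A minimal sketch of this ratio could look as follows, assuming the own voice indicator is logged as 0/1 samples at a fixed rate; the 0.1 s sample period and the function name are illustrative assumptions.

```python
def activity_ratio(uvc_samples: list[int], sample_period_s: float = 0.1):
    """Return (active/total, active/pause) ratios for an observation window."""
    active = sum(uvc_samples) * sample_period_s  # time with own voice (s)
    total = len(uvc_samples) * sample_period_s   # total window length (s)
    silent = total - active                      # time with NO own voice (s)
    return active / total, (active / silent if silent else float("inf"))

# Own-voice indicator logged at 10 Hz: here 40% of the window is own voice.
ratio_to_total, ratio_to_pauses = activity_ratio([1, 1, 0, 0, 1, 0, 1, 0, 0, 0])
print(ratio_to_total, ratio_to_pauses)
```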
  • The logged data for a given time window (e.g. from power on of the hearing aid to power off, e.g. corresponding to a single day of normal operation) or for several time windows, e.g. corresponding to a larger period of time, e.g. a week or a month, or the like, can be stored in a memory of the hearing aid. The logged data (DATA) can e.g. - via the communication interface (IF) - be transferred to a different apparatus (e.g. a smartphone, a similar processing device, or a fitting system) which is capable of analyzing and/or possibly further processing the data and/or of presenting the data in a user interface. The hearing aid may e.g. be configured to off-load logged data to another device or server (e.g. in the cloud) according to a specific or adaptive scheme, e.g. in dependence of a current amount of logged data (or remaining capacity of a memory), or of a measure of a time elapsed.
  • An absolute timing (e.g. a time of day) may e.g. be obtained from specific timing-circuitry, e.g. included in the hearing aid, e.g. in communication with a time standard (e.g. the DCF77 transmitter near Frankfurt), or from another device (e.g. a smartphone or similar device, e.g. a watch), or from a network, e.g. from a server.
  • The logging of data related to the user's (active) participation in conversations is illustrated in FIG. 3 and 4.
  • FIG. 3 shows a first time sequence reflecting a conversation of the user of the hearing aid with another person as detected by an own voice detector and a voice activity detector. FIG. 3 shows values of different voice indicators (here control signals UVC (representing the user's voice) and OVC (representing other voice(s))) versus time (Time) for a time segment of an electric input signal of the hearing aid. FIG. 3 thus shows the output of a voice activity detector that is capable of differentiating the user's voice from other voices in the environment of the user wearing the hearing aid. The vocal activity or inactivity of the user or other persons is indicated by control signals UVC or OVC, respectively, being 1 or 0 (it could also or alternatively be indicated by a speech presence probability (SPP) being above or below a threshold, respectively). In the time sequence depicted in FIG. 3, the top graph represents vocal activity of the user (between times tu,1 and tu,2 (time period ΔtUser(1) = tu,2 - tu,1) and between times tu,3 and tu,4 (time period ΔtUser(2) = tu,4 - tu,3)). The middle graph represents vocal activity of other persons (between times to,1 and to,2 (time period ΔtOther(1) = to,2 - to,1)), and the lower graph represents vocal activity of the user and other persons in combination (at the times and time periods indicated in the top and middle graphs). In the bottom graph, time periods of user voice and other persons' voice are indicated by different fillings. An analysis of the combination of the indicators (UVC and OVC, respectively) of the presence or absence of user voice and other persons' voice may reveal a possible conversation with participation of the user. A conversation involving the user may be identified by a sequential (alternating) occurrence of user voice (UVC) and other voice (OVC) indications over a time period. In the simplified example of FIG. 3, a conversation involving the user from time tu,1 to tu,4 (i.e. over a total time period of tu,4 - tu,1 = ΔtUser(1) + ΔtOther(1) + ΔtUser(2)) can be identified. During analysis, a criterion regarding the distance ΔtUser-Other in time between the user voice indicator (UVC) shifting from active to inactive and the other person's voice indicator (OVC) shifting from inactive to active (or vice versa) may be applied. For the two 'transitions' of FIG. 3, ΔtUser-Other = to,1 - tu,2 and tu,3 - to,2, respectively. Such a criterion may e.g. be ΔtUser-Other ≤ 2 s. A slight overlap may be accepted, and a further criterion may e.g. be ΔtUser-Other ≥ -2 s (thereby accepting a small period of 'double-talk'). A further criterion regarding the duration of each single period of active voice of the user (and/or the other person(s)) may be imposed, e.g. ΔtUser(j) ≥ Δtu,min, j = 1, ..., J, where J is the number of 'contributions' of the user in a given conversation (in FIG. 3, J = 2). The minimum duration Δtu,min may e.g. be 5 s. Other analysis criteria may relate to the average length <ΔtUser(j)> of the 'contributions' of the user in a given conversation (j = 1, ..., J) and/or over all conversations of a given time period (e.g. a day or a week).
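Such an analysis might be sketched as follows, merging time-sorted user and other-voice segments into conversations using the gap criterion -2 s ≤ ΔtUser-Other ≤ 2 s and the minimum user-contribution duration of 5 s from the example above. The segment format and the handling of too-short segments are assumptions for illustration.

```python
def find_conversations(segments, max_gap=2.0, max_overlap=2.0, min_user=5.0):
    """segments: time-sorted list of (start_s, end_s, 'user'|'other')."""
    conversations, current = [], []
    for seg in segments:
        if seg[2] == "user" and seg[1] - seg[0] < min_user:
            continue                              # user contribution too short to count
        if current and current[-1][2] != seg[2]:  # speaker alternation
            gap = seg[0] - current[-1][1]         # DeltaT between turns (may be negative)
            if -max_overlap <= gap <= max_gap:
                current.append(seg)
                continue
        if len(current) >= 2 and any(s[2] == "user" for s in current):
            conversations.append((current[0][0], current[-1][1]))
        current = [seg]
    if len(current) >= 2 and any(s[2] == "user" for s in current):
        conversations.append((current[0][0], current[-1][1]))
    return conversations

# The FIG. 3 example: user, other, user => one conversation from tu,1 to tu,4.
print(find_conversations([(0, 6, "user"), (7, 15, "other"), (16, 22, "user")]))
```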
  • FIG. 4 shows a second time sequence reflecting a varying acoustic environment of the user of the hearing aid, including sub-sequences reflecting a varying degree of speech-participation by the user. FIG. 4 schematically illustrates a time window wherein time dependent values of the indicators of the user's voice (UVC) and other persons' voice (VAC) are shown (an 'active indication' of the respective UVC and VAC indicators is shown by different fillings, as in FIG. 3, bottom). The time window comprises two time periods that indicate a user in conversation with another person, two time periods that indicate silence (or no significant voice activity), and one time period of another person's voice (without user participation, e.g. reflecting another person talking (without the user replying), e.g. voice from a radio, TV or other audio delivery device, or a person talking in the environment of the user). The time window of FIG. 4 ranges from t1 to t6, i.e. spans a time period of duration Δtw = t6 - t1. The time window of FIG. 4 comprises in consecutive order: (a 1st period of) 'conversation', (a 1st period of) 'silence', (a 1st period of) 'one way speech', (a 2nd period of) 'silence', and (a 2nd period of) 'conversation'. The individual time periods of each acoustic event ('conversation' (user voice, another voice), 'one way speech' (either the user or another voice), 'silence' (no voice)) may e.g. be estimated based on the logged data, either in the hearing aid or in another device or system to which the data are transferred.
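A minimal sketch of such a labelling from the two indicators could be as follows, assuming VAC here indicates other persons' voice (excluding the user's) and that each analysis stretch is labelled from its mean indicator activity; the threshold and frame layout are assumptions.

```python
def label_stretch(uvc_frames: list[int], vac_frames: list[int], thr: float = 0.1) -> str:
    """Label one analysis stretch as one of the FIG. 4 acoustic events."""
    user = sum(uvc_frames) / len(uvc_frames) > thr    # user voice present?
    other = sum(vac_frames) / len(vac_frames) > thr   # other voice present?
    if user and other:
        return "conversation"
    if user or other:
        return "one way speech"
    return "silence"

# Three toy stretches: conversation, silence, one way speech.
print(label_stretch([1, 0, 1, 0], [0, 1, 0, 1]),
      label_stretch([0, 0, 0, 0], [0, 0, 0, 0]),
      label_stretch([0, 0, 0, 0], [1, 1, 1, 0]))
```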
  • The data logged over time, cf. the time windows illustrated in FIG. 3 and 4 (in practice comprising more acoustic events and representing longer time periods, e.g. days or weeks), and their subsequent analysis may allow extraction of information regarding the user's (voiced) social activity, e.g. in dependence of the acoustic environment (noisy environments may result in decreased activity), or in dependence of the time of day (a decrease with time of day, or with time from power on of the hearing aid, may e.g. reflect some sort of cognitive fatigue). The analysis may result in changes being made to the processing of the hearing aid (e.g. increased noise reduction and/or more directionality in noisy environments). The logged data may e.g. be used to extract information about the complexity (and length) of conversations engaged in by the user and in particular about changes in such parameters.
  • The repeated logging over time of own voice activity, other voice activity, input signal level (e.g. low, medium, high), noise level and/or signal-to-noise ratio may e.g. allow such information to be extracted.
  • Corresponding values of the parameters (P1, ..., PQ), Q being the number of logged parameters, e.g. Q ≤ 5, may e.g. be logged with a predefined frequency fL, e.g. every 100 ms (i.e. fL = 10 Hz). The logged data may e.g. be up-loaded (off-loaded) to another device or server according to a predefined scheme, e.g. every 5 minutes, every hour, or once a day (e.g. as part of a power-off procedure).
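A back-of-the-envelope calculation illustrates why aggregation and regular off-loading matter at such a log frequency; the 2 bytes per value and the 16-hour wearing day assumed below are illustrative only.

```python
Q, f_log = 5, 10.0                  # number of parameters and log frequency (Hz)
bytes_per_value = 2                 # assumed storage per logged value
seconds_per_day = 16 * 3600         # assumed daily wearing time

bytes_per_day = Q * f_log * bytes_per_value * seconds_per_day
print(f"{bytes_per_day / 1e6:.2f} MB/day")  # ~5.76 MB/day of raw logged values
```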
  • The hearing aid may be configured to take specific measures in case the intended (planned) off-loading of the logged parameters (to empty the memory and make room for new data) cannot be performed, e.g. due to lack of a communication link, lack of power of the hearing aid, lack of availability of the receiving device or system, etc. Such specific measures may be to minimize the amount of data (and thus be able to cover a longer time window) by averaging values of the parameters (P1, ..., PQ) over time. The parameters may be averaged over different time periods (e.g. so that voice detection data (in particular own voice detection data) are prioritized over other parameters, e.g. level or SNR, which may be assumed to vary more slowly than the dynamic events of a conversation).
  • FIG. 5 shows an embodiment of a hearing system comprising a hearing aid and a programming device according to the present disclosure. In the hearing system of FIG. 5, the hearing aid (HA) is in communication with a programming device (PD), e.g. a fitting system or other processing device, e.g. a smartphone. The communication is e.g. via a direct link (LINK) or via a network. The programming device (PD) comprises a communication interface (IF) allowing a communication link to the hearing aid to be established and data to be received from and transmitted to the hearing aid. The programming device (PD) may e.g. receive logged data from the hearing aid and store the data in a memory (MEM) for analysis by an analyser (ANA) and possibly further processing in a processing unit (COMP). The processing unit may comprise a digital signal processor configured to run fitting software of the hearing aid, e.g. to adapt processing parameters of the processor (PRO) of the hearing aid to the needs of a particular user (cf. double arrowed line between the processor (PRO) and the communication interface (IF) of the hearing aid (HA)). The programming device (PD) further comprises a user interface coupled to the processing unit (COMP), the memory (MEM) and the analyser (ANA). The user interface comprises a visual display (DISP) and a keyboard (KEYB), allowing data to be displayed, e.g. graphically (DISP), and data to be entered (KEYB). The programming device may (via the memory) e.g. have access to logged data from several time windows, e.g. representing observations over a time span of weeks or months. The programming device may have access to corresponding data from the available detectors in a time series spanning the mentioned period of weeks or months, e.g. including voice activities of the user and other persons in the environment of the user, in a time resolution that allows changes in the user's social vocal activity to be identified. On the screen of the display, a schematic comparison of logged data for a particular user for two different time periods is shown. A development in the user's active participation in conversations is (schematically) indicated, from less in time period TP#1 to more in time period TP#2. This may be a result of changed parameter settings of the hearing aid (or the fitting of another (improved) hearing aid model) between time periods TP#1 and TP#2, or it may be the result of a deliberate effort of the user to be more active (or both). The results of the analysis may be inputs to a discussion with the user about his or her satisfaction with the hearing aid, and/or to the changing of parameter settings, the fitting of a new hearing aid with improved features, etc. Important learnings from the data are possible changes (over time) in the length and complexity of the user's conversations with other people, which can be taken as an indication of improved social engagement (decreased self-isolation).
  • FIG. 6 schematically illustrates an example of data aggregation according to the present disclosure. FIG. 6 shows values of averaged parameters ('Parameters averaged over Δt', normalized scale), e.g. voice activity detection, over time ('Time [s]').
  • Each 'vertical box' represents a data container (DataC). Each data container holds a value (e.g. an average value) of one or more 'conversation parameters' intended for being logged by the hearing aid (or by an external device connected to the hearing aid). A conversation parameter may e.g. be a ratio of time periods with voice activity (e.g. own voice activity) to time periods of speech pauses.
  • FIG. 6 shows a multitude of observation windows of variable duration in time, here t1, t2, ..., tn, tn+1, .... Each data container (DataC) of a given observation window tn has a common width Δtn representing the time range that the data of that data container represents, e.g. a single value sampled in the time range Δtn or an average of values sampled in the time range Δtn. In the embodiment of FIG. 6, Δtn, representing the time range of the data containers of observation window n, is indicated to be smaller than or equal to A, B, N and INF for observation windows t1, t2, tn and tn+1, respectively. It may be assumed that A < B < N < INF.
  • The observation windows (indexed by n) may e.g. be of increasing duration in time (t1, t2, ...). The duration in time may e.g. increase with increasing n, e.g. only for n larger than a first threshold value nth1. The duration in time of the observation windows may be different for different hearing aid models or styles (e.g. dependent on processor clock frequency, memory, processing algorithms, etc.), e.g. with t1 varying from the order of milliseconds to the order of minutes or larger.
  • Each observation window (t1, t2, ...) contains a number NDC,n of data containers (DataC). All observation windows (t1, t2, ...) may contain the same number NDC of data containers (DataC).
  • The observation windows may comprise different numbers NDC,n of data containers, e.g. decreasing with increasing n, e.g. only for n larger than a second threshold value nth2. The first and second threshold values of n (nth1, nth2) may be equal or different.
  • Regarding memory space, it is here assumed that each data container (DataC, irrespective of its width in time Δtn) occupies the same space in the memory (because it holds the same number of data values).
  • The duration t1 of the first observation window may e.g. be the shortest of the multitude of observation windows (n=1, 2, ..., NW) of variable duration in time. The 'time width' Δt1 of the data containers of the first observation window may correspond to a sampling time (ts=1/fs, where fs is a sampling frequency), or a down-sampled version thereof, e.g. corresponding to the length of a time frame (e.g. Δt1 = 3.2 ms, for fs = 20 kHz, and 64 samples per time frame).
  • After NDC,1 parameter values have been stored during the first observation window, the storage frequency is reduced in the second observation window (n=2), e.g. in that a multitude (e.g. 5 or more) of successive sample values corresponding to Δt2 are averaged and stored in each successive data container (DataC) of the second observation window. Thereby the use of memory for storage of the relevant data can be reduced. Likewise, after NDC,2 (averaged) parameter values have been stored in the respective NDC,2 data containers of the second observation window, the storage frequency is further reduced in the third observation window (e.g. by another factor of 5), etc. The reduction of the storage frequency can be repeated an arbitrary number of times, or it can be terminated after a number of observation windows, after which the storage frequency is kept constant.
  • The strategy for successively reducing the storage frequency can be controlled by a storage controller, e.g. in dependence of one or more of a memory size, a battery status of the hearing aid, an estimated time to the next possible off-loading of logged data, etc.
  • Thereby, it is possible to provide that even a memory of a relatively small size (as in a hearing aid) can hold data representing a relatively long time period, and thus capture relevant data representative of the time between data off-loads (nearly irrespective of how long that takes).
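This scheme may be sketched as follows: successive observation windows each hold NDC data containers, and each container in window n averages factor^(n-1) raw samples, so later windows span ever longer time ranges within the same memory per window. The container count and the reduction factor (5, as in the example above) are illustrative assumptions.

```python
class AggregatingLog:
    """Sketch of the FIG. 6 aggregation: each new window stores 5x coarser averages."""

    def __init__(self, n_dc: int = 100, factor: int = 5):
        self.n_dc, self.factor = n_dc, factor
        self.windows: list[list[float]] = [[]]   # finished containers, one list per window
        self._acc, self._count = 0.0, 0          # accumulator for the current container

    @property
    def _samples_per_container(self) -> int:
        return self.factor ** (len(self.windows) - 1)

    def add(self, value: float) -> None:
        self._acc += value
        self._count += 1
        if self._count == self._samples_per_container:        # container full
            self.windows[-1].append(self._acc / self._count)  # store the average
            self._acc, self._count = 0.0, 0
            if len(self.windows[-1]) == self.n_dc:            # window full
                self.windows.append([])                       # next window is 5x coarser

log = AggregatingLog(n_dc=4, factor=5)
for i in range(30):
    log.add(float(i % 2))  # e.g. a 0/1 own-voice indicator
print(log.windows)         # window 0: raw values; window 1: averages of 5 samples each
```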
  • The logged data may e.g. be off-loaded to an external device (e.g. via an APP or directly, e.g. automatically, when the hearing aid is connected to the external device), e.g. to a memory of a portable device, e.g. a smartphone, or to a fitting system of the hearing aid. A reason for applying such a storage strategy is that it may be difficult to predict the time between data off-loads. A successful data off-load may be dependent on connectivity conditions at a given time (e.g. whether the data receiver (e.g. a smartphone or a fitting system) is within reach of the hearing aid). A successful data off-load may be dependent on the hearing aid having sufficient power to establish a link to the receiving device or system, etc. A successful data off-load may be dependent on the receiving device or system being ready to receive data from the hearing aid (there may be other tasks that have higher priorities than the reception of logged data from the hearing aid).
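The gating of an off-load attempt on the conditions just listed might be sketched as follows; the predicate names and the battery threshold are assumptions, since the disclosure names the conditions but not an API. If the attempt fails, the datalogger falls back to the averaging strategy above so the memory can still cover the longer time until the next opportunity.

```python
def try_offload(log, link_available: bool, battery_fraction: float,
                receiver_ready: bool, min_battery: float = 0.2) -> bool:
    """Attempt to off-load logged data; return True on success."""
    if not link_available:              # no communication link at this time
        return False
    if battery_fraction < min_battery:  # not enough power to hold the link
        return False
    if not receiver_ready:              # receiver busy with higher-priority tasks
        return False
    # ... transmit the contents of `log` and clear its memory here ...
    return True
```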
  • FIG. 7 schematically illustrates a hearing aid system according to the present disclosure, wherein an external processor (PRO) and memory (MEM) are built into a charging station (CHAS) for a hearing aid (HA1) or a pair of hearing aids (HA1, HA2). The charging station (CHAS) can thereby be used to off-load data (LOGD) from a datalogger (cf. DLOG in FIG. 1, 2) of the hearing aid(s). The charging station comprises a memory (MEM) for receiving data from the datalogger.
  • The charging station (CHAS) comprises (antenna (ANT) and) transceiver circuitry (WLIF) for establishing a communication link (WL) to the hearing aid(s). The charging station may e.g. comprise one or more sensors for classifying the environment around the charging station, e.g. a microphone or other sensor, e.g. for estimation of background noise. The sensor data may be added to the logged data (LOGD) while the hearing aids are located in the charging station (and/or as long as the charging station (CHAS) and the hearing aid (HA1, HA2) are in communication via the communication link (WL), e.g. as long as the distance D between them is smaller than a maximum transmission/reception range of the link (WL)). The charging station may e.g. comprise an absolute clock whose time can be added to the logged data when the hearing aids are located in the charging station.
  • The processor (PRO) of the charging station may have a larger processing power than a processor of the wearable device. The processor may be configured to analyze the logged data from the hearing aid(s). The charging station may be located on a support (Support), e.g. a table, in an appropriate place with a view to being accessible to the hearing aids when the user moves around.
  • The charging station may be a pocket-sized, portable device comprising an interface (PSIF), e.g. including a connector, to an electricity network, and/or a local (e.g. rechargeable) battery (BAT) for charging a battery or batteries of the hearing aids (HA1, HA2). Thereby the charging station can provide its function for a limited time, even in the absence of access to the electricity network. The battery of the charging station is assumed to have a significantly larger capacity than a battery of the hearing aid.
  • The charging station may further comprise an interface (DIF) to a data network. The interface is configured to establish a (here wireless) connection to the data network (cf. link WLDL, e.g. WiFi), e.g. to provide access to servers, e.g. a fitting system, on the Internet (cloud computing). Thereby the off-loaded data may be uploaded to the fitting system via the data network.
  • FIG. 8 shows an embodiment of a hearing aid (HA) according to the present disclosure comprising a BTE-part located behind an ear of a user and an ITE-part located in an ear canal of the user, and an auxiliary device (AD), in communication with the hearing aid, comprising a user interface (UI). Together, the hearing aid (HA) and the auxiliary device (AD) may constitute a hearing system according to the present disclosure.
  • FIG. 8 illustrates an exemplary hearing aid (HA) formed as a receiver in the ear (RITE) type hearing aid comprising a BTE-part (BTE) adapted for being located behind pinna and a part (ITE) comprising an output transducer (OT, e.g. a loudspeaker/receiver) adapted for being located in an ear canal (Ear canal) of the user (e.g. exemplifying a hearing aid (HA) as shown in FIG. 1, 2). The BTE-part (BTE) and the ITE-part (ITE) are connected (e.g. electrically connected, e.g. via a cable comprising a multitude of conductors, e.g. three or more, such as six or more) by a connecting element (IC). In the embodiment of a hearing aid of FIG. 8, the BTE part (BTE) comprises two input transducers (here microphones) (MBTE1, MBTE2), each for providing an electric input audio signal representative of an input sound signal (SBTE) from the environment (in the scenario of FIG. 8, from sound source S, e.g. a communication partner). The hearing aid (HA) of FIG. 8 further comprises two wireless transceivers (WLR1, WLR2) for receiving and/or transmitting signals (e.g. comprising audio and/or information, e.g. logged data according to the present disclosure). The hearing aid (HA) further comprises a substrate (SUB) whereon a number of electronic components are mounted, functionally partitioned according to the application in question (analogue, digital, passive components, etc.), but including a configurable digital signal processor (DSP), a front-end chip (FE), and a memory unit (MEM) coupled to each other and to input and output units via electrical conductors Wx. The mentioned functional units (as well as other components) may be partitioned in circuits and components according to the application in question (e.g. with a view to size, power consumption, analogue vs digital processing, etc.), e.g. integrated in one or more integrated circuits, or as a combination of one or more integrated circuits and one or more separate electronic components (e.g. inductor, capacitor, etc.). The configurable signal processor (DSP) provides an enhanced audio signal (cf. signal OUT in FIG. 1, 2), which is intended to be presented to a user. The front-end integrated circuit (FE) is adapted for providing an interface between the configurable signal processor (DSP) and the input and output transducers, etc., and typically comprises interfaces between analogue and digital signals. The input and output transducers may be individual separate components, or integrated (e.g. MEMS-based) with other electronic circuitry. In the embodiment of a hearing aid device in FIG. 8, the ITE part (ITE) comprises an output unit in the form of a loudspeaker (receiver) (SPK) for converting the electric signal (OUT) to an acoustic signal (providing, or contributing to, acoustic signal SED at the ear drum (Ear drum)). The ITE-part further comprises an input unit comprising an input transducer (e.g. a microphone) (MITE) for providing an electric input audio signal representative of an input sound signal SITE from the environment at or in the ear canal. In another embodiment, the hearing aid may comprise only the BTE-microphones (MBTE1, MBTE2). In yet another embodiment, the hearing aid may comprise an input unit located elsewhere than at the ear canal in combination with one or more input units located in the BTE-part and/or the ITE-part. The ITE-part further comprises a guiding element, e.g. a dome (DO), for guiding and positioning the ITE-part in the ear canal of the user.
  • The hearing aid (HA) exemplified in FIG. 8 is a portable device and further comprises a battery (BAT) for energizing electronic components of the BTE- and ITE-parts. The hearing aid (HA) may be identical to the hearing aid(s) illustrated in FIG. 7.
  • The hearing aid (HA) may comprise a directional microphone system (e.g. a beamformer filter) adapted to enhance a target acoustic source among a multitude of acoustic sources in the local environment of the user wearing the hearing aid.
  • The memory unit (MEM) may form part of the datalogger and comprise logged data according to the present disclosure.
  • The hearing aid of FIG. 8 may constitute or form part of a binaural hearing aid system according to the present disclosure.
  • The hearing aid (HA) according to the present disclosure may comprise a user interface UI, e.g. as shown in the bottom part of FIG. 8, implemented in an auxiliary device (AD), e.g. a remote control, e.g. implemented as an APP in a smartphone or other portable (or stationary) electronic device (e.g. a charging station). In the embodiment of FIG. 8, the screen of the user interface (UI) illustrates a Datalogging APP. The user may configure the datalogger via the APP. The user may e.g. select the data that should be logged, e.g. Own-voice data, Other voice data, and internal and external sensor data (termed HA-sensors and External sensors, respectively, in the exemplified screen of FIG. 8). In the embodiment of FIG. 8, Own-voice, Other voice, and HA-sensors have been selected (as indicated by the filled square symbols ■). The user may further off-load logged data to another device or system, e.g. to a fitting system, a Smartphone or a Charging station (see e.g. FIG. 7). In the embodiment of FIG. 8, connection to the smartphone is selected (as indicated by the filled square symbol ■). Unselected options are indicated by open square symbols (□).
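The selections shown on the APP screen could map to a configuration object along the following lines; the field names mirror the screen labels, while the structure and defaults are assumptions (no wire format is given in the disclosure).

```python
from dataclasses import dataclass

@dataclass
class DataloggingConfig:
    own_voice: bool = True              # 'Own-voice data' selected (filled square)
    other_voice: bool = True            # 'Other voice data' selected
    ha_sensors: bool = True             # internal 'HA-sensors' selected
    external_sensors: bool = False      # 'External sensors' unselected (open square)
    offload_target: str = "smartphone"  # or 'fitting_system' / 'charging_station'

config = DataloggingConfig()            # matches the selections shown in FIG. 8
```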
  • The auxiliary device (AD) and the hearing aid (HA) are adapted to allow communication of data representative of the currently selected options (if deviating from the configuration already stored in the hearing aid) to the hearing aid via a, e.g. wireless, communication link (cf. dashed arrow WL2 in FIG. 8). The communication link WL2 may e.g. be based on far field communication, e.g. Bluetooth or Bluetooth Low Energy (or similar technology), implemented by appropriate antenna and transceiver circuitry in the hearing aid (HA) and the auxiliary device (AD), indicated by transceiver unit WLR2 in the hearing aid.
  • It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process.
  • As used, the singular forms "a," "an," and "the" are intended to include the plural forms as well (i.e. to have the meaning "at least one"), unless expressly stated otherwise. It will be further understood that the terms "includes," "comprises," "including," and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, but an intervening element may also be present, unless expressly stated otherwise. Furthermore, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.
  • It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" or "an aspect" or features included as "may" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.
  • The claims are not intended to be limited to the aspects shown herein but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more.
  • Accordingly, the scope should be judged in terms of the claims that follow.
Claims (15)

  1. A hearing aid configured to be worn at or in an ear of a user, the hearing aid comprising
    • An input unit comprising at least one input transducer, e.g. a microphone, for picking up sound from the environment of the hearing aid and configured to provide at least one electric input signal representing said sound;
    • An own voice detector configured to detect whether or not, or with what probability, said at least one electric input signal, or a processed version thereof, comprises a voice from the user of the hearing aid, and to provide an own voice control signal indicative thereof; and
    • A datalogger for logging data related to the use of said hearing aid over time,
    wherein the hearing aid is configured to log absolute or relative time periods of own voice activity in dependence of said own voice control signal in said datalogger.
  2. A hearing aid according to claim 1 configured to further log data concerning a sound environment, at least during said time periods of own voice activity.
  3. A hearing aid according to claim 2 wherein said data concerning a sound environment includes a measure of sound quality, e.g. a signal to noise ratio (SNR).
  4. A hearing aid according to claim 2 or 3 wherein said data concerning a sound environment includes an activity level of other speech sources.
  5. A hearing aid according to any one of claims 1-4 wherein said data comprises a requested gain from a compressive amplification algorithm of the hearing aid.
  6. A hearing aid according to any one of claims 1-5 comprising a voice activity detector configured to detect whether or not, or with what probability, said at least one electric input signal, or a processed version thereof, comprises a human voice and to provide a voice control signal indicative thereof.
  7. A hearing aid according to any one of claims 1-6 configured to further log absolute or relative time periods of NO own voice activity.
  8. A hearing aid according to any one of claims 1-7 wherein the datalogger is configured to log data in successive observation windows of variable time, e.g. of increasing length over time, but with a constant or decreasing number of logged data values of successive observation windows.
  9. A hearing aid according to any one of claims 1-8 comprising a communication interface allowing data to be exchanged with another device or system.
  10. A hearing aid according to any one of claims 1-9 comprising an output unit, and wherein the output unit comprises a number of electrodes of a cochlear implant type hearing aid or a vibrator of a bone conducting hearing aid, or a loudspeaker of an air conduction hearing aid, or a combination thereof.
  11. A method of operating a hearing aid configured to be worn at or in an ear of a user, the method comprising
    • providing at least one electric input signal representing sound;
    • detecting whether or not, or with what probability, said at least one electric input signal, or a processed version thereof, comprises a voice from the user of the hearing aid, and providing an own voice control signal indicative thereof;
    • logging data related to the use of said hearing aid over time, wherein said logging data comprises
    logging absolute or relative time periods of own voice activity in dependence of said own voice control signal.
  12. Use of a hearing aid as claimed in any one of claims 1-10.
  13. A hearing system comprising a hearing aid according to any one of claims 1-10 and an auxiliary device, the hearing system being adapted to establish a communication link between the hearing aid and the auxiliary device to provide that information can be exchanged or forwarded from one to the other.
  14. A hearing system according to claim 13 configured to download data from said datalogger to said auxiliary device.
  15. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 11.
EP20181325.0A 2020-06-22 2020-06-22 A hearing aid comprising an own voice conversation tracker Pending EP3930346A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20181325.0A EP3930346A1 (en) 2020-06-22 2020-06-22 A hearing aid comprising an own voice conversation tracker

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP20181325.0A EP3930346A1 (en) 2020-06-22 2020-06-22 A hearing aid comprising an own voice conversation tracker

Publications (1)

Publication Number Publication Date
EP3930346A1 true EP3930346A1 (en) 2021-12-29

Family

ID=71120042

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20181325.0A Pending EP3930346A1 (en) 2020-06-22 2020-06-22 A hearing aid comprising an own voice conversation tracker

Country Status (1)

Country Link
EP (1) EP3930346A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4258689A1 (en) 2022-04-07 2023-10-11 Oticon A/s A hearing aid comprising an adaptive notification unit
EP4340395A1 (en) 2022-09-13 2024-03-20 Oticon A/s A hearing aid comprising a voice control interface

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060222194A1 (en) 2005-03-29 2006-10-05 Oticon A/S Hearing aid for recording data and learning therefrom
US8948428B2 (en) * 2006-09-05 2015-02-03 Gn Resound A/S Hearing aid with histogram based sound environment classification
US20160249144A1 (en) * 2015-02-24 2016-08-25 Sivantos Pte. Ltd. Method for ascertaining wearer-specific use data for a hearing aid, method for adapting hearing aid settings of a hearing aid, hearing aid system and setting unit for a hearing aid system
EP3641345A1 (en) * 2018-10-16 2020-04-22 Sivantos Pte. Ltd. A method for operating a hearing instrument and a hearing system comprising a hearing instrument

Similar Documents

Publication Publication Date Title
US11671773B2 (en) Hearing aid device for hands free communication
US10966034B2 (en) Method of operating a hearing device and a hearing device providing speech enhancement based on an algorithm optimized with a speech intelligibility prediction algorithm
CN110062318B (en) Hearing aid system
US20180263562A1 (en) Hearing system for monitoring a health related parameter
US10176821B2 (en) Monaural intrusive speech intelligibility predictor unit, a hearing aid and a binaural hearing aid system
US11510019B2 (en) Hearing aid system for estimating acoustic transfer functions
US10631107B2 (en) Hearing device comprising adaptive sound source frequency lowering
EP3979666A2 (en) A hearing device comprising an own voice processor
EP3930346A1 (en) A hearing aid comprising an own voice conversation tracker
US11863938B2 (en) Hearing aid determining turn-taking
US11589173B2 (en) Hearing aid comprising a record and replay function
US20220295191A1 (en) Hearing aid determining talkers of interest
US11576001B2 (en) Hearing aid comprising binaural processing and a binaural hearing aid system
EP3525489A1 (en) A method of fitting a hearing device to a user&#39;s needs, a programming device, and a hearing system
EP2876902A1 (en) Adjustable hearing aid device
US20220406328A1 (en) Hearing device comprising an adaptive filter bank

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

B565 Issuance of search results under rule 164(2) epc

Effective date: 20201126

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220629

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20240122