EP4197587A1 - Auditory neural interface device - Google Patents

Auditory neural interface device

Info

Publication number
EP4197587A1
Authority
EP
European Patent Office
Prior art keywords
neurostimulation
signal
auditory
sound signal
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21215578.2A
Other languages
German (de)
English (en)
Inventor
Saman HAGH GOOIE
Bálint VÁRKUTI
Ricardo SMITS SERENA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ceregate GmbH
Original Assignee
Ceregate GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ceregate GmbH filed Critical Ceregate GmbH
Priority to EP21215578.2A priority Critical patent/EP4197587A1/fr
Priority to US17/698,341 priority patent/US20230191129A1/en
Priority to PCT/EP2022/058761 priority patent/WO2022207910A1/fr
Publication of EP4197587A1 publication Critical patent/EP4197587A1/fr
Pending legal-status Critical Current

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00Electrotherapy; Circuits therefor
    • A61N1/18Applying electric currents by contact electrodes
    • A61N1/32Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N1/36Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N1/36036Applying electric currents by contact electrodes alternating or intermittent currents for stimulation of the outer, middle or inner ear
    • A61N1/36038Cochlear stimulation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00Electrotherapy; Circuits therefor
    • A61N1/18Applying electric currents by contact electrodes
    • A61N1/32Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N1/36Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N1/3605Implantable neurostimulators for stimulating central or peripheral nerve system
    • A61N1/36057Implantable neurostimulators for stimulating central or peripheral nerve system adapted for stimulating afferent nerves
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00Electrotherapy; Circuits therefor
    • A61N1/02Details
    • A61N1/04Electrodes
    • A61N1/05Electrodes for implantation or insertion into the body, e.g. heart electrode
    • A61N1/0526Head electrodes
    • A61N1/0541Cochlear electrodes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00Electrotherapy; Circuits therefor
    • A61N1/18Applying electric currents by contact electrodes
    • A61N1/32Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N1/36Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N1/3605Implantable neurostimulators for stimulating central or peripheral nerve system
    • A61N1/3606Implantable neurostimulators for stimulating central or peripheral nerve system adapted for a particular treatment
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00Electrotherapy; Circuits therefor
    • A61N1/18Applying electric currents by contact electrodes
    • A61N1/32Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N1/36Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N1/3605Implantable neurostimulators for stimulating central or peripheral nerve system
    • A61N1/3606Implantable neurostimulators for stimulating central or peripheral nerve system adapted for a particular treatment
    • A61N1/36062Spinal stimulation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00Electrotherapy; Circuits therefor
    • A61N1/18Applying electric currents by contact electrodes
    • A61N1/32Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N1/36Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N1/3605Implantable neurostimulators for stimulating central or peripheral nerve system
    • A61N1/36128Control systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00Electrotherapy; Circuits therefor
    • A61N1/18Applying electric currents by contact electrodes
    • A61N1/32Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N1/36Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N1/372Arrangements in connection with the implantation of stimulators
    • A61N1/37211Means for communicating with stimulators
    • A61N1/37252Details of algorithms or data aspects of communication system, e.g. handshaking, transmitting specific data or segmenting data
    • A61N1/37282Details of algorithms or data aspects of communication system, e.g. handshaking, transmitting specific data or segmenting data characterised by communication with experts in remote locations using a network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/558Remote control, e.g. of amplification, frequency
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00Electrotherapy; Circuits therefor
    • A61N1/18Applying electric currents by contact electrodes
    • A61N1/32Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N1/36Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N1/3605Implantable neurostimulators for stimulating central or peripheral nerve system
    • A61N1/36128Control systems
    • A61N1/36146Control systems specified by the stimulation parameters
    • A61N1/36182Direction of the electrical field, e.g. with sleeve around stimulating electrode
    • A61N1/36185Selection of the electrode configuration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43Signal processing in hearing aids to enhance the speech intelligibility

Definitions

  • the present disclosure relates to an auditory neural interface device for supporting or enabling sound perception by an individual.
  • Sound perception is essential for survival and living a normal life in modern society.
  • the communication between humans relies on spoken language.
  • Also experiencing the joy of music is typically not possible without being able to perceive sound.
  • Proper communication between humans ensures the ability of individuals to develop and evolve in a social environment. This is particularly important for children at their early stage in life.
  • Conductive hearing loss can usually be treated or improved by way of a surgery or infection treatment.
  • More severe forms of hearing loss, in contrast, are typically addressed with a cochlear implant or auditory brainstem implant (ABI). It is known that only about 1 in 20 patients who could potentially benefit from such an implant actually receive one. This is mainly attributed to limited access to the complex surgical procedures necessary for implantation. Further, implanting these devices in the skull can have adverse effects for the patient, with inherent risks of side effects such as nerve damage, dizziness and/or balance problems, hearing loss, tinnitus, leaks of the fluid around the brain, meningitis etc. Even further, children who can hear some sounds and/or speech with hearing aids may not be eligible for cochlear implants, although improved hearing capability would drastically improve their personal development. Further risks associated with existing cochlear implant technology or the related surgical procedures are, for instance, the risk of losing residual hearing, an inability to understand language, a complex explantation procedure in case of device failure, and more.
  • a further limitation of cochlear implants is that they cannot provide a hearing aid when deafness is caused by an injury to, or an absence of, the auditory nerve fibers themselves, for instance in the case of Neurofibromatosis type 2.
  • ABIs are used as an alternative that bypasses the cochlear nerve to electrically stimulate second order neurons in the cochlear nucleus.
  • implanting an ABI is an extremely invasive surgery with a high risk of failure, and even when successful, most patients do not achieve open-set speech perception despite extensive training.
  • US 7,251,530 B1 relates to errors in pitch allocation within a cochlear implant. Those errors are said to be corrected in order to provide a significant and profound improvement in the quality of sound perceived by the cochlear implant user.
  • the user is stimulated with a reference signal, e.g., the tone "A” (440 Hz) and then the user is stimulated with a probe signal, separated from the reference signal by an octave, e.g., high "A” (880 Hz).
  • the user adjusts the location where the probe signal is applied, using current steering, until the pitch of the probe signal, as perceived by the user, matches the pitch of the reference signal, as perceived by the user.
  • the user maps frequencies to stimulation locations in order to tune his or her implant system to his or her unique cochlea.
  • ECAP electrically evoked compound action potential
  • US 9,786,201 B2 and US 9,679,546 B2 both relate to vibratory motors that are used to generate a haptic language for music or other sound that is integrated into wearable technology.
  • This technology enables the creation of a family of devices that allow people with hearing impairments to experience sounds such as music or other auditory input to the system.
  • a "sound vest" or one or more straps comprising a set of motors transforms musical input to haptic signals so that users can experience their favorite music in a unique way and can also recognize auditory cues in the user's everyday environment and convey this information to the user using haptic signals.
  • EP 3 574 951 B1 relates to an apparatus and method for use in treating tinnitus, which employs a sound processing unit, a tactile unit, and an interface therebetween.
  • the tactile unit comprises an array of stimulators each of which can be independently actuated to apply a tactile stimulus to a subject, and the tactile unit comprises an input for receiving a plurality of actuation signals from the interface and directing individual actuation signals to individual stimulators.
  • US 9,078,065 B2 relates to a method and a system for presenting audio signals as vibrotactile stimuli to the body in accordance with a Model Human Cochlea. Audio signals are obtained for presentation. The audio signals are separated into multiple bands of discrete frequency ranges that encompass the complete audio signal. Those signals are output to multiple vibrotactile devices.
  • the vibrotactile devices may be positioned in a respective housing to intensify and constrain the vibrational energy from the vibrotactile devices. The output of the vibrotactile devices stimulates the cutaneous receptors of the skin at the locations where the vibrotactile devices are placed.
  • Applicant's own DE 10 2019 202 666 A1 relates to a system for providing neural stimulation signals.
  • the system is configured to elicit sensory percepts in the cortex of an individual that may be used for communicating conceptual information to an individual.
  • the system comprises means for selecting at least one neural stimulation signal to be applied to at least one afferent axon directed to at least one sensory neuron in the cortex of the individual.
  • the at least one neural stimulation signal corresponds to the conceptual information to be communicated.
  • the system further comprises means for transmitting the at least one neural stimulation signal to stimulation means of the individual.
  • US 2016/0012688 A1 relates to providing information to a user through somatosensory feedback.
  • a hearing device is provided to enable hearing-to-touch sensory substitution as a therapeutic approach to deafness.
  • the hearing device may provide better accuracy with the hearing-to-touch sensory substitution.
  • the signal processing includes low bitrate audio compression algorithms, such as linear predictive coding, mathematical transforms, such as Fourier transforms, and/or wavelet algorithms.
  • the processed signals may activate tactile interface devices that provide touch sensation to a user.
  • the tactile interface devices may be vibrating devices attached to a vest, which is worn by the user.
  • US 8,065,013 B2 relates to a method of transitioning stimulation energy (e.g., electrical stimulation pulses) between a plurality of electrodes that are implanted within a patient.
  • US 10,437,335 B2 relates to a wearable Haptic Human/Machine Interface (HHMI) which receives electrical activity from muscles and nerves of a user. An electrical signal is determined having characteristics based on the received electrical activity. The electrical signal is generated and applied to an object to cause an action dependent on the received electrical activity.
  • the object can be a biological component of the user, such as a muscle, another user, or a remotely located machine such as a drone.
  • US 10,869,142 B2 relates to a new binaural hearing aid system, which is provided with a hearing aid in which signals that are received from external devices, such as a spouse microphone, a media player, a hearing loop system, a teleconference system, a radio, a TV, a telephone, a device with an alarm, etc., are filtered with binaural filters in such a way that a user perceives the signals to be emitted by respective sound sources positioned in different spatial positions in the sound environment of the user, whereby improved spatial separation of the different sound sources is facilitated.
  • the provided auditory neural interface device, system and computer program make it possible to restore or support sound perception even for individuals who cannot receive a cochlear implant or ABI and / or provide high-fidelity sound perception that cannot be achieved with prior-art technologies.
  • an auditory neural interface device for supporting or enabling sound perception by an individual, comprising: a receiver module (or receiver) configured to receive sound signals (e.g., analog or digital electrical signals generated by a microphone or obtained from remote sound transducer apparatus), a processing module (or processor) operably connected to the receiver module and configured to encode a received sound signal as a multi-channel neurostimulation signal.
  • the multi-channel neurostimulation signal is configured to directly stimulate afferent sensory neurons of the central nervous system, CNS, (i.e., of the brain and / or the spinal cord) of the individual and thereby to elicit, for each channel of the multi-channel neurostimulation signal, one or more non-auditory, preferably somatosensory, perceptions in a cortex area of the individual, wherein each channel of the neurostimulation signal is associated with a different non-auditory perception.
  • the device further comprises a neurostimulation module (or neurostimulator) operably connected to the processing module and configured to apply the multi-channel neurostimulation signal to a neurostimulation means of the individual (e.g., a multi-channel neurostimulation electrode).
  • the device comprises a transmitter module configured to transmit the multi-channel neurostimulation signal to a remote neurostimulation device which in turn is configured to apply the multi-channel neurostimulation signal to a neurostimulation means of the individual.
  • SCS spinal cord stimulation
  • encoding by the processing module may comprise applying a filter operation to the received sound signal to generate a plurality of subcomponent signals of the sound signal and mapping each subcomponent signal to a different channel of the multi-channel neurostimulation signal.
  • the sound signal can be decomposed with a method that is chosen on the basis of how much information the neural interface can transmit.
  • said filter operation may involve performing spectral analysis, wavelet analysis, principal component analysis, independent component analysis, using a filter bank, and/or a combination thereof.
  • For example, a received sound signal (e.g., a sample of speech or a sample of a piece of music, etc.) may be decomposed into N subcomponent signals by a bank of N bandpass filters.
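  • As a purely illustrative sketch (not part of the disclosed subject-matter), such a bandpass filter-bank decomposition and the mapping of each subcomponent signal to one channel of a multi-channel neurostimulation signal could look as follows; the sampling rate, band edges and the linear amplitude mapping are assumptions chosen for the example only.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 16_000                                  # sampling rate of the received sound signal [Hz]
BAND_EDGES = [100, 400, 1200, 3000, 7000]    # 4 illustrative bands -> 4 perceptual channels

def decompose(sound: np.ndarray) -> np.ndarray:
    """Return an (n_channels, n_samples) array of subcomponent-signal envelopes."""
    channels = []
    for lo, hi in zip(BAND_EDGES[:-1], BAND_EDGES[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfilt(sos, sound)
        channels.append(np.abs(hilbert(band)))   # slowly varying band energy
    return np.stack(channels)

def map_to_stimulation(envelopes: np.ndarray, max_amp_ma: float = 3.0) -> np.ndarray:
    """Map each subcomponent envelope onto a per-channel stimulation amplitude track [mA]."""
    return max_amp_ma * envelopes / (envelopes.max() + 1e-12)

t = np.arange(FS) / FS
test_signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
print(map_to_stimulation(decompose(test_signal)).shape)   # (4, 16000)
```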
  • the neural interface device can enable or support sound perception even for patients that cannot be treated via conventional cochlear implants or ABIs.
  • not being bound to the physiologic structure and function of the auditory nerve and upstream auditory processing may substantially improve flexibility, channel count and the fidelity of sound signal representation. In this manner, even complex auditory stimuli such as speech in a cocktail party environment or classical music can be perceived with sufficient fidelity.
  • a patient can learn to associate the information content of physical sound signals (e.g., the conceptual information encoded in speech, traffic noise, music, etc.) with the non-auditory perceptions elicited by the multi-channel neurostimulation signal.
  • the neural representation of the physical sound signal that is generated by the multi-channel neurostimulation signal is complex and variable enough that the relevant information content can be preserved during auditory processing and subsequent neurostimulation.
  • the processing module may be configured to determine, preferably via an on-line auto-calibration procedure, a maximal number N of different perceivable perceptual channels that are specific for the individual and select the applied filter operation based on the determination, such that a fidelity of a representation of the received sound signal by the plurality of subcomponent signals is maximized for the determined number of perceptual channels.
  • independent component analysis or a similar filter operation can be applied to the received sound signal in order to subdivide it into N subcomponent signals in such a manner that the information content / entropy of the neural representation of the sound signal elicited by applying the subcomponent signals to the afferent neurons is maximized.
  • Such an on-line autocalibration of the neural interface device / neurostimulation signal may be based on observing the excitation behavior or neural activation function of afferent sensory nerve fibers that can be stimulated by a given neurostimulation means, such as an SCS electrode or DBS electrode connected to a corresponding neurostimulation module or device.
  • This approach is based on the insight that there exist strong correlations between the highly non-linear bioelectric response of an active stimulated afferent sensory nerve fiber (e.g., ECAP) or plurality of such fibers and a corresponding artificial sensory perception / artificial sensation elicited in a sensory cortex area of the individual.
  • This non-linear bioelectric response essentially serves as a fingerprint of the afferent sensory nerve fiber that can be measured and used for on-line recalibration of neurostimulation signal parameters for direct neurostimulation of afferent sensory neurons targeting directly or indirectly (i.e., via multi-synaptic afferent pathways) sensory neurons in a specific target sensory cortex area. In this manner, long-term stability of highly specific, fine-grained and multi-dimensional information transfer to the brain can be ensured.
  • the auditory neural interface device may be configured (e.g., via a suitable firmware routine or software application) to carry out an on-line auto-calibration procedure that may, for instance, comprise applying a plurality of neurostimulation test signals, sensing the resulting bioelectric responses, determining therefrom N different (artificial) sensations, and determining a dynamic range of the one or more neurostimulation signals that are configured to elicit the one or more determined sensations.
  • determining the N different (artificial) sensations may comprise comparing the sensed bioelectric responses with a set of reference responses stored in a memory module of the neural interface device or obtained via a wired or wireless communication interface of the neural interface device.
  • the auto-calibration procedure may further comprise receiving, via a communication interface or user interface of the neural interface device, sensory feedback information from the individual associated with one or more of the sensations elicited by the plurality of neurostimulation test signals; and using the sensory feedback information for determining and / or characterizing the N different sensations and / or using the sensory feedback information for determining and / or subdividing the determined dynamic range of the one or more neurostimulation signals that are configured to elicit the one or more determined sensations.
  • the fidelity of perceptual channel characterization can be improved, since the recorded bioelectric responses can be correlated with the (subjective) sensory feedback information provided by the patient / individual.
  • the feedback information may comprise one or more indications of one or more of the following characteristics of the elicited sensations: a sensory modality, a location, an intensity and a frequency.
  • Determining the number N of usable perceptual channels (and the number M of symbols / differentiable perceptual levels / qualities per channel) in this manner allows the filters / signal transformations to be applied in a dynamic manner to the received sound signal, so that the fidelity of the neural representation is adapted (e.g., maximized) in real-time and in an on-line fashion in sync with the auto-calibration. For instance, if the relative distance between the stimulation electrode and the targeted afferent sensory neurons changes (e.g., due to a slow drift of a SCS-electrode or due to a movement of the patient), stimulation parameters can be adjusted such that the number of distinct perceptual channels and thereby sound signal representation fidelity stays as large as possible.
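  • The following hedged sketch shows one possible reading of such an auto-calibration step: test stimuli are applied per electrode contact, the sensed bioelectric responses are correlated with stored reference responses, and the number of distinct, usable perceptual channels is counted. The hooks `apply_test_stimulus` and `record_response` and the similarity threshold are hypothetical and not taken from the disclosure.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8   # assumed minimum correlation with a stored reference response

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean, unit-variance correlation between two recorded traces."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def count_perceptual_channels(contacts, references, apply_test_stimulus, record_response):
    """Return a mapping {contact: sensation_name} for contacts whose sensed bioelectric
    response matches a distinct stored reference sensation well enough."""
    usable = {}
    for contact in contacts:
        apply_test_stimulus(contact)            # hypothetical hook: emit a test pulse train
        response = record_response(contact)     # hypothetical hook: sensed bioelectric response
        scores = {name: normalized_correlation(response, ref)
                  for name, ref in references.items()}
        best_name, best_score = max(scores.items(), key=lambda kv: kv[1])
        if best_score >= SIMILARITY_THRESHOLD and best_name not in usable.values():
            usable[contact] = best_name         # one distinct sensation per usable channel
    return usable                               # len(usable) is the usable channel count N
```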
  • the processing module may be further configured to apply the filter operation according to multiple selectable filter modes wherein the generation of the subcomponent signals and / or the mapping of the subcomponent signals to the multiple channels of the neurostimulation signal may be based on the selected filter mode.
  • the filter mode may be user selectable (e.g., via a user interface) or automatically determined by the processing module.
  • the processing module may be further configured to determine, preferably based on an analysis of the received sound signal, an auditory environment and / or a likely type of sound signal source associated with the received sound signal; and encode the received sound signal based on the determined auditory environment and / or type of sound signal source.
  • certain frequency bands, phoneme subcomponent signals, musical instrument subcomponent signals or more abstract subcomponents signals may, for a whole class or subclass of received sound signals (e.g., speech, classical music), typically contain the majority of the information content of the received sound signal whereas other frequency bands / subcomponent signals mainly contain noise.
  • the processing module can select a filter operation best suited for an expected class or subclass of sound signals.
  • the processing module may select a set of Gabor filters forming a Gabor filter bank best suited for extracting the spectro-temporal information that is typical for speech signals whereas a band pass filter bank with adjustable gains and bandwidths may be better suited for perceiving an orchestra playing classical music.
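  • A minimal, assumption-laden sketch of such a Gabor filter bank (time-domain Gaussian-windowed sinusoids applied by convolution) is given below; the center frequencies and the bandwidth parameter are illustrative choices, not values from the disclosure.

```python
import numpy as np
from scipy.signal import fftconvolve

FS = 16_000   # sampling rate [Hz], illustrative

def gabor_kernel(center_hz: float, sigma_s: float = 0.01) -> np.ndarray:
    """Gaussian-windowed sinusoid (a simple time-domain Gabor filter)."""
    t = np.arange(-4 * sigma_s, 4 * sigma_s, 1 / FS)
    return np.exp(-0.5 * (t / sigma_s) ** 2) * np.cos(2 * np.pi * center_hz * t)

def gabor_filter_bank(sound: np.ndarray, centers_hz=(300, 800, 1500, 2500, 4000)) -> np.ndarray:
    """Return (n_filters, n_samples) envelopes of the Gabor-filtered sound signal."""
    return np.stack([np.abs(fftconvolve(sound, gabor_kernel(f0), mode="same"))
                     for f0 in centers_hz])
```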
  • the set of perceptual channels may be adjusted based on the determined auditory environment and / or a likely type of sound signal source. For instance, a set of distinct somato-sensory sensations (e.g., a subset of the dermatomes or peripheral nerve fields of the back side of the torso; see Fig. 1 below) may be used for one filter mode, whereas a set of phosphenes (e.g., perceived in the periphery of the retina) may be better suited for speech perception (e.g., via mapping a set of Gabor-filtered subcomponent signals to a set of phosphenes that can be distinguished by the individual as different vowels, consonants, phonemes etc.).
  • the multiple filter modes may comprise one or more of the following: a speech perception mode, a music perception mode, a closed space mode, an open space mode, a foreign language mode, a multi-source environment mode and a traffic mode.
  • the processing module may be configured to select the filter mode based on the determined auditory environment and / or likely type of sound signal source.
  • each filter mode may be associated with a plurality of filters being applied to the received sound signal to generate the plurality of subcomponent signals, wherein the filters may comprise bandpass filters, wavelet filters and / or Gabor filters or the like.
  • the filters may be configured to filter out distinct characteristics of the received sound signal that are typical for an auditory environment and / or a likely type of sound signal source associated with the selected filter mode.
  • different sets of filters / filter functions may be designed for filtering out vowels, consonants, phonemes, musical instruments, cars, animals, etc. and stored in a memory device of the auditory neural interface device.
  • If the processing module determines, for example, that the likely sound source is music, it might access the memory device and retrieve a set of filters designed for music perception. As discussed above, this pre-configured set may then be further adapted based on the number N of available perceptual channels.
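  • The retrieval of a pre-configured, mode-specific filter set and its adaptation to the number N of available perceptual channels could be sketched as a simple look-up, as below; the mode names and stored filter identifiers are invented for illustration.

```python
from typing import Dict, List

# Invented mode names and filter identifiers, purely for illustration.
FILTER_MODE_LIBRARY: Dict[str, List[str]] = {
    "speech":  ["gabor_300Hz", "gabor_800Hz", "gabor_1500Hz", "gabor_2500Hz"],
    "music":   ["bandpass_60_250Hz", "bandpass_250_1kHz", "bandpass_1k_4kHz"],
    "traffic": ["spatiotemporal_car", "bandpass_wideband"],
}

def select_filter_set(likely_source: str, n_channels: int) -> List[str]:
    """Retrieve the stored filter set for the detected source type and trim it to the
    number N of currently usable perceptual channels."""
    filters = FILTER_MODE_LIBRARY.get(likely_source, FILTER_MODE_LIBRARY["speech"])
    return filters[:n_channels]

print(select_filter_set("music", n_channels=2))   # ['bandpass_60_250Hz', 'bandpass_250_1kHz']
```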
  • the number N of channels of the neurostimulation signal may be at least 2 (for representing simple sound characteristics), preferably at least 5 and more preferably at least 20 (for almost natural speech perception).
  • the number of different perceivable perceptual qualities per perceptual channel may be larger than 2 (e.g., loud vs. quiet), preferably larger than 3 (e.g., loud, medium, quiet) and more preferably larger than 10 (e.g., spanning 30 dB of sound pressure level in steps of 3 dB).
  • the processing module may be configured to execute an autocalibration procedure, preferably interleaved with normal operation, to determine, for a given neurostimulation means or device of the individual, the number of differentiable perceptual channels and / or the number of differentiable levels per channel.
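  • A small worked example of subdividing a channel's dynamic range into M differentiable levels, here 30 dB of sound pressure level in 3 dB steps mapped linearly onto an assumed stimulation amplitude range, is shown below; the threshold and comfort amplitudes are illustrative values only.

```python
import numpy as np

M_LEVELS = 10          # e.g., 30 dB spanned in 3 dB steps
FLOOR_DB = 40.0        # assumed sound pressure level mapped to the lowest level
STEP_DB = 3.0

def spl_to_level(spl_db: float) -> int:
    """Quantize a sound pressure level into one of M differentiable perceptual levels."""
    return int(np.clip((spl_db - FLOOR_DB) / STEP_DB, 0, M_LEVELS))

def level_to_amplitude(level: int, threshold_ma: float = 1.0, comfort_ma: float = 3.5) -> float:
    """Place a perceptual level linearly inside the channel's determined dynamic range [mA]."""
    return threshold_ma + (comfort_ma - threshold_ma) * level / M_LEVELS

print(spl_to_level(58.0))                 # -> 6
print(round(level_to_amplitude(6), 2))    # -> 2.5
```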
  • At least one of the multiple channels of the multi-channel neurostimulation signal may be an auxiliary channel that encodes at least one of the following characteristics of the received sound signal: a sound power or amplitude, a sound pitch, a sound timing, a direction of the sound signal source and a motional state of the sound signal source.
  • the processing module may be configured to determine the direction, distance and / or the velocity vector (i.e., direction and magnitude) of a (moving) sound signal source and encode this information in one or more of the perceptual channels established by the multi-channel neurostimulation signal. For example, if two or more spatially separated sound sensors provide sound signals to the auditory neural interface device, an arrival time difference, a phase difference and / or a sound signal amplitude difference may be used to determine the spatial direction of a sound signal source. If the type of sound signal source is known, the total distance may also be determined from an amplitude comparison with a reference sound signal. Finally, by determining a Doppler shift associated with sound signals received from a moving sound signal source, the magnitude and direction (i.e., approaching or receding) of the velocity vector can also be determined and subsequently communicated to the individual.
  • the sound signal may be received from at least two spatially separated sound sensors and the processor may be configured to determine a direction of the sound signal source based on information in the sound signal associated with the at least two spatially separated sound sensors, preferably based on a phase difference, a timing difference and / or a sound signal amplitude difference associated with the spatial separation of the at least two sound sensors.
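  • One conventional way to obtain such a direction estimate from two sound sensors is a cross-correlation based time-difference-of-arrival computation, sketched below under a far-field assumption; the microphone spacing and sampling rate are example values, not parameters from the disclosure.

```python
import numpy as np
from scipy.signal import correlate

FS = 16_000                 # sampling rate [Hz], illustrative
MIC_DISTANCE_M = 0.18       # assumed spacing between the two sound sensors [m]
SPEED_OF_SOUND = 343.0      # [m/s]

def direction_of_arrival(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the azimuth of a far-field source in degrees (0 = straight ahead)."""
    xcorr = correlate(left, right, mode="full")
    lag = int(np.argmax(xcorr)) - (len(right) - 1)    # sample lag of left relative to right
    tdoa = lag / FS                                    # time difference of arrival [s]
    sin_theta = np.clip(tdoa * SPEED_OF_SOUND / MIC_DISTANCE_M, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```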
  • the channel that encodes the sound signal direction may be configured to elicit somatosensory perceptions in adjacent areas of a body part, wherein each area corresponds to a different direction.
  • such an auxiliary channel may also encode context information associated with the received sound signal, such as information about the sound signal source, a sound signal start or stop indication, one or more sign language symbols associated with the received sound signal, an indication of the emotional state of the sound signal source, and an indication of the language used by the sound signal source.
  • the auxiliary channel may even use a different type of perception than the channels used for sound perception.
  • a (multi-channel) SCS-electrode may be used by the auditory neural interface device to elicit a plurality of sound perceptions representing the received sound signal and a DBS-electrode may be used to elicit artificial sensations / perceptions of a different type / modality, such as vision or smell to implement the auxiliary channel.
  • the neurostimulation signal may be configured such that adjacent channels of the neurostimulation signal elicit somatosensory perceptions in adjacent areas of a body part of the individual or in adjacent body parts, preferably in a tonotopic manner. In this manner, patients who were used to normal cochlear sound processing, which is likewise based on a tonotopic organization of the sensory cells in the cochlea, will more easily adapt to the auditory interface device.
  • the neurostimulation signal may be configured such that the areas of the body part are arranged in an essentially 2D array, wherein one direction of the array encodes the sound source direction and the other direction is used for mapping the adjacent channels. More generally, as illustrated in Fig. 1 below, different sound representation channels may be mapped to different dermatomes and / or sub-areas of a dermatome, e.g., via a look-up table.
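  • A look-up table realizing such a 2D arrangement could, purely for illustration, map a (channel, direction) pair to a dermatome and an electrode contact; the dermatome labels and contact numbers below are assumptions.

```python
# (channel_index, direction_sector) -> (dermatome label, electrode contact); all values invented.
DERMATOME_GRID = {
    (0, "left"):  ("T4_left",  2), (0, "right"): ("T4_right", 3),
    (1, "left"):  ("T6_left",  4), (1, "right"): ("T6_right", 5),
    (2, "left"):  ("T8_left",  6), (2, "right"): ("T8_right", 7),
}

def contact_for(channel: int, direction: str) -> int:
    """Look up the electrode contact used to elicit the sensation for this channel/direction."""
    return DERMATOME_GRID[(channel, direction)][1]

print(contact_for(1, "right"))   # -> 5
```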
  • Some embodiments relate to an auditory neural interface system for sound perception by an individual, comprising the auditory neural interface device as discussed above and one or more sound sensors providing input signals to the receiver module of the auditory neural interface device and optionally, a neurostimulation device for stimulating afferent sensory neurons in the brain and / or the spinal cord of the individual.
  • Such a computer program may comprise further instructions for operating the neural interface device in order to implement the functionalities described above for the various embodiments of the neural interface device.
  • the various modules of the devices and systems disclosed herein can for instance be implemented in hardware, software or a combination thereof.
  • the various modules of the devices and systems disclosed herein may be implemented via application specific hardware components such as application specific integrated circuits, ASICs, and / or field programmable gate arrays, FPGAs, and / or similar components and / or application specific software modules being executed on multi-purpose data and signal processing equipment such as CPUs, DSPs and / or systems on a chip (SOCs) or similar components or any combination thereof.
  • the various modules of the auditory neural interface device discussed above may be implemented on a multi-purpose data and signal processing device configured for executing application specific software modules and for communicating with various sensor devices and / or neurostimulation devices or systems via conventional wireless communication interfaces such as a NFC, a WIFI and / or a Bluetooth interface.
  • the various modules of the auditory neural interface device discussed above may also be part of an integrated neurostimulation apparatus, further comprising specialized electronic circuitry (e.g. neurostimulation signal generators, amplifiers etc.) for generating and applying the multi-channel neurostimulation signal to a neurostimulation interface of the individual (e.g. a multi-contact spinal cord stimulation electrode, a deep brain stimulation (DBS) electrode, etc.).
  • the neurostimulation signals generated by the auditory neural interface device described above may for instance also be transmitted to a neuronal stimulation device comprising a signal amplifier driving a multi-contact DBS electrode, spinal cord electrode, etc. that may already be implanted into a patient's nervous system for a purpose different than providing a hearing aid.
  • dedicated DBS-like electrodes or spinal cord stimulation electrodes may be implanted for the purpose of applying the neurostimulation signals generated by the auditory neural interface device via established and approved surgical procedures that were developed for implantation of conventional DBS electrodes or spinal cord stimulation electrodes etc.
  • the auditory neural interface device may also be integrated together with a neuronal stimulation device into a single device.
  • an auditory neural interface device that can be interfaced with neuronal stimulation electrodes such as spinal cord stimulation electrodes, DBS electrodes, etc., via an intermediate neuronal stimulation device.
  • the present disclosure can also be used with any other neuronal stimulation interface that is capable of stimulating afferent sensory axons of the CNS targeting one or more sensory cortex areas of an individual.
  • Figure 1 illustrates a person / individual 100 that is equipped with an auditory neural interface device as described in section 3 above and illustrated in an exemplary manner in Fig. 2 below.
  • the auditory neural interface device is implemented via direct neurostimulation of afferent sensory nerve fibers in the spinal cord via one or more multi-contact electrodes 104 driven by an implantable pulse generator (IPG) 102 that may be operatively / communicatively connected to or integrated with an auditory neural interface device as disclosed herein.
  • the auditory neural interface device may be calibrated such that neurostimulation signals generated by the auditory neural interface device and applied via the IPG 102 and the multi-contact electrode 104 elicit one or more action potentials 106 in one or more afferent sensory nerve fibers of the spinal cord 106 targeting (e.g. via multi-synaptic afferent sensory pathways) one or more sensory cortex areas 110 of the individual 100, where the one or more action potentials 106 generate (directly or indirectly) artificial sensory perceptions that can be used to represent a received sound signal (see Fig. 3 below) to be perceived by the brain of the individual 100.
  • artificial sensory perceptions that are elicited in a sensory cortex area can also be associated with any kind of abstract information that is intelligible (i.e., consciously or subconsciously) to the individual 100.
  • the auditory neural interface device receives sound signals recorded via one or more sound sensors / microphones 108 that may be worn by the individual 100, be integrated with the auditory neural interface device and / or be provided by a general purpose data and signal processing device such as a smart phone.
  • some or all functionalities of the auditory neural interface devices discussed in detail in section 3 above may be implemented via application specific software modules executed by such a general-purpose data and signal processing device, which in turn may be interfaced (e.g., wirelessly) with the IPG 102 or a similar neurostimulation device operating in conjunction therewith to implement a sensory substitution-based hearing aid.
  • the perceptual channels correspond to different dermatomes 114a - 114g innervated by spinal nerve fibers branching off the spinal cord at locations 112a to 112g.
  • different contacts of the stimulation electrode may be used to stimulate regions of the spinal cord typically relaying sensory information from a given dermatome (e.g., a dermatome 114a located on the front torso of the person).
  • complex, multi-contact neural stimulation signals may also be used to selectively stimulate single peripheral nerve fields within a given dermatome or combinations of dermatomes and / or peripheral nerve fields.
  • FIG. 2 shows an exemplary auditory neural interface device 200 according to an embodiment of the present disclosure.
  • the device comprises an integrated neurostimulation and sensing module 230 (e.g. comprising a neuronal signal generator and an output amplifier as well as a sensing amplifier, an analog-to-digital converter and similar circuitry) that is connected to a plurality of output signal leads 235 and a plurality of separate or identical sensing signal leads 235 that may be interfaced with a neurostimulation interface of the individual (e.g. a multi-contact spinal cord stimulation electrode such as the electrode 104 shown in Fig. 1 ).
  • the exemplary auditory neural interface device may further comprise a communication antenna 260 operably connected to a communication interface module 210, configured for wireless communication (e.g., via NFC, Bluetooth, or a similar wireless communication technology).
  • the communication interface module 210 may be configured, for example, to receive one or more sound signals from one or more sound sensors (not shown; e.g., a set of microphones worn by the individual) and / or control information from a control device such as a remote control or a smart phone.
  • the communication interface module 210 is operably connected to a data / signal processing module 220 configured to generate one or more neurostimulation signals and /or signal parameters (e.g., waveform, pulse shape, amplitude, frequency, burst count, burst duration etc.) for generating the one or more neurostimulation signals.
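  • For illustration only, the signal parameters listed above could be grouped in a simple per-channel parameter record as sketched below; the field names, default values and units are assumptions rather than parameters specified in the disclosure.

```python
from dataclasses import dataclass

@dataclass
class StimulationParameters:
    """Per-channel neurostimulation signal parameters (illustrative fields and units)."""
    channel: int
    waveform: str = "biphasic_rectangular"   # pulse shape
    amplitude_ma: float = 2.0                # stimulation amplitude [mA]
    frequency_hz: float = 130.0              # pulse repetition rate [Hz]
    burst_count: int = 4                     # pulses per burst
    burst_duration_ms: float = 10.0          # burst duration [ms]

print(StimulationParameters(channel=1, amplitude_ma=2.5))
```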
  • the processing module 220 may access a data storage module 240 configured to store a plurality of sound signal filters for the various filter modes as described in section 3, a plurality of neurostimulation signals or parameters used for generating a plurality of neurostimulation signals, as well as auxiliary information (e.g., for establishing a perceptual channel used to indicate the sound source direction, the motional state of the sound signal source and / or context information such as the emotional state of a speaker).
  • the generated neurostimulation signals and / or the signal parameters are input into the integrated neurostimulation and sensing module 230 that may be configured to process (e.g., modulate, switch, amplify, convert, rectify, multiplex, phase shift, etc.) the one or more (multi-channel) neurostimulation signals generated by the processing module 220 or to generate the one or more neurostimulation signals based on the signal parameters provided by the processing module 220.
  • the generated and processed neurostimulation signals are then output by the neurostimulation and sensing module 230 and can be applied to one or more electric contacts of a neurostimulation electrode (e.g., a DBS electrode or spinal cord stimulation electrode as shown in Fig. 1 ) via output leads 235.
  • the auditory neural interface device of Fig. 2 may also comprise a rechargeable power source 250 that, for instance, may be wirelessly charged via a wireless charging interface 265.
  • the data / signal processing module 220 may be further configured, e.g. in conjunction with the data storage module 240 and the neurostimulation and sensing module 230, to execute an on-line autocalibration method as discussed in section 3 above.
  • the auditory neural interface device may also comprise a transmitter module (e.g., the communication interface 210) as an alternative to the neurostimulation and sensing module 230 to communicate with a remote neurostimulation device.
  • Figures 3 and 4 illustrate a general example of how some embodiments of the present invention can be used to establish a three-channel, non-auditory hearing aid for a patient.
  • the processing module filters a received sound signal (see waveform in top trace of Fig. 3 ) via a three-channel filter bank (see spectrogram in lower trace of Fig. 3 ).
  • each of the subcomponent signals is configured to elicit an artificial sensation perceived by the individual in the lips (channel 1; high frequency components of the received sound signal), in the right hand (channel 2, medium frequency components of the received sound signal) and the left hand (channel 3, low frequency components of the received sound signal).
  • filter operations such as wavelet or Gabor filters may also be used to subdivide a received sound signal into subcomponent signals that are then mapped to different perceptual channels.
  • the disclosed auditory neural interface device may be calibrated and N perceptual channels are identified as discussed in section 3 above. Each different channel could then be mapped to a different frequency band.
  • the number N (and the differentiated levels within each channel) will define the maximum resolution or bandwidth of the perceptual / transmission matrix, which relates to specific characteristics of the implant type and implant location with respect to the neural tissue, as determined for each individual patient.
  • the decomposition algorithm / filter operation for sound signals can be customized, so that, e.g., an ICA is conducted which solves for a target number of components equal to N.
  • This decomposition matrix may be fixed for the patient and subsequently a completely customized translation of the sound signal occurs that is optimized for the respective patient.
  • precalculated ICA decomposition matrices may be applied which are based on e.g. language-specific audio file training sets.
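  • A hedged sketch of this approach, using FastICA from scikit-learn as one possible implementation choice: an unmixing matrix is fitted once on a training set (replaced here by a randomly generated stand-in) and then applied, fixed, to incoming sound frames to obtain N subcomponent signals. The frame length and channel count are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

N_CHANNELS = 5     # N, the number of usable perceptual channels assumed for this sketch
FRAME = 256        # samples per analysis frame (illustrative)

def frame_signal(sound: np.ndarray) -> np.ndarray:
    """Cut a 1-D sound signal into consecutive frames of length FRAME."""
    n = len(sound) // FRAME
    return sound[: n * FRAME].reshape(n, FRAME)

rng = np.random.default_rng(0)

# 1) Calibration phase: fit the ICA once (here on random stand-in data instead of a
#    language-specific training set) and keep the resulting decomposition matrix fixed.
training_audio = rng.standard_normal(160_000)
ica = FastICA(n_components=N_CHANNELS, random_state=0, max_iter=500)
ica.fit(frame_signal(training_audio))

# 2) Operating phase: apply the fixed unmixing matrix to each incoming sound frame.
incoming = rng.standard_normal(16_000)
subcomponents = ica.transform(frame_signal(incoming))   # shape (n_frames, N_CHANNELS)
print(subcomponents.shape)
```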
  • Figure 5 illustrates how some embodiments of the disclosed auditory neural interface device 200 can be equipped with source detection / discrimination modules (soft- and/or hardware based) that can enable the auditory neural interface device 200 to determine which part of a complex auditory environment should be perceived by the individual (not shown) with high fidelity and / or priority (e.g., the sound of an approaching car), which sounds with low fidelity / priority (e.g., a person 520 directly talking to the individual) and which sounds are to be filtered out completely (e.g., background noise generated by a remote group of people 530 talking).
  • the filter modes and / or filter functions stored in the memory module 240 of the auditory neural interface device 200 can, for example, be selected automatically by the processing module after a determination that the individual is located in an outdoor environment with a likelihood of motorized traffic.
  • a traffic filter mode may for example use a specialized spatio-temporal filter operation to filter out sounds typically generated by dangerous objects (e.g., cars) with high fidelity and select one of the perceptual channels to transmit this subcomponent signal with high priority and / or signal strength.
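  • Such priority-based routing of detected sources to the available perceptual channels could be sketched as follows; the source labels and priority values are invented for the example.

```python
# Sketch under assumptions: route detected sound sources to perceptual channels by
# priority, so that high-priority sources (e.g., an approaching car) occupy dedicated
# channels while zero-priority background sounds are filtered out completely.
SOURCE_PRIORITY = {"approaching_car": 3, "direct_speech": 2, "background_babble": 0}

def route_sources(detected_sources, n_channels: int):
    """Return (source, channel) assignments, highest priority first."""
    ranked = sorted(
        (s for s in detected_sources if SOURCE_PRIORITY.get(s, 1) > 0),
        key=lambda s: SOURCE_PRIORITY.get(s, 1),
        reverse=True,
    )
    return list(zip(ranked, range(n_channels)))

print(route_sources(["background_babble", "direct_speech", "approaching_car"], 3))
# -> [('approaching_car', 0), ('direct_speech', 1)]
```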
  • Figure 6 illustrates an embodiment of the disclosed auditory neural interface devices that is configured to transmit auxiliary information such as a sound signal duration or context information such as the emotional state of a speaker via a separate DBS electrode 610, while at the same time an SCS-electrode 104 (as illustrated in detail in Fig. 1 above) is operated to transmit the multi-channel neurostimulation signal used for sound signal representation.
  • the processing module of the auditory neural interface device is configured to map, based on a selected filter mode and / or filter operation, different types of sound signal sources (music, speech, alarms) to different perceptual channels addressable via the SCS-electrode.
  • the processor may also comprise or execute a semantics and /or context detection module that allows the auditory neural interface device to determine relevant context information, such as the language used by a sound source.
  • an auxiliary taste channel may be used to signal to the individual whether a sound signal source uses a foreign language (sweet) or the native language of the individual.
  • Such context information may, for instance, be determined using modern speech processing software (e.g., trained multi-layered neural networks).
  • Figure 7 illustrates that some embodiments of the present disclosure can also be used to supplement or support persons having residual hearing, providing even further benefits over conventional cochlear implants.
  • the auditory neural interface device may also comprise a hard- and / or software implemented sign language encoder module that can support sound perception by the individual by operating in a sign-language assistance mode.
  • the typical sign-language hand poses can be translated into a combination of individually detectable perceptual channels and be used to support sound perception by the individual.
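  • Purely as an illustration of this idea, a few hand poses could be mapped to combinations of individually detectable perceptual channels via a small table; the pose names and channel indices below are assumptions.

```python
# Invented mapping: sign-language hand poses -> combinations of perceptual channels.
POSE_TO_CHANNELS = {
    "flat_hand":   (1,),
    "fist":        (2,),
    "index_point": (1, 3),
    "victory":     (2, 3),
}

def channels_for_pose(pose: str):
    """Return the tuple of perceptual channels activated for a recognized hand pose."""
    return POSE_TO_CHANNELS.get(pose, ())

print(channels_for_pose("index_point"))   # -> (1, 3)
```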
  • Figure 8 illustrates the auto-recalibration procedure that is discussed in detail in section 3 above.
  • While the auditory neural interface device receives sound signals and processes (e.g., filters, maps, etc.) them as discussed above, the neuronal sensing module 230 (see Fig. 2 above) constantly records the bioelectric responses (e.g., ECAP or somatosensory EESP, or extracellularly measured action potentials or similar bioelectric responses) of the stimulated nerves / nerve fibers / neurons and derives an activation function that can be compared to a reference activation function 810 (as disclosed in US patent application 17/224,953, incorporated herein in its entirety).
  • sensory feedback 820 from the patient can be used to determine whether the fidelity of the sound signal representation is still optimal or may be improved by readjusting the signal parameters and / or the filter operation used to generate the multi-channel neurostimulation signal. In this manner, the performance of the non-auditory hearing aid implemented by the auditory neural interface device can be maintained at the best possible level even in normally behaving (e.g., moving) patients.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Cardiology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
EP21215578.2A 2021-04-01 2021-12-17 Auditory neural interface device Pending EP4197587A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP21215578.2A EP4197587A1 (fr) 2021-12-17 2021-12-17 Auditory neural interface device
US17/698,341 US20230191129A1 (en) 2021-12-17 2022-03-18 Auditory neural interface device
PCT/EP2022/058761 WO2022207910A1 (fr) 2021-04-01 2022-04-01 Balance prosthesis and auditory interface device and computer program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP21215578.2A EP4197587A1 (fr) 2021-12-17 2021-12-17 Auditory neural interface device

Publications (1)

Publication Number Publication Date
EP4197587A1 (fr) 2023-06-21

Family

ID=78957238

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21215578.2A Pending EP4197587A1 (fr) 2021-04-01 2021-12-17 Dispositif d'interface neurale auditive

Country Status (2)

Country Link
US (1) US20230191129A1 (fr)
EP (1) EP4197587A1 (fr)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7251530B1 (en) 2002-12-11 2007-07-31 Advanced Bionics Corporation Optimizing pitch and other speech stimuli allocation in a cochlear implant
US20110129093A1 (en) * 2009-05-27 2011-06-02 Maria Karam System and method for displaying sound as vibrations
US8065013B2 (en) 2002-02-04 2011-11-22 Boston Scientific Neuromodulation Corporation Method for optimizing search for spinal cord stimulation parameter setting
WO2012069429A1 (fr) * 2010-11-23 2012-05-31 National University Of Ireland Maynooth Méthode et appareil de substitution sensorielle
US20150332659A1 (en) * 2014-05-16 2015-11-19 Not Impossible LLC Sound vest
US20160012688A1 (en) 2014-07-09 2016-01-14 Baylor College Of Medicine Providing information to a user through somatosensory feedback
EP3170479A1 (fr) * 2015-11-17 2017-05-24 Neuromod Devices Limited Appareil et procédé pour traiter un trouble neurologique du système auditif
US9786201B2 (en) 2014-05-16 2017-10-10 Not Impossible LLC Wearable sound
US10437335B2 (en) 2015-04-14 2019-10-08 John James Daniels Wearable electronic, multi-sensory, human/machine, human/human interfaces
DE102019202666A1 (de) 2019-02-27 2020-08-27 CereGate GmbH Neuronales Kommunikationssystem
US10869142B2 (en) 2013-05-23 2020-12-15 Gn Hearing A/S Hearing aid with spatial signal enhancement

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8065013B2 (en) 2002-02-04 2011-11-22 Boston Scientific Neuromodulation Corporation Method for optimizing search for spinal cord stimulation parameter setting
US7251530B1 (en) 2002-12-11 2007-07-31 Advanced Bionics Corporation Optimizing pitch and other speech stimuli allocation in a cochlear implant
US20110129093A1 (en) * 2009-05-27 2011-06-02 Maria Karam System and method for displaying sound as vibrations
US9078065B2 (en) 2009-05-27 2015-07-07 Maria Karam System and method for displaying sound as vibrations
WO2012069429A1 (fr) * 2010-11-23 2012-05-31 National University Of Ireland Maynooth Méthode et appareil de substitution sensorielle
EP3574951B1 (fr) 2010-11-23 2021-06-09 National University of Ireland, Maynooth Procédé et appareil de substitution sensorielle
US10869142B2 (en) 2013-05-23 2020-12-15 Gn Hearing A/S Hearing aid with spatial signal enhancement
US9786201B2 (en) 2014-05-16 2017-10-10 Not Impossible LLC Wearable sound
US9679546B2 (en) 2014-05-16 2017-06-13 Not Impossible LLC Sound vest
US20150332659A1 (en) * 2014-05-16 2015-11-19 Not Impossible LLC Sound vest
US20160012688A1 (en) 2014-07-09 2016-01-14 Baylor College Of Medicine Providing information to a user through somatosensory feedback
US10437335B2 (en) 2015-04-14 2019-10-08 John James Daniels Wearable electronic, multi-sensory, human/machine, human/human interfaces
EP3170479A1 (fr) * 2015-11-17 2017-05-24 Neuromod Devices Limited Appareil et procédé pour traiter un trouble neurologique du système auditif
DE102019202666A1 (de) 2019-02-27 2020-08-27 CereGate GmbH Neuronales Kommunikationssystem

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"CAN ECAP MEASURES BE USED FOR TOTALLY OBJECTIVE PROGRAMMING OF COCHLEAR IMPLANTS?", DOI: 10.1007/s10162-013-0417-9, 19 September 2013 (2013-09-19)

Also Published As

Publication number Publication date
US20230191129A1 (en) 2023-06-22

Similar Documents

Publication Publication Date Title
Wouters et al. Sound coding in cochlear implants: From electric pulses to hearing
Loizou Mimicking the human ear
Zeng Trends in cochlear implants
US6925332B2 (en) Methods for programming a neural prosthesis
US8155747B2 (en) Electric and acoustic stimulation fitting systems and methods
WO2015035058A1 (fr) Système et procédé pour interface neurale animal-être humain
US9775999B2 (en) System comprising a cochlear stimulation device and a second hearing stimulation device and a method for adjustment according to a response to combined stimulation
Harczos et al. Making use of auditory models for better mimicking of normal hearing processes with cochlear implants: the SAM coding strategy
US20240024677A1 (en) Balance compensation
Feigenbaum Cochlear implant devices for the profoundly hearing impaired
Clark The multi‐channel cochlear implant: Past, present and future perspectives
EP4197587A1 (fr) Auditory neural interface device
Nguyen et al. Engineering challenges in cochlear implants design and practice
US20150328457A1 (en) Systems and methods for controlling a width of an excitation field created by current applied by a cochlear implant system
US20230308815A1 (en) Compensation of balance dysfunction
WO2024068005A1 (fr) Device, system and computer program for tinnitus suppression
WO2022207910A1 (fr) Balance prosthesis and auditory interface device and computer program
US20130267767A1 (en) Artifact Cancellation In Hybrid Audio Prostheses
Clark Learning to understand speech with the cochlear implant
Gantz et al. COCHLEAR IMPLANT COMPARISONS
WO2024141900A1 (fr) Audiological intervention
Melnikov et al. Analysis of Coding Strategies in Cochlear Implant Systems
Wilson et al. Use of auditory models in developing coding strategies for cochlear implants
WO2023209598A1 (fr) Dynamic-list-based speech testing
Ifukube et al. Functional Electrical Stimulation to Auditory Nerves

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20211217

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR