EP4395884A1 - Machine learning for treatment of physiological disorders - Google Patents
- Publication number
- EP4395884A1 (application number EP22863720.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- user
- data
- tinnitus
- physiological
- prior
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4836—Diagnosis combined with treatment in closed-loop systems or methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/12—Audiometering
- A61B5/128—Audiometering evaluating tinnitus
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6846—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be brought in contact with an internal body part, i.e. invasive
- A61B5/6847—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be brought in contact with an internal body part, i.e. invasive mounted on an invasive device
- A61B5/686—Permanently implanted devices, e.g. pacemakers, other stimulators, biochips
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6846—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be brought in contact with an internal body part, i.e. invasive
- A61B5/6867—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be brought in contact with an internal body part, i.e. invasive specially adapted to be attached or implanted in a specific body part
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N1/00—Electrotherapy; Circuits therefor
- A61N1/18—Applying electric currents by contact electrodes
- A61N1/32—Applying electric currents by contact electrodes alternating or intermittent currents
- A61N1/36—Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
- A61N1/36036—Applying electric currents by contact electrodes alternating or intermittent currents for stimulation of the outer, middle or inner ear
- A61N1/36038—Cochlear stimulation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2560/00—Constructional details of operational features of apparatus; Accessories for medical measuring apparatus
- A61B2560/02—Operational features
- A61B2560/0242—Operational features adapted to measure environmental factors, e.g. temperature, pollution
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/0205—Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
Definitions
- implantable medical devices now often include one or more instruments, apparatus, sensors, processors, controllers or other functional mechanical or electrical components that are permanently or temporarily implanted in a user. These functional devices are typically used to diagnose, prevent, monitor, treat, or manage a disease/injury or symptom thereof, or to investigate, replace or modify the anatomy or a physiological process. Many of these functional devices utilize power and/or data received from external devices that are part of, or operate in conjunction with, implantable components.
- a tinnitus therapy apparatus comprises: a stimulation component configured to deliver stimulation signals to at least one user; and a machine learning device that detects onset of tinnitus events of the at least one user with respect to an external sound environment, wherein the machine learning device includes: a state observing unit that obtains state data comprising physiological data representing a current physiological state of the at least one user and environmental data representing an external sound environment of the at least one user, a label data unit that acquires label data associated with the onset of tinnitus events, and a learning unit that, by using the state data and the label data, detects the onset of the tinnitus events of the at least one user and generates device configuration data, wherein the device configuration data indicates a tinnitus therapy for delivery to the at least one user via the stimulation component.
- non-transitory computer readable storage media comprise instructions that, when executed by a processor, cause the processor to: obtain, with a state observing unit, state data indicating a current physiological state of at least one user; obtain, with a label data unit, label data associated with onset of physiological events; and use the state data and the label data in a machine-learning model to automatically detect onset of physiological events of the at least one user and generate device configuration data that indicates a therapy for delivery to the at least one user.
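The claimed flow — state data plus label data in, onset detection and device configuration data out — can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the class and method names are hypothetical, and a trivial nearest-centroid rule stands in for whatever machine-learning model an actual system would use.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class LearningUnit:
    """Toy nearest-centroid onset detector standing in for any ML model."""
    event_states: List[List[float]] = field(default_factory=list)
    quiet_states: List[List[float]] = field(default_factory=list)

    def train(self, state: List[float], event_onset: bool) -> None:
        # label data pairs a state vector with an onset / no-onset label
        (self.event_states if event_onset else self.quiet_states).append(state)

    @staticmethod
    def _centroid(states: List[List[float]]) -> List[float]:
        return [sum(col) / len(states) for col in zip(*states)]

    def detect_onset(self, state: List[float]) -> bool:
        # classify by squared distance to the nearer class centroid
        d_event = sum((a - b) ** 2
                      for a, b in zip(state, self._centroid(self.event_states)))
        d_quiet = sum((a - b) ** 2
                      for a, b in zip(state, self._centroid(self.quiet_states)))
        return d_event < d_quiet

    def device_configuration(self, state: List[float]) -> Dict[str, Optional[str]]:
        # "device configuration data": which therapy, if any, to deliver
        return {"therapy": "masking" if self.detect_onset(state) else None}
```

In use, labeled examples accumulate via `train()`, after which `detect_onset()` and `device_configuration()` can run without further user input — mirroring the state unit / label unit / learning unit split in the claims.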
- FIG. 1B is a side view of a user wearing a sound processing unit of the cochlear implant system of FIG. 1A;
- FIG. 1C is a schematic view of components of the cochlear implant system of FIG. 1A;
- FIG. 4 is a functional block diagram illustrating integration of a machine-learning therapy device within a tinnitus therapy system, in accordance with certain embodiments presented herein;
- FIG. 5 is a flowchart of an example method, in accordance with embodiments presented herein.
- FIG. 6 is a schematic diagram illustrating a vestibular implant system with which aspects of the techniques presented can be implemented.
- a physiological disorder is an illness that interferes with the way that the functions of the body are carried out.
- Physiological disorders are generally caused when the normal or proper functioning of the body is affected because the body’s organs have malfunctioned, are not working, and/or the actual cellular structures have changed over a period of time causing illness.
- a “physiological event” refers to the onset or presence of a symptom of a physiological disorder, such as the onset/presence of tinnitus, pain, etc.
- the techniques presented herein will generally be described with reference to treatment of inner ear physiological disorders (inner ear disorders) and, in particular, with reference to treatment of tinnitus.
- the techniques presented herein can be used to treat other inner ear disorders (e.g., vertigo, dizziness, etc.) and other types of physiological disorders (e.g., pain disorders, etc.).
- the techniques presented herein are primarily described with reference to cochlear implant systems and/or tinnitus therapy systems. However, it is to be appreciated that the techniques presented herein may also be partially or fully implemented by other types of devices, including implantable medical devices, computing devices, consumer electronic devices, etc.
- the techniques presented herein may be implemented by other auditory prosthesis systems that include one or more other types of auditory prostheses, such as middle ear auditory prostheses, bone conduction devices, direct acoustic stimulators, electro-acoustic prostheses, auditory brain stimulators, combinations or variations thereof, etc.
- the techniques presented herein may also be implemented by dedicated tinnitus therapy devices and tinnitus therapy device systems and/or tinnitus therapy devices forming part of another type of device (e.g., part of a hearing device).
- the techniques presented herein may also be implemented by, or used in conjunction with, vestibular devices (e.g., vestibular implants), visual devices (i.e., bionic eyes), sensors, pacemakers, drug delivery systems, defibrillators, functional electrical stimulation devices, catheters, seizure devices (e.g., devices for monitoring and/or treating epileptic events), sleep apnea devices, electroporation devices, etc.
- the techniques presented herein may also be partially or fully implemented by consumer devices, such as tablet computers, mobile phones, wearable devices, etc.
- Tinnitus is the perception of noise or “ringing” in the ears, which currently affects an estimated 10-15% of the general population, increasing with age. Tinnitus is a common artefact of hearing loss, but can also be a symptom of other underlying conditions, such as ear injuries, circulatory system disorders, etc. Although the effects of tinnitus can range from mild to severe, almost one-quarter of those with tinnitus describe it as disabling or nearly disabling/incapacitating; tinnitus can deteriorate a person’s quality of life and drastically impact sleep quality. Tinnitus can be particularly debilitating in silent or crowded environments.
- Tinnitus has a particularly high prevalence in hearing-impaired persons, and electrical stimulation of the inner ear, for instance through a cochlear implant, has shown promising results for tinnitus relief and can be considered a tinnitus management solution.
- a large number of cochlear implant users experience tinnitus reduction after cochlear implant activation.
- this particular population may not suffer from tinnitus when the cochlear implant is activated/on (e.g., delivering electrical stimulation to evoke hearing percepts), these users can still experience tinnitus when the cochlear implant is switched off and/or idle (e.g., in quiet environments).
- tinnitus therapies are activated, for example, manually when the user notices the presence of tinnitus and, in general, the particular tinnitus therapy will last for a predetermined period of time or until the user deactivates the therapy.
- conventional tinnitus therapy is activated at certain times of the day (e.g., when the user is attempting to sleep), when the cochlear implant is turned off, etc.
- the OTE sound processing unit 106 is merely illustrative of the external devices that could operate with implantable component 112.
- the external component may comprise a behind-the-ear (BTE) sound processing unit or a micro-BTE sound processing unit and a separate external coil assembly.
- BTE sound processing unit comprises a housing that is shaped to be worn on the outer ear of the user and is connected to the separate external coil assembly via a cable, where the external coil assembly is configured to be magnetically and inductively coupled to the implantable coil 114.
- alternative external components could be located in the user’s ear canal, worn on the body, etc.
- the OTE sound processing unit 106 also comprises the external coil 108, a charging coil 130, a closely-coupled radio-frequency (RF) transmitter/receiver, sometimes referred to as RF transceiver 122, at least one rechargeable battery 132, and an external sound processing module 124.
- the external sound processing module 124 may comprise, for example, one or more processors and a memory device (memory) that includes sound processing logic.
- the memory device may comprise any one or more of: Non-Volatile Memory (NVM), Ferroelectric Random Access Memory (FRAM), read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices.
- the one or more processors are, for example, microprocessors or microcontrollers that execute instructions for the sound processing logic stored in memory device.
- the implantable component 112 comprises an implant body (main module) 134, a lead region 136, and the intra-cochlear stimulating assembly 116, all configured to be implanted under the skin/tissue (tissue) 115 of the user.
- the implant body 134 generally comprises a hermetically-sealed housing 138 in which RF interface circuitry 140 and a stimulator unit 142 are disposed.
- the implant body 134 also includes the internal/implantable coil 114 that is generally external to the housing 138, but which is connected to the RF interface circuitry 140 via a hermetic feedthrough (not shown in FIG. 1D).
- sound processing unit 106 includes the external sound processing module 124.
- the external sound processing module 124 is configured to convert received input signals (received at one or more of the input devices) into output signals for use in stimulating a first ear of a user (i.e., the external sound processing module 124 is configured to perform sound processing on input signals received at the sound processing unit 106).
- the one or more processors in the external sound processing module 124 are configured to execute sound processing logic in memory to convert the received input signals into output signals that represent electrical stimulation for delivery to the user.
- the memory device may comprise any one or more of: Non-Volatile Memory (NVM), Ferroelectric Random Access Memory (FRAM), read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices.
- the one or more processors are, for example, microprocessors or microcontrollers that execute instructions for the sound processing logic stored in memory device.
- the implantable sound sensors 160 are configured to detect/capture signals (e.g., acoustic sound signals, vibrations, etc.), which are provided to the implantable sound processing module 158.
- the implantable sound processing module 158 is configured to convert received input signals (received at one or more of the implantable sound sensors 160) into output signals for use in stimulating the first ear of a user (i.e., the processing module 158 is configured to perform sound processing operations).
- the one or more processors in implantable sound processing module 158 are configured to execute sound processing logic in memory to convert the received input signals into output signals 156 that are provided to the stimulator unit 142.
- FIG. 2 is a functional block diagram illustrating training and final operation of a machine-learning therapy device 262, in accordance with embodiments presented herein. More specifically, the machine-learning therapy device 262 shown in FIG. 2 includes a state observing unit (state unit) 282, a label data unit 284, and a learning unit 286. As described below, the machine-learning therapy device 262 is configured to generate “device configuration data” 269 (e.g., one or more control outputs) representing at least a selected treatment/therapy for use by the system (implantable medical device) to treat a physiological disorder experienced by the user, where the physiological disorder is manifest as a physiological event. Stated differently, the machine-learning therapy device 262 is configured to determine a preferred therapy for use by the system to treat the user’s physiological disorder.
- the label data 285 can take different forms depending on the stage of the training process.
- the user notifies the system when he/she wants to change the therapy setting and will grade his/her subjective need via a user interface (as shown in FIGs. 3A, 3B, and 3C). That is, the label data 285 can represent both a preferred therapy and a subjective ranking/grading of a severity of a physiological event.
- the learning unit 286 correlates the state data 279 and the label data 285, over time, to develop the ability to automatically detect the occurrence of a specific physiological event and to automatically select a preferred therapy for the user, given the specific attributes of the detected physiological event and the user’s subjective preferences.
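One simple way a learning unit could "correlate" label data with detected events over time — sketched here purely for illustration, with hypothetical names and no claim to match the patent's method — is to tally, per event type, which therapy the user chose, then replay the majority choice as the preferred therapy:

```python
from collections import Counter, defaultdict
from typing import DefaultDict

class TherapyPreferenceModel:
    """Majority-vote stand-in for learned per-event therapy preferences."""

    def __init__(self) -> None:
        # event kind -> counts of therapies the user chose for that event
        self._choices: DefaultDict[str, Counter] = defaultdict(Counter)

    def record(self, event_kind: str, chosen_therapy: str) -> None:
        # label data: the user's real-time therapy selection for this event
        self._choices[event_kind][chosen_therapy] += 1

    def preferred(self, event_kind: str, default: str = "masking") -> str:
        # most frequently chosen therapy, falling back to a default
        counts = self._choices.get(event_kind)
        return counts.most_common(1)[0][0] if counts else default
```

A real system would condition on richer event attributes (severity grade, sound environment, time of day) rather than a single event-kind string; the tally structure is only meant to show the correlation-over-time idea.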
- the label data unit 284 can be a dynamic and progressive unit that collects label data differently depending on the phase of training/use.
- the label data 285 is data collected by the label data unit 284 in real-time.
- the user is asked/instructed to notify the system when he/she wants to change/optimize the treatment based on his/her subjective input.
- the label data 285 can comprise a real-time selection of a preferred therapy in the presence of a physiological event.
- the learning unit 286 is trained to determine which state data 279 represents a specific physiological event and how the user prefers to treat that specific physiological event.
- the learning unit 286 builds upon the training of the first phase and operates to detect physiological events and selects therapies for treatment of the physiological events over a period of time.
- this is a form of semi-supervised learning where the user is asked to confirm or deny the therapy selections made by the system during the time period.
- the label data 285 collected by the label data unit 284 is retrospective data corresponding to a previous period of time during which the system made selections of preferred therapies and/or therapy changes. For example, the user is asked retrospectively to evaluate therapies automatically selected/adapted by the system during the previous hour, day, etc.
- the label data 285 can comprise a retrospective confirmation or evaluation of one or more therapies automatically selected by the system in the presence of a physiological event.
- Label data 285 collected retrospectively is sometimes referred to herein as “retrospective event reporting data” as it indicates the retrospective subjective preferences of the user in relation to the event detection and/or therapy selections made by the learning unit 286.
- Label data 285 collected retrospectively within a tinnitus therapy system is sometimes referred to as “retrospective tinnitus event reporting data.”
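The retrospective-reporting phase described above can be sketched as a log of automatic selections that the user later reviews. This is a minimal illustration under assumed names (the patent does not specify a data structure): the system records each automatic therapy selection, then applies the user's after-the-fact confirm/deny verdicts to turn the log into label data.

```python
from typing import Dict, List, Optional

class RetrospectiveLog:
    """Collects automatic therapy selections for later user review."""

    def __init__(self) -> None:
        self.entries: List[Dict[str, Optional[object]]] = []

    def record_selection(self, when: str, therapy: str) -> None:
        # the system's automatic selection; verdict unknown until review
        self.entries.append({"when": when, "therapy": therapy, "confirmed": None})

    def review(self, verdicts: List[bool]) -> List[Dict[str, Optional[object]]]:
        """Apply the user's retrospective confirm/deny verdicts in order,
        returning the confirmed entries (usable as positive label data)."""
        for entry, ok in zip(self.entries, verdicts):
            entry["confirmed"] = ok
        return [e for e in self.entries if e["confirmed"]]
```

Confirmed entries become "retrospective event reporting data" in the patent's terminology; denied entries could likewise feed back as negative examples.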
- the label data 285 collected by the label data unit 284 is data generated automatically based on feedback history (e.g., prior user selections, including real-time event reporting data and retrospective event reporting data) and the user does not necessarily provide any manual inputs.
- the label data 285 is generated automatically based on the prior training phases.
- while the user is not asked to give inputs to the system during the automated-operation phase, such inputs can still be provided, as needed, either in real-time or retrospectively.
- the entry of a user input at this stage results in a change to the feedback/training history (e.g., the feedback history is updated if the user notifies the system of a therapy change).
- the user validation can operate as a reward/penalty input 267 for adaptation of the machine-learning process (e.g., adaptation of the learning unit 286).
- Label data 285 generated automatically based on feedback history is sometimes referred to herein as “historical event reporting data” as it is built upon prior real-time and retrospective subjective preferences of the user in relation to the event detection and/or therapy selections.
- Label data 285 generated automatically within a tinnitus therapy system is sometimes referred to as “historical tinnitus event reporting data.”
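The reward/penalty input 267 mentioned above can be pictured as a running per-therapy score: confirmations nudge a therapy's score up, denials nudge it down, and the scores bias future selections. The bandit-style update below is an assumption for illustration only — the patent does not commit to this adaptation rule.

```python
from typing import Dict

class RewardPenaltyAdapter:
    """Toy reward/penalty adaptation: a running score per therapy."""

    def __init__(self, step: float = 0.1) -> None:
        self.step = step
        self.scores: Dict[str, float] = {}

    def feedback(self, therapy: str, confirmed: bool) -> float:
        # confirmation acts as reward, denial as penalty
        delta = self.step if confirmed else -self.step
        self.scores[therapy] = self.scores.get(therapy, 0.0) + delta
        return self.scores[therapy]

    def best(self) -> str:
        # therapy with the highest accumulated score
        return max(self.scores, key=self.scores.get)
```

In the automated-operation phase, occasional user corrections would call `feedback()`, so the historical training data keeps adapting without routine manual labeling.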
- the learning unit 286 generates the device configuration data 269 from the state data 279, the label data 285, and, in certain examples, the operating state data 277.
- the label data 285 can be progressively changed, over time, so as to decrease in the level of involvement and awareness of the user to the selection of a therapy at a given time.
- the user does not need to notify the system of a physiological event or to change the selected therapy (e.g., operating state of his/her tinnitus management program) because the system automatically identifies the physiological event and selects the user’s preferred therapy for treatment of the physiological event based on the historical training data.
- the above three training phases are merely illustrative, and the techniques presented herein can use other training phases to train a system to detect a physiological event and select the user’s preferred therapy for treatment of the physiological event.
- FIGs. 3A, 3B, and 3C are example user interfaces that can be used in the above or other example training phases to provide inputs that result in the generation of label data specifically in relation to tinnitus. More specifically, the interfaces shown in FIGs. 3A and/or 3B could be used during the first training phase to activate, stop, or change a tinnitus therapy. The interface shown in FIG. 3C could be used, for example, to provide an indication of the severity of a tinnitus event. It is to be appreciated that these three interfaces are merely illustrative.
- FIG. 4 illustrates a specific use of the techniques presented to select a preferred tinnitus therapy for a user. That is, FIG. 4 is a functional block diagram illustrating an example tinnitus therapy system 202 configured with a machine-learning therapy device, such as machine-learning therapy device 262, for automated selection of tinnitus therapies in response to detected tinnitus events.
- the tinnitus therapy system 202 could be a stand-alone implantable tinnitus therapy device or incorporated as part of an auditory prosthesis, such as a cochlear implant, bone conduction device, middle ear auditory prosthesis, direct acoustic stimulator, auditory brain stimulator, etc.
- the tinnitus therapy system 202 can comprise or be a component of, for example, a medical device system (e.g., a cochlear implant system), a computing device, a consumer electronic device, etc.
- the term “user” is used to generically refer to any user of a tinnitus therapy system, such as tinnitus therapy system 202, who suffers from tinnitus. The user can also suffer from hearing impairments or physiological disorders other than tinnitus.
- the tinnitus therapy system 202 comprises a sensor unit 264, a processing unit 266, and a stimulation unit 268.
- the sensor unit 264, the processing unit 266, and the stimulation unit 268 can each be implemented across one or more different devices and, as such, the specific configuration shown in FIG. 4 is merely illustrative.
- the sensor unit 264 comprises a plurality of sensors 265(1)-265(N) that are each configured to capture signals representing one or more of a current physiological state of a user or an ambient/external sound environment of the user.
- the signals captured by the sensors 265(1)-265(N) are the “state data” or “state variables” 279 (FIG. 2) and can take a number of different forms and can be captured by a number of different sensors.
- the sensors 265(1)-265(N) can comprise sound sensors (e.g., microphones capturing sound signals), movement sensors (e.g., accelerometers capturing accelerometer signals), body noise sensors, medical sensors, such as electroencephalogram (EEG) sensors (e.g., one or more external or implantable electrodes and one or more associated recording amplifiers configured to record/measure electrical activity in the user’s brain), electromyography (EMG) sensors or other muscle or eye movement detectors (e.g., one or more external or implantable electrodes and one or more associated recording amplifiers configured to record/measure muscle response or electrical activity in response to a nerve's stimulation of the muscle), photoplethysmography (PPG) sensors (e.g., sensors configured to optically detect volumetric changes in blood in peripheral circulation), electro-oculogram (EOG) sensors, polysomnographic sensors, magnetoencephalography (MEG) sensors, heart rate sensors, temperature sensors, skin conductance sensors, Functional Near-Infrared Spectroscopy (fNIRS) sensors, etc.
- the state data 279 can include not only the direct sensor signals, but also processed versions of the sensor signals.
- the state data 279 can include sound/environmental classification data generated from captured sound signals.
- a sound classification module is configured to evaluate/analyze the sound signals and determine the sound class of the sound signals. That is, the sound classification module is configured to use the received sound signals to “classify” the ambient sound environment and/or the sound signals into one or more sound categories (i.e., determine the input signal type).
- the sound classes/categories may include, but are not limited to, “Speech,” “Noise,” “Speech+Noise,” “Music,” and “Quiet.”
- the sound classification module can also estimate the signal-to-noise ratio (SNR) of the sound signals.
- the sound classification module generates sound classification data that can be part of the state data 279.
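A minimal sketch of the kind of sound classification described above, written in Python; the thresholds, feature names ("speech-band energy"), class boundaries, and SNR estimate are invented for illustration and are not taken from the patent:

```python
# Illustrative sound classifier: assigns one of the sound classes named above
# ("Speech", "Noise", "Speech+Noise", "Music", "Quiet") and a crude SNR
# estimate. All thresholds here are hypothetical.
import math

def rms(frame):
    """Root-mean-square level of one frame of samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def classify_frame(frame, speech_band_energy, noise_floor=0.01):
    """Return (sound_class, snr_db) for one frame of sound samples.

    `speech_band_energy` is an assumed precomputed feature in [0, 1]
    indicating how much of the frame's energy lies in speech bands.
    """
    level = rms(frame)
    if level < noise_floor:
        return "Quiet", 0.0
    # Crude SNR estimate against an assumed stationary noise floor.
    snr_db = 20 * math.log10(level / noise_floor)
    if speech_band_energy > 0.7:
        sound_class = "Speech" if snr_db > 15 else "Speech+Noise"
    else:
        sound_class = "Noise"
    return sound_class, snr_db
```

Output of a classifier like this (class label plus SNR estimate) is what would flow into the state data 279.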
- the state data 279 represents a combination of skin conductance values, heart rate variability values, and accelerometer signals.
- the state data 279 represents a combination of skin conductance values and photoplethysmography (PPG) sensor signals, such as heart rate variability and blood volume values.
- the state data 279 represents a combination of neurophysiological measurements, such as EEG signals, MEG signals, and fNIRS signals. It is to be appreciated that these specific combinations of sensor outputs as state data 279 are merely illustrative and that any of a number of different combinations of sensor outputs can be used in alternative embodiments.
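One hypothetical way to combine such sensor outputs into a single state representation is a fixed-order feature vector; the sensor keys and the zero-fill behavior below are assumptions for illustration, not specified by the patent:

```python
# Sketch: flatten heterogeneous sensor readings into one ordered feature
# vector, substituting 0.0 for any sensor that produced no reading.
# The sensor names are hypothetical.
def build_state_vector(readings, order=("skin_conductance",
                                        "heart_rate_variability",
                                        "accel_x", "accel_y", "accel_z")):
    """Return a fixed-order list of floats built from a dict of readings."""
    return [float(readings.get(key, 0.0)) for key in order]
```

A vector like this is one possible concrete form of the state data 279 presented to the machine-learning therapy device.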
- the state data 279 captured by, or generated from, the sensors 265(1)-265(N) are converted into electrical input signals (if not already in an electrical form), which are represented in FIG. 4 by arrow 279.
- the state data 279 (electrical input signals) is provided to the machine-learning therapy device 262.
- because the machine-learning therapy device 262 is used specifically to treat a user’s tinnitus, it can be referred to as a “machine-learning tinnitus therapy device” (e.g., a machine-learning model configured specifically for treatment of tinnitus).
- the processing unit 266 comprises the machine-learning tinnitus therapy device 262, a control module 272, and a remote control module 278. It is to be appreciated that the functional arrangement shown in FIG. 4 is merely illustrative and does not require or imply any specific structural arrangements. The various functional modules shown in FIG. 4 can be implemented in any combination of hardware, software, firmware, etc., and one or more of the modules could be omitted in different embodiments.
- the machine-learning tinnitus therapy device 262 uses the state data 279, the label data 285, and potentially the operating state data 277, to determine whether tinnitus is present and, at least in a final or automated-operation phase, to generate device configuration data 269, based on this determination, that is used to generate tinnitus therapy signals 283 for delivery to the user. That is, as noted, the device configuration data 269 represents the user’s preferred tinnitus therapy settings/program, as determined through a machine-learning process, such as the one described above with reference to FIG. 2.
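A minimal sketch of this decision step, assuming a trained model that scores the state for a tinnitus level and a table of candidate configurations; the model interface, level scale, and configuration fields are hypothetical, not the patent's implementation:

```python
# Sketch: map a predicted tinnitus level to device configuration data.
# `model` is any object with a predict_level(state) -> int method (0 = no
# tinnitus); `configs` maps a level to a settings dict. Both are assumptions.
def select_configuration(model, state_vector, configs):
    """Return the therapy configuration for the predicted tinnitus level,
    or None when no tinnitus is detected."""
    level = model.predict_level(state_vector)
    if level == 0:
        return None  # no tinnitus detected: no therapy needed
    # Fall back to the highest configured level if the prediction exceeds it.
    return configs.get(level, configs[max(configs)])
```

The returned settings dict plays the role of the device configuration data 269 that drives generation of the therapy signals.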
- the control module 272 is configured to use device configuration data 269 to select, set, determine, or otherwise adjust a tinnitus therapy for the user, as a function of the detected tinnitus (e.g., implement the appropriate tinnitus therapy for the user, as determined by the machine-learning tinnitus therapy device 262).
- the tinnitus therapy that is to be provided to the user is specifically determined and adjusted, in real-time, based on the user’s state (e.g., stress, specific needs, etc.) in the presence of tinnitus, potentially at different levels, as determined by the machine-learning tinnitus therapy device 262.
- the tinnitus therapy could also be adapted based on the ambient sound environment.
- the tinnitus therapy signals 283 can be masking signals that are configured to mask/cover the user’s tinnitus symptoms (e.g., expose the user to sounds/noises at a loud enough volume that it partially or completely covers the sound of their tinnitus).
- the tinnitus therapy signals 283 can be distraction signals that are configured to divert the user’s attention from the sound of tinnitus.
- the tinnitus therapy signals 283 can be habituation signals that are configured to assist the user’s brain in reclassifying tinnitus as an unimportant sound that can be consciously ignored.
- the tinnitus therapy signals 283 can be neuromodulation signals that are configured to minimize the neural hyperactivity thought to be the underlying cause of tinnitus. In certain embodiments, the tinnitus therapy signals 283 can be any combination of masking signals, distraction signals, habituation signals, and/or neuromodulation signals.
- the tinnitus therapy system 202 includes the stimulation unit 268 that is configured to generate the tinnitus therapy signals 283, whether configured for masking, distraction, habituation, and/or neuromodulation purposes.
- the stimulation unit 268 operates based on tinnitus therapy control signals 281 from the control module 272.
- the tinnitus therapy control signals 281 can dictate a number of different attributes/parameters for the tinnitus therapy signals 283.
- the control signals 281 can be such that the tinnitus therapy signals 283 will be pure tone signals, multi-tone signals, broadband noise, narrowband noise, low-pass filtered signals, high-pass filtered signals, bandpass filtered signals, predetermined recordings, etc.
- the tinnitus therapy control signals 281 can also set modulations in the tinnitus therapy signals 283, transitions, etc. It is to be appreciated that these specific parameters are merely illustrative, and that the tinnitus therapy signals 283 can have any of a number of different forms.
- the tinnitus therapy signals 283 can be electrical stimulation signals, mechanical stimulation signals, electro-mechanical stimulation signals (e.g., electrical signals and mechanical signals delivered simultaneously or in close temporal proximity to one another), acoustic stimulation signals, electro-acoustic stimulation signals (e.g., electrical signals and acoustic signals delivered simultaneously or in close temporal proximity to one another), etc.
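Two of the signal forms listed above (a pure tone and broadband noise) can be sketched as follows; the function names, the 0.0-1.0 level convention, and the sample rate are assumptions for illustration only:

```python
# Sketch: synthesize two kinds of therapy signal from control parameters --
# a pure-tone masker and broadband (white) noise, each scaled by a `level`
# parameter in [0.0, 1.0]. Parameter conventions are hypothetical.
import math
import random

def pure_tone(freq_hz, level, duration_s, sample_rate=16000):
    """Generate a pure-tone signal of the given frequency and level."""
    n = int(duration_s * sample_rate)
    return [level * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

def broadband_noise(level, duration_s, sample_rate=16000):
    """Generate broadband (white) noise scaled to the given level."""
    n = int(duration_s * sample_rate)
    return [level * random.uniform(-1.0, 1.0) for _ in range(n)]
```

In a system like the one described, the machine-learning device's output would choose between such generators and set their parameters (level, frequency, modulation) over time.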
- the machine-learning tinnitus therapy device 262 is trained to determine the preferred tinnitus therapy.
- the machine-learning tinnitus therapy device 262 can be trained to dynamically adjust a level (amplitude) of the tinnitus therapy signals 283 based on the level of the tinnitus (e.g., from a level of zero to a max level).
- the machine-learning tinnitus therapy device 262 can be trained to adjust a frequency or modulation of the tinnitus therapy signals 283.
- the machine-learning tinnitus therapy device 262 can be trained to adjust the type of tinnitus therapy signals 283 (e.g., select one of, or switch between, masking signals, distraction signals, habituation signals, and/or neuromodulation signals).
- in examples in which the tinnitus therapy signals 283 are electrical stimulation (current) signals, the machine-learning tinnitus therapy device 262 can be trained to adjust one or more of the current level, pulse rate, or pulse width of the tinnitus therapy signals 283.
- the control module 272 is configured to store a plurality of different tinnitus therapy maps 275.
- each of the tinnitus therapy maps 275 is a set/collection of parameters that, when selected, control the generation of the tinnitus therapy signals (e.g., used to generate tinnitus therapy control signals 281).
- the parameters can control the sound type (e.g., white noise, wave sounds, rain sounds, etc.), fluctuation or modulation rate, amplitude, sound or masker level settings, on/off, pitch settings, transition time settings, etc.
- different tinnitus therapy maps 275 can be created (e.g., by the software, an audiologist/clinician, through artificial intelligence, etc.) for different situations (i.e., different combinations of body noise classification(s) and environmental classifications).
- maps for different therapies such as specific maps for masking, specific maps for distraction, specific maps for habituation, specific maps for retraining, etc.
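One possible concrete representation of such therapy maps is a named collection of generation parameters per therapy type; every parameter value below is invented for illustration and is not taken from the patent:

```python
# Sketch: tinnitus therapy "maps" as named parameter sets, plus a helper that
# builds control parameters from a selected map with optional learned
# overrides. Map contents and field names are hypothetical.
THERAPY_MAPS = {
    "masking":     {"sound_type": "white_noise", "level": 0.6, "mod_rate_hz": 0.0},
    "distraction": {"sound_type": "wave_sounds", "level": 0.4, "mod_rate_hz": 0.2},
    "habituation": {"sound_type": "rain_sounds", "level": 0.3, "mod_rate_hz": 0.1},
}

def control_signals_for(map_name, overrides=None):
    """Build therapy control parameters from a selected map, applying any
    per-user overrides (e.g., a learned level adjustment) on top."""
    params = dict(THERAPY_MAPS[map_name])  # copy so the stored map is untouched
    params.update(overrides or {})
    return params
```

Selecting among such maps, and overriding individual parameters, corresponds to the two behaviors the machine-learning device can be trained for: choosing a map and dynamically adjusting signal attributes.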
- the machine-learning tinnitus therapy device 262 can be trained to select one of the tinnitus therapy maps 275 for use in generating the tinnitus therapy signals delivered to the user and/or dynamically adjust settings/attributes of the tinnitus therapy signals 283.
- it is to be appreciated that the presence of multiple tinnitus maps is merely illustrative and that other embodiments could include one or zero tinnitus maps.
- the different tinnitus therapy maps 275 could be omitted in alternative embodiments and, instead, the machine-learning tinnitus therapy device 262 is trained to dynamically determine the settings/attributes for tinnitus therapy control signals 281. That is, the specific use of tinnitus therapy maps is merely illustrative and embodiments presented herein can be implemented without the use of stored tinnitus maps.
- the processing unit 266 also comprises a remote control module 278.
- the remote control module 278 can be used to update/adjust, over time, what tinnitus therapy map is selected by the control module 272 based, for example, on user preferences. That is, the remote control module 278 can be used as part of the training process described with reference to FIG. 2 to, for example, receive control data from an external device (e.g., mobile phone) operating with the tinnitus therapy system 202.
- the tinnitus therapy system 202 is, in certain examples, configured to deliver stimulation signals to the user in order to remediate the user’s tinnitus.
- the tinnitus therapy can be started when needed and/or ended when not needed anymore.
- the stimulation signals, referred to herein as tinnitus therapy signals, can be subthreshold signals (e.g., inaudible electrical stimulation signals) or suprathreshold signals (e.g., audible electrical stimulation signals).
- FIG. 4 illustrates an embodiment in which the machine-learning tinnitus therapy device 262 is configured to implement an automated learning or adaptation process to learn which tinnitus relief settings are optimal for the user (e.g., which signals and parameter settings enable the user to fall asleep the fastest, which signals and parameter settings are preferred by the user, etc.).
- the machine-learning tinnitus therapy device 262 is, or includes, a classification function/model configured to generate a classification of whether tinnitus is present or not, which is accordingly used to set a therapy.
- FIG. 5 is a flowchart of an example method 590 for treating tinnitus events using machine learning, in accordance with certain embodiments presented herein.
- Method 590 begins at 592 where a state observing unit obtains state data indicating a current physiological state of at least one user.
- a label data unit obtains label data associated with onset of tinnitus events.
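The two steps above (obtaining state data and obtaining label data) can be sketched as a minimal collection loop; the data-source callables and the one-to-one pairing of states with labels are assumptions for illustration, not the patent's implementation:

```python
# Sketch: pair each observed physiological state with the corresponding
# tinnitus label (e.g., a user-reported onset/severity indication), producing
# examples a learning model could train on. The interfaces are hypothetical.
def collect_training_examples(state_source, label_source, n):
    """Return n (state, label) pairs drawn from the two sources."""
    examples = []
    for _ in range(n):
        state = state_source()   # e.g., current sensor snapshot
        label = label_source()   # e.g., user-reported tinnitus severity
        examples.append((state, label))
    return examples
```

In the method of FIG. 5, examples of this kind would feed the machine-learning process that associates physiological states with tinnitus events.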
- the techniques presented herein may also be implemented by, or used in conjunction with, vestibular devices (e.g., vestibular implants), visual devices (i.e., bionic eyes), sensors, pacemakers, drug delivery systems, defibrillators, functional electrical stimulation devices, catheters, seizure devices (e.g., devices for monitoring and/or treating epileptic events), sleep apnea devices, electroporation devices, etc.
- FIG. 6 illustrates an example vestibular stimulator system 602, with which embodiments presented herein can be implemented.
- the vestibular stimulator system 602 comprises an implantable component (vestibular stimulator) 612 and an external device/component 604 (e.g., external processing device, battery charger, remote control, etc.).
- the external device 604 comprises a wireless power transmitter unit 660 that may have an arrangement that is similar to, for example, wireless power transmitter units 360 or 860, described above.
- the external device 604 is configured to transfer power (and potentially data) to the vestibular stimulator 612.
- the stimulating assembly 616 is configured such that a surgeon can implant the stimulating assembly adjacent the user’s otolith organs via, for example, the user’s oval window. It is to be appreciated that this specific embodiment with three stimulation electrodes is merely illustrative and that the techniques presented herein may be used with stimulating assemblies having different numbers of stimulation electrodes, stimulating assemblies having different lengths, etc.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Medical Informatics (AREA)
- Physics & Mathematics (AREA)
- Biophysics (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Molecular Biology (AREA)
- Heart & Thoracic Surgery (AREA)
- Surgery (AREA)
- Pathology (AREA)
- Otolaryngology (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Radiology & Medical Imaging (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Physical Education & Sports Medicine (AREA)
- Prostheses (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163240421P | 2021-09-03 | 2021-09-03 | |
PCT/IB2022/057559 WO2023031712A1 (en) | 2021-09-03 | 2022-08-12 | Machine learning for treatment of physiological disorders |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4395884A1 true EP4395884A1 (en) | 2024-07-10 |
Family
ID=85412017
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP22863720.3A Pending EP4395884A1 (en) | 2021-09-03 | 2022-08-12 | Machine learning for treatment of physiological disorders |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4395884A1 (zh) |
CN (1) | CN117897203A (zh) |
WO (1) | WO2023031712A1 (zh) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008031169A1 (en) * | 2006-09-14 | 2008-03-20 | Cochlear Limited | Medical implant configuration method |
US20140303424A1 (en) * | 2013-03-15 | 2014-10-09 | Iain Glass | Methods and systems for diagnosis and treatment of neural diseases and disorders |
EP3687624A4 (en) * | 2017-09-29 | 2021-06-16 | Cochlear Limited | HEARING PROSTHESIS |
US10790919B1 (en) * | 2019-03-26 | 2020-09-29 | Electronic Arts Inc. | Personalized real-time audio generation based on user physiological response |
CN110613459B (zh) * | 2019-09-12 | 2023-03-24 | 江苏贝泰福医疗科技有限公司 | 一种基于共享云计算平台的耳鸣耳聋检测验配及治疗系统 |
- 2022
- 2022-08-12 CN CN202280059041.0A patent/CN117897203A/zh active Pending
- 2022-08-12 WO PCT/IB2022/057559 patent/WO2023031712A1/en active Application Filing
- 2022-08-12 EP EP22863720.3A patent/EP4395884A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN117897203A (zh) | 2024-04-16 |
WO2023031712A1 (en) | 2023-03-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112470495B (zh) | 用于假体的睡眠相关调整方法 | |
US20180085581A1 (en) | Perception change-based adjustments in hearing prostheses | |
US20240024677A1 (en) | Balance compensation | |
US20220330844A1 (en) | Systems and methods for monitoring and acting on a physiological condition of a stimulation system recipient | |
CN115768514A (zh) | 绕过验证的医疗装置控制 | |
US20230110745A1 (en) | Implantable tinnitus therapy | |
US20220387781A1 (en) | Implant viability forecasting | |
US20230308815A1 (en) | Compensation of balance dysfunction | |
EP4395884A1 (en) | Machine learning for treatment of physiological disorders | |
US20240090828A1 (en) | Wakefulness-level tinnitus therapy | |
US20240335661A1 (en) | Phase coherence-based analysis of biological responses | |
US20240306945A1 (en) | Adaptive loudness scaling | |
US20230226351A1 (en) | Surgical healing monitoring | |
US20230372712A1 (en) | Self-fitting of prosthesis | |
WO2023126756A1 (en) | User-preferred adaptive noise reduction | |
WO2024209308A1 (en) | Systems and methods for affecting dysfunction with stimulation | |
WO2023119076A1 (en) | Tinnitus remediation with speech perception awareness | |
CN112638470A (zh) | 利用修复体技术和/或其它技术的生理测量管理 | |
WO2024141900A1 (en) | Audiological intervention | |
WO2023222361A1 (en) | Vestibular stimulation for treatment of motor disorders | |
WO2024023676A1 (en) | Techniques for providing stimulus for tinnitus therapy | |
WO2024057131A1 (en) | Unintentional stimulation management | |
WO2024003688A1 (en) | Implantable sensor training | |
WO2024079571A1 (en) | Deliberate recipient creation of biological environment | |
WO2024052781A1 (en) | Smooth switching between medical device settings |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20240228 |
|
AK | Designated contracting states |
Kind code of ref document: A1 |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |