US20230412996A1 - System and method for masking tinnitus - Google Patents
- Publication number
- US20230412996A1 (Application No. US 18/273,305)
- Authority
- US
- United States
- Prior art keywords
- sound
- sensor
- sounds
- processing device
- sensors
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/75—Electric tinnitus maskers providing an auditory perception
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/12—Audiometering
- A61B5/128—Audiometering evaluating tinnitus
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/683—Means for maintaining contact with the body
- A61B5/6831—Straps, bands or harnesses
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient ; user input means
- A61B5/742—Details of notification to user or communication with user or patient ; user input means using visual displays
- A61B5/7435—Displaying user selection data, e.g. icons in a graphical user interface
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B7/00—Instruments for auscultation
- A61B7/02—Stethoscopes
- A61B7/04—Electric stethoscopes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/55—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
- H04R25/558—Remote control, e.g. of amplification, frequency
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0204—Acoustic sensors
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0233—Special features of optical sensors or probes classified in A61B5/00
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0247—Pressure sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/43—Signal processing in hearing aids to enhance the speech intelligibility
Definitions
- Tinnitus is an auditory perception of sound without an external source. An estimated 50 million Americans suffer from tinnitus. There are very few available treatments for tinnitus. For many, tinnitus is perceived as a ringing sound, while for others, it is perceived as whistling, buzzing, chirping, hissing, and/or humming. The sound may seem to come from one ear or both, from inside the head, or from a distance. Tinnitus may also be constant or intermittent. Thus, there is a need for a system and method to mask tinnitus to provide relief for millions of people suffering from this condition.
- the present disclosure provides a system and method for masking tinnitus, including continuous or intermittent tinnitus.
- the system includes one or more input devices configured to capture normal physiological sounds of a person wearing the input device(s). Physiological sounds that are recorded include, but are not limited to, blood flow, heartbeat, respiration, digestion, etc.
- the system also includes one or more output devices configured to play back the physiological sounds at supraphysiological levels to provide the patient suffering from tinnitus with more “normal” sounds.
- the term “supraphysiological level” denotes an amplified level that is above the amplitude of the physiological sound when the sound was recorded. These sounds allow the patient to focus on the physiological sounds, thereby masking tinnitus.
- the system according to the present disclosure may also be used to help any persons suffering from a debilitating state where focus on the “normal” body sounds could be helpful (e.g., meditation training, post-traumatic stress disorder, combat stress, anxiety, insomnia, etc.).
- a system for masking a perceived sound includes a wearable device disposed on a person, the wearable device including a plurality of sensors, which may be acoustic or ultrasound sensors, each of which is configured to output a sound waveform in response to sounds generated by physiological activity of the person.
- the system also includes a processing device coupled to the plurality of sensors and configured to process the sound waveforms.
- the system further includes a sound output device coupled to the processing device, the sound output device being configured to output the sound waveforms to mask a perceived sound.
- the wearable device may include a band.
- the band may be formed from an elastic material configured to induce arterial stenosis thereby increasing blood flow turbulence.
- the wearable device may have an ultrasound sensor that uses the Doppler effect to measure the blood flow and transmits a sound waveform derived from the blood flow measurements.
- the plurality of sensors includes at least one inner sensor disposed on an inner surface of the band and configured to measure sound generated by the blood flow or blood flow turbulence.
- the plurality of sensors includes at least one outer sensor disposed on an outer surface of the band and configured to measure external sounds.
- the sounds generated by the physiological activity of the person include at least one of a vascular sound, a cardiac sound, a respiratory sound, or a digestion sound.
- the processing device may be further configured to categorize the sound waveforms generated by the physiological activity and to store the sound waveforms as sound files in corresponding storage banks.
- the processing device further includes a user input device configured to display a graphical user interface.
- the graphical user interface is configured to enable selection of at least one of the sound files for output through the sound output device.
- the processing device is further configured to mix the sound waveforms.
- the sound output device may be a headphone, a cochlear implant, or a hearing aid.
- a method for masking a perceived sound includes placing a wearable device on a person, the wearable device including a plurality of sensors. The method also includes generating a sound waveform at each sensor of the plurality of sensors in response to sounds generated by physiological activity of the person. The method further includes processing the sound waveforms at a processing device and outputting the sound waveforms through a sound output device coupled to the processing device to mask a perceived sound.
- the wearable device includes a band formed from an elastic material configured to induce arterial stenosis thereby increasing blood flow turbulence.
- the wearable device may include an adjustable band where the inner transducer is an ultrasound transducer with the ability to measure blood flow using the Doppler effect and transmit the sound waveform of the normal blood flow in the absence of turbulence.
- the plurality of sensors includes at least one inner sensor disposed on an inner surface of the band and configured to measure sound generated by the blood flow or blood turbulence.
- the plurality of sensors includes at least one outer sensor disposed on an outer surface of the band and configured to measure external sounds.
- the sounds generated by the physiological activity of the person include at least one of a cardiovascular sound, a respiratory sound, or a digestion sound.
- the method may further include categorizing the sounds generated by physiological activity of the person; and storing the categorized sounds as sound files in corresponding storage banks.
- the method may further include mixing the sound waveforms into a combined sound output, or separating overlapping sound waves into individual component waveforms divided by sound source by matching the waveform to a sound database.
- the sound output device may be one of a headphone, a cochlear implant, or a hearing aid.
- FIG. 1 is a schematic diagram of a system for masking tinnitus according to one embodiment of the present disclosure.
- FIG. 2 is a perspective view of a wearable device for receiving sounds according to one embodiment of the present disclosure.
- FIG. 3 is a schematic diagram of a processing device of the system of FIG. 2 according to one embodiment of the present disclosure.
- FIG. 4 is a diagram of a graphical user interface of the processing device of FIG. 3 according to one embodiment of the present disclosure.
- FIG. 5 is a perspective view of a wearable transducer assembly according to one embodiment of the present disclosure.
- FIG. 1 shows a system 10 for generating sounds to mask tinnitus and other distracting or debilitating sounds.
- the system 10 includes one or more wearable devices 20 having one or more sensors that are connected to a processing device 30 ( FIG. 2 ), which in turn is connected to a sound output device 50 .
- the processing device 30 may be a computing device 40, e.g., a tablet or a mobile phone.
- the computing device 40 may be used in conjunction with the processing device 30 .
- the sound output device 50 may be any suitable headphone or earpiece disposed in and/or over the ear of the patient “P.”
- the sound output device 50 may also be a cochlear implant or any other hearing aid.
- the sound output device 50 outputs live or prerecorded sounds received by the wearable device 20, such as the person's blood flow sounds, breathing sounds, heartbeat, digestion sounds, and other sounds generated by physiological activity of the body of the person “P,” to mask tinnitus.
- the wearable device 20 may be worn at one or more locations around the body or a limb of a person “P”, such as a wrist, ankle, chest, etc.
- the wearable device 20 may be attached to the person “P” using a band 22 or an adhesive bandage (not shown), such that the wearable device 20 is in physical contact with the person “P” allowing for measurement of sounds generated by the person “P.”
- the band 22 may be formed from an elastic material, such as silicone, rubber, combinations thereof, or any other suitable stretchable elastomer.
- the band 22 is fitted about the wrist to induce arterial stenosis, thereby generating blood flow turbulence to enhance sound generation associated with the blood flow.
- any suitable strap may be used, such as an adjustable and/or an elastic strap.
- the band 22 may be formed as a single strip. In embodiments, the band 22 may be formed from one or more strips or filaments woven in any suitable pattern.
- the wearable device 20 includes one or more inner sensors 24 disposed on an inner surface 22 a (i.e., the surface directly in contact with the person “P”) of the band 22 .
- the inner sensor 24 may be a sensor configured to measure sounds generated within the person “P.”
- the inner sensor 24 may be a microphone or any other type of acoustic transducer configured to measure sound, such as a flexible membrane transducer, a micro-electromechanical systems (MEMS) microphone, an electret diaphragm microphone, or any other microphone.
- the inner sensor 24 picks up sounds generated by the blood flow, which is accentuated by the compression of the band 22 .
- the inner sensor 24 picks up sounds generated by the heart, the digestive system, and the respiratory system of the person “P.”
- the inner sensor 24 , i.e., when the wearable device 20 is worn around the wrist, may be an ultrasound device configured to measure the blood flow using the Doppler effect or any other suitable technique and, in the absence of turbulence, to present the information as a sound waveform.
- the inner sensor 24 may also be any other suitable transducer, such as an optical transducer, capable of measuring normal blood flow and transmitting blood flow sounds in the absence of turbulence.
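Purely as a non-limiting editorial illustration (the 5 MHz carrier, 60-degree insonation angle, and all function names below are assumptions, not taken from the disclosure), the continuous-wave Doppler relationship such an ultrasound sensor could use to render blood velocity as an audible waveform may be sketched as:

```python
import math

def doppler_shift_hz(v_blood_m_s, f0_hz=5e6, angle_deg=60.0, c_tissue_m_s=1540.0):
    """Classic continuous-wave Doppler equation: f_d = 2 * v * f0 * cos(theta) / c.

    v_blood_m_s   -- blood velocity along the vessel (m/s)
    f0_hz         -- transmitted ultrasound carrier (illustrative 5 MHz)
    angle_deg     -- insonation angle between beam and flow (assumed)
    c_tissue_m_s  -- speed of sound in soft tissue (~1540 m/s)
    """
    return 2.0 * v_blood_m_s * f0_hz * math.cos(math.radians(angle_deg)) / c_tissue_m_s

def doppler_audio(velocities_m_s, sample_rate=8000, samples_per_velocity=80):
    """Render a sequence of velocity estimates as a tone whose pitch tracks
    the Doppler shift -- the audible 'whoosh' a clinical Doppler unit plays."""
    audio, phase = [], 0.0
    for v in velocities_m_s:
        f = doppler_shift_hz(v)
        for _ in range(samples_per_velocity):
            phase += 2.0 * math.pi * f / sample_rate
            audio.append(math.sin(phase))
    return audio

# A 1 m/s velocity at these assumed parameters lands in the audible band (~3.2 kHz):
shift = doppler_shift_hz(1.0)
```

Because the shift for typical arterial velocities falls in the low kilohertz range, the waveform can be sent to the sound output device without further frequency translation.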
- the wearable device 20 also includes one or more outer sensors 26 disposed on an outer surface 22 b of the band 22 .
- the outer sensor 26 may be the same type of sensor as the inner sensor 24 .
- the outer sensor 26 is configured to pick up sounds generated by the person “P” including, but not limited to, movement, respiratory, and other physiological sounds.
- the sensors 24 and 26 are coupled to a processing device 30 , which is shown as being attached to the band 22 .
- the processing device 30 may be a standalone device that is separated from the wearable device 20 .
- the sensors 24 and 26 may be coupled to the processing device 30 either through a wired or a wireless communication interface.
- the sensors 24 and 26 output sound waveform signals corresponding to various sounds generated by the person “P,” which are then processed by the processing device 30 .
- the sensors 24 and 26 may be incorporated into a housing of the processing device 30 , with the inner sensor 24 disposed on an inner surface of the processing device 30 and the outer sensor 26 disposed on an outer surface of the processing device 30 .
- the processing device 30 includes a controller 31 , which may be any suitable processor (e.g., control circuit) adapted to perform the operations, calculations, and/or set of instructions described in the present disclosure including, but not limited to, a hardware processor, a field programmable gate array (FPGA), a digital signal processor (DSP), a central processing unit (CPU), a microprocessor, and combinations thereof.
- the system 10 may also include the computing device 40 ( FIG. 1 ), such as a handheld device having a touchscreen and an application for communicating with the processing device 30 thereby replicating the functionality of the user input device 33 and/or other functions of the processing device 30 described herein.
- the computing device 40 may communicate directly with the wearable device 20 , obviating the need for the processing device 30 . It is envisioned that various computing environments and architectures may be implemented to provide for measuring, recording, and processing of sounds generated by the person “P” for real-time or subsequent playback on the sound output device 50 .
- the controller 31 may also include a memory device, which may include one or more of volatile, non-volatile, magnetic, optical, or electrical media, such as read-only memory (ROM), random access memory (RAM), electrically-erasable programmable ROM (EEPROM), non-volatile RAM (NVRAM), flash memory, or any other standard processor and memory component known in the art.
- the processing device 30 further includes a wireless interface 32 , which may include an antenna and any other suitable transceiver circuitry configured to communicate with external devices (e.g., sensors 24 and 26 ) using wireless communication protocols.
- Wireless communication may be achieved via one or more wireless configurations, e.g., radio frequency, optical, Wi-Fi, ANT+, BLUETOOTH® (an open wireless protocol for exchanging data over short distances using short-wavelength radio waves from fixed and mobile devices, creating personal area networks (PANs)), ZIGBEE® (a specification for a suite of high-level communication protocols using small, low-power digital radios based on the IEEE 802.15.4-2003 standard for wireless personal area networks (WPANs)), and the like.
- the processing device 30 may also include a user input device 33 , which may include a display, i.e., a touchscreen, and/or one or more buttons, which allow for the user to control the processing device 30 .
- the processing device 30 further includes a waveform processing circuit 34 , which may include discrete components or may be configured as a single circuit.
- the waveform processing circuits 34 may be analog or digital and may be embodied in the controller 31 as hardware or software components.
- the sound waveform signal may be digitized and processed using any suitable method, such as Fourier transform algorithms.
- the processing device 30 may include any suitable electronic components, such as analog-to-digital (A/D) converters configured to digitize the sound waveform signal.
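As an editorial sketch only (the 8-bit resolution, the 1 kHz sample rate, and all names below are assumptions, not from the disclosure), digitizing a sensor signal and inspecting its frequency content might look like:

```python
import cmath, math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform; returns magnitude per frequency bin.
    Fine for short illustrative frames (an FFT would be used in practice)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

# Simulate an 8-bit A/D conversion of a 100 Hz physiological tone at 1 kHz.
sample_rate, n = 1000, 200
analog = [math.sin(2 * math.pi * 100 * t / sample_rate) for t in range(n)]
digital = [round((x + 1.0) / 2.0 * 255) for x in analog]   # A/D: map [-1, 1] to 0..255
centred = [(d / 255.0) * 2.0 - 1.0 for d in digital]       # back to [-1, 1] for analysis
spectrum = dft_magnitudes(centred)
peak_bin = max(range(1, len(spectrum)), key=lambda k: spectrum[k])
peak_hz = peak_bin * sample_rate / n                       # recovers the 100 Hz tone
```

The dominant bin identifies the tone's frequency, which is the kind of spectral information the filtering stages below would act on.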
- One of the waveform processing circuits 34 may be a filtering circuit configured to block and/or pass certain frequencies.
- the filtering circuit may include one or more of the following filters: high pass, low pass, band pass, notch filters and/or digital equivalents thereof.
- the filtering circuit may be configured to remove ambient sounds (e.g., voices) from detected sound waveforms.
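A minimal sketch of such filtering, using first-order stages for brevity (the cutoffs and names are editorial assumptions; an actual device would likely use higher-order filter designs):

```python
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    # Standard one-pole smoothing coefficient for the given cutoff.
    return math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def band_pass(samples, low_hz, high_hz, sample_rate):
    """Crude band-pass: a one-pole low-pass at high_hz cascaded with a
    one-pole high-pass at low_hz. Real filtering circuits would use
    higher-order IIR/FIR designs, but the pass/block idea is the same."""
    a_lp = one_pole_coeff(high_hz, sample_rate)
    a_hp = one_pole_coeff(low_hz, sample_rate)
    out, lp, hp_prev_in, hp = [], 0.0, 0.0, 0.0
    for x in samples:
        lp = (1.0 - a_lp) * x + a_lp * lp        # low-pass stage
        hp = a_hp * (hp + lp - hp_prev_in)       # high-pass stage
        hp_prev_in = lp
        out.append(hp)
    return out
```

Passing a 20-200 Hz band, for instance, strongly attenuates a 2 kHz ambient tone while leaving a 50 Hz pulse-like component largely intact.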
- the filtered sound waveform signal may also be amplified through an amplifier such that the sound is output at the supraphysiological level. The amplitude may be adjusted by the user through the user input device 33 .
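A hedged sketch of the amplification step (the 12 dB default and the soft-clipping choice are editorial assumptions, not from the disclosure; in the described system the user would set the gain via the user input device 33):

```python
import math

def amplify_supraphysiological(samples, gain_db=12.0):
    """Boost a recorded physiological waveform above its recorded level.
    gain_db is an illustrative, user-adjustable value; tanh soft-clipping
    keeps the boosted signal inside the [-1, 1] range expected by a DAC,
    avoiding harsh digital clipping on loud peaks."""
    gain = 10.0 ** (gain_db / 20.0)   # 12 dB is roughly a 4x amplitude boost
    return [math.tanh(gain * x) for x in samples]
```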
- the processing device 30 also includes storage 35 for storing recorded sound waveforms as sound files for subsequent playback through the sound output device 50 .
- the processing device 30 may output sound waveforms through the sound output device 50 either in real time or playing back previously recorded sound waveforms.
- the storage 35 may include a database of various sounds previously recorded by the sensors 24 and 26 . Recorded sounds may be categorized based on the source of the sound. Thus, sounds recorded by the inner sensors 24 of the wearable device 20 disposed on the wrist provide vascular (i.e., blood flow) sounds. Similarly, the inner sensors 24 of the wearable device 20 disposed on the chest provide cardiovascular (i.e., heartbeat), respiratory, and digestion sounds.
- the outer sensors 26 provide movement sounds, vocal sounds, ambient room sounds, as well as respiratory sounds.
- storage banks may be categorized by the type of sound, including, but not limited to, a vascular bank, a cardiac bank, a respiratory bank, a digestion bank, a movement bank, and a miscellaneous bank.
- the person “P” may play back the sounds and manually sort and/or categorize the recorded sound using the user input device 33 .
- sortation and identification of the sounds may be done automatically by the processing device 30 and/or the computing device 40 using machine learning.
- Part of the identification process may include determining whether the sound waveforms meet certain criteria, i.e., if the amplitude and/or resolution of the recorded sound waveform is sufficient, such that during playback, the sound is clear. It is envisioned that there may be an ongoing training of the identification process to automatically identify the sounds using artificial intelligence.
- artificial intelligence techniques may include, but are not limited to, neural networks, convolutional neural networks (CNN), recurrent neural networks (RNN), generative adversarial networks (GAN), Bayesian regression, naive Bayes, nearest neighbors, least squares, means, and support vector regression, among other data science and artificial intelligence techniques.
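As a deliberately simplified editorial stand-in for the neural-network approaches listed above (the features, training data, and names are assumptions, not from the disclosure), a nearest-centroid categorizer illustrates the sort-into-banks flow:

```python
import math

def features(samples, sample_rate):
    """Two crude descriptors: RMS energy and zero-crossing rate. Real
    systems would likely feed spectrograms to a CNN/RNN; this only
    demonstrates the categorize-then-bank idea."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / len(samples)
    return (rms, zcr * sample_rate)   # zcr scaled to crossings per second

def train_centroids(labelled_clips, sample_rate):
    """labelled_clips: {bank_name: [clip, ...]}; returns the mean feature
    vector per storage bank, standing in for a trained model."""
    centroids = {}
    for bank, clips in labelled_clips.items():
        feats = [features(c, sample_rate) for c in clips]
        centroids[bank] = tuple(sum(f[i] for f in feats) / len(feats) for i in (0, 1))
    return centroids

def categorize(clip, centroids, sample_rate):
    # Assign the clip to the storage bank with the nearest feature centroid.
    f = features(clip, sample_rate)
    return min(centroids, key=lambda b: math.dist(f, centroids[b]))
```

For example, slow (low zero-crossing) vascular clips and faster respiratory clips separate cleanly on these two features, so a new clip is routed to the matching storage bank.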
- a neural network may be used to train the processing device 30 and/or the computing device 40 .
- the neural network may include a temporal convolutional network, with one or more fully connected layers, or a feed forward network.
- training of the neural network may happen on a separate system, e.g., graphic processor unit (“GPU”) workstations, high performing computer clusters, etc., and the trained algorithm would then be deployed on the processing device 30 .
- training of the neural networks may happen locally, e.g., on the processing device 30 and/or the computing device 40 .
- the processing device 30 may include a software application that is executable by the controller 31 to identify and sort various recorded sounds into corresponding storage banks.
- the user input device 33 is configured to display a graphical user interface (GUI) 60 ( FIG. 4 ).
- the GUI 60 may include a plurality of buttons 62 a - e providing a user with the ability to control the system 10 and a status indicator 64 providing status of the system 10 .
- the indicator 64 may use color and other indicia to provide status for each of the components of the system 10 , namely, the wearable device 20 , the processing device 30 , the computing device 40 , and the sound output device 50 .
- the indicator 64 may also display type and condition of the signal being received by the processing device 30 .
- the type indicator may include a descriptor, i.e., cardiac, breathing, or any other category described above, and the status indicator may include a strength of the signal, i.e., whether the signal is adequate.
- the button 62 a is used to adjust the amount of ambient sound that is removed, i.e., filtered, in the output. This may be done via a slider interface allowing the user to input a percentage or other value indicative of the amount of ambient sound being removed from the output.
- the button 62 b may be used to adjust operation of the system 10 , i.e., enabling specific sensors 24 and 26 , setting type of the sound output device 50 , etc.
- the button 62 c allows the user to configure and set the system 10 for real-time transmission of sounds recorded by the sensors 24 and 26 to the sound output device 50 .
- the button 62 d allows the user to access the storage banks having prerecorded sounds and select one or more of the sounds for playback through the sound output device 50 .
- the button 62 e is used to select between playback types.
- Playback type may include a cycling mode, in which the sound output device 50 cycles through different sound banks or individual sounds within a specific sound bank.
- Another mode may be a continuous mode in which one or more sounds are looped continuously until ended.
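The two playback types can be sketched as follows (the bank names, sound identifiers, and the bounded `cycles` parameter are editorial assumptions for demonstration; a real device would stream until the user stops playback):

```python
import itertools

def playback_plan(banks, mode, cycles=2):
    """Yield (bank, sound_id) pairs in the order they would be sent to the
    output device. 'cycling' steps through the banks round-robin;
    'continuous' loops a single selection."""
    if mode == "cycling":
        order = [(bank, sid) for bank, sounds in banks.items() for sid in sounds]
        return list(itertools.islice(itertools.cycle(order), cycles * len(order)))
    if mode == "continuous":
        bank, sounds = next(iter(banks.items()))
        return [(bank, sounds[0])] * cycles
    raise ValueError(f"unknown playback mode: {mode!r}")

banks = {"vascular": ["wrist_01"], "cardiac": ["chest_03"]}
plan = playback_plan(banks, "cycling")
# plan alternates vascular and cardiac sounds for two full cycles
```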
- the person “P” attaches one or more wearable devices 20 to suitable locations on the body, i.e., chest and/or wrist.
- the person “P” also pairs the sound output device 50 to the processing device 30 and/or the computing device 40 .
- the processing device 30 may be also paired to the computing device 40 to enable communication with the application running on the computing device 40 .
- the processing device 30 is configured to output the sounds based on the options selected through the GUI 60 as described above.
- the sound output is based on the selections made by the person “P” through the GUI 60 , such as which sounds to output and the output mode, i.e., cycling vs. continuous. More specifically, the output device 50 may be instructed to output one of the sounds or a plurality of the sound waveforms simultaneously.
- the processing device 30 and/or the computing device 40 may overlay and/or mix multiple waveforms, namely, sounds from multiple storage banks, to output a combined sound waveform. The user may adjust the amplitude of each of the sound waveforms individually, allowing for tailoring of the combined masking sound.
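A minimal sketch of the overlay/mix step with one user-set gain per source (the normalization choice and names are editorial assumptions):

```python
def mix_waveforms(waveforms, gains):
    """Weighted sum of equal-length sound waveforms, one user-adjustable
    gain per storage-bank source, normalised so the combined masking
    sound cannot exceed the [-1, 1] output range."""
    if not waveforms or len(waveforms) != len(gains):
        raise ValueError("need one gain per waveform")
    total_gain = sum(abs(g) for g in gains) or 1.0
    return [sum(g * w[i] for g, w in zip(gains, waveforms)) / total_gain
            for i in range(len(waveforms[0]))]

# Emphasise a vascular waveform 3:1 over a respiratory waveform (hypothetical data).
mixed = mix_waveforms([[1.0, -1.0], [1.0, 1.0]], gains=[3.0, 1.0])
```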
- the controller 31 is also configured to automatically configure and adjust operation of the wearable device 20 , and in particular, the sensors 24 and 26 , as well as the sound output device 50 . More specifically, the controller 31 is configured to automatically toggle one or more of the sensors 24 and 26 .
- the controller 31 may analyze the combined sound output to determine whether a certain component of the combined sound waveform is to be increased and activate the sensor(s) 24 or 26 to provide additional sources of that sound. Conversely, the controller 31 may deactivate certain sensors 24 or 26 to lessen the amount of some components of the combined sound output. Activation or deactivation of the sensors 24 and 26 by the controller 31 further finetunes the sounds being played to mask the tinnitus.
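The activate/deactivate logic described above might be sketched as follows (the RMS thresholds and names are illustrative assumptions, not values from the disclosure):

```python
def plan_sensor_toggles(component_rms, low=0.05, high=0.8):
    """Given the RMS contribution of each sound component to the combined
    output, decide which sensors to enable (component too quiet) or
    disable (component dominating). Returns {component: True/False/None},
    where None means leave the corresponding sensors as-is."""
    decisions = {}
    for name, rms in component_rms.items():
        if rms < low:
            decisions[name] = True     # activate sensors to add this source
        elif rms > high:
            decisions[name] = False    # deactivate sensors to reduce it
        else:
            decisions[name] = None     # within the target band; no change
    return decisions

decisions = plan_sensor_toggles({"vascular": 0.02, "cardiac": 0.9, "respiratory": 0.3})
```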
- the controller 31 is further configured to adjust the waveform processing circuit 34 to modify the filtering being performed on the sound waveform.
- filtering may be used to remove ambient sounds and other sounds that are less suitable for masking tinnitus. Reproduction and amplification of physiological sounds allow the patient “P” to be in tune with their physiological processes and to be distracted from tinnitus.
- the controller 31 controls preprocessing, i.e., by selecting which sensors 24 and 26 are used, as well as postprocessing, i.e., filtering detected sounds, to tailor the sound played through the sound output device 50 .
- the system 10 may also include additional devices configured to couple to the processing device 20 and/or computing device 16 .
- the system 10 may include a transducer assembly 70 having a housing 72 , which may enclose the sensor 24 as well as other components of the processing device 20 , such as the sensor 26 , a driver circuit, a transmitter, etc.
- the transducer assembly 70 may include a post 74 or other attachment means configured to secure the transducer assembly 70 to a band worn around the patient's wrist (e.g., watchband of an Apple Watch®).
- the transducer assembly 70 may be positioned on the underside of the watchband, such that the transducer assembly 70 is positioned over the artery.
Abstract
A system for masking a perceived sound includes a wearable device disposed on a person, the wearable device including a plurality of sensors, each of which is configured to output a sound waveform in response to sounds generated by physiological activity of the person. The system also includes a processing device coupled to the plurality of sensors and configured to process the sound waveforms. The system further includes a sound output device coupled to the processing device, the sound output device being configured to output the sound waveforms to mask a perceived sound.
Description
- The present application claims the benefit of and priority to U.S. Provisional Application No. 63/139,512, filed on Jan. 20, 2021. The entire disclosure of the foregoing application is incorporated by reference herein.
- Tinnitus is an auditory perception of sound without an external source. An estimated 50 million Americans suffer from tinnitus. There are very few available treatments for tinnitus. For many, tinnitus is perceived as a ringing sound, while for others, it is perceived as whistling, buzzing, chirping, hissing, and/or humming. The sound may seem to come from one ear or both, from inside the head, or from a distance. Tinnitus may also be constant or intermittent. Thus, there is a need for a system and method to mask tinnitus to provide relief for millions of people suffering from this condition.
- The present disclosure provides a system and method for masking tinnitus, including continuous or intermittent tinnitus. The system includes one or more input devices configured to capture normal physiological sounds of a person wearing the input device(s). Physiological sounds that are recorded include, but are not limited to, blood flow, heartbeat, respiration, digestion, etc. The system also includes one or more output devices configured to play back the physiological sounds at supraphysiological levels to provide the patient suffering from tinnitus with more "normal" sounds. As used herein, the term "supraphysiological level" denotes an amplified level that is above the amplitude of the physiological sound when the sound was recorded. These sounds allow the patient to focus on the physiological sounds, thereby masking tinnitus. The system according to the present disclosure may also be used to help any person suffering from a debilitating state where focus on the "normal" body sounds could be helpful (e.g., meditation training, post-traumatic stress disorder, combat stress, anxiety, insomnia, etc.).
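To make the "supraphysiological level" defined above concrete: playback amplitude must exceed the recorded amplitude, i.e., a gain greater than one. A minimal sketch in Python (the function name and gain value are illustrative assumptions, not part of the disclosure):

```python
def to_supraphysiological(samples, gain=3.0):
    """Scale a recorded physiological sound so that playback is louder
    than the level at which the sound was recorded (requires gain > 1)."""
    if gain <= 1.0:
        raise ValueError("supraphysiological playback requires gain > 1")
    return [gain * s for s in samples]

# A quiet heartbeat trace, played back three times louder:
loud = to_supraphysiological([0.02, -0.05, 0.04], gain=3.0)
```

In a real device the gain would be bounded by the output hardware and by safe listening levels; the scalar multiply is only the defining relationship.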
- According to one embodiment of the present disclosure, a system for masking a perceived sound is disclosed. The system includes a wearable device disposed on a person, the wearable device including a plurality of sensors, which may be acoustic or ultrasound sensors, each of which is configured to output a sound waveform in response to sounds generated by physiological activity of the person. The system also includes a processing device coupled to the plurality of sensors and configured to process the sound waveforms. The system further includes a sound output device coupled to the processing device, the sound output device being configured to output the sound waveforms to mask a perceived sound.
- Implementations may include one or more of the following features. According to one aspect of the above embodiment, the perceived sound is continuous tinnitus. The wearable device may include a band. The band may be formed from an elastic material configured to induce arterial stenosis, thereby increasing blood flow turbulence. In another embodiment, the wearable device may have an ultrasound sensor that uses the Doppler effect to measure the blood flow and transmit a sound waveform derived from the blood flow measurements. The plurality of sensors includes at least one inner sensor disposed on an inner surface of the band and configured to measure sound generated by the blood flow or blood flow turbulence. The plurality of sensors includes at least one outer sensor disposed on an outer surface of the band and configured to measure external sounds. The sounds generated by the physiological activity of the person include at least one of a vascular sound, a cardiac sound, a respiratory sound, and a digestion sound. The processing device may be further configured to categorize the sound waveforms generated by the physiological activity and to store the sound waveforms as sound files in corresponding storage banks. The processing device further includes a user input device configured to display a graphical user interface. The graphical user interface is configured to enable selection of at least one of the sound files for output through the sound output device. The processing device is further configured to mix the sound waveforms. The sound output device may be a headphone, a cochlear implant, or a hearing aid.
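The categorize-and-store feature above amounts to routing each recorded sound file into a named storage bank and letting the user select from a bank for playback. A sketch under assumed names (`SoundLibrary`, `store`, and `select` are illustrative; the bank names follow the disclosure):

```python
from collections import defaultdict

BANKS = {"vascular", "cardiac", "respiratory", "digestion", "movement", "miscellaneous"}

class SoundLibrary:
    """Per-category storage banks for recorded physiological sound files."""

    def __init__(self):
        self._banks = defaultdict(list)

    def store(self, category, sound_file):
        # Sounds with an unrecognized category fall into the miscellaneous bank.
        bank = category if category in BANKS else "miscellaneous"
        self._banks[bank].append(sound_file)
        return bank

    def select(self, category):
        """Return the sound files available for playback from one bank."""
        return list(self._banks[category])

library = SoundLibrary()
library.store("cardiac", "heartbeat_01.wav")
library.store("unknown", "rustle_07.wav")  # routed to "miscellaneous"
```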
- According to one embodiment of the present disclosure, a method for masking a perceived sound is disclosed. The method includes placing a wearable device on a person, the wearable device including a plurality of sensors. The method also includes generating a sound waveform at each sensor of the plurality of sensors in response to sounds generated by physiological activity of the person. The method further includes processing the sound waveforms at a processing device and outputting the sound waveforms through a sound output device coupled to the processing device to mask a perceived sound.
- Implementations may include one or more of the following features. According to one aspect of the above embodiment, the wearable device includes a band formed from an elastic material configured to induce arterial stenosis, thereby increasing blood flow turbulence. According to another aspect of the above embodiment, the wearable device may include an adjustable band where the inner transducer is an ultrasound transducer with the ability to measure blood flow using the Doppler effect and transmit the sound waveform of the normal blood flow in the absence of turbulence. The plurality of sensors includes at least one inner sensor disposed on an inner surface of the band and configured to measure sound generated by the blood flow or blood flow turbulence. The plurality of sensors includes at least one outer sensor disposed on an outer surface of the band and configured to measure external sounds. The sounds generated by the physiological activity of the person include at least one of a cardiovascular sound, a respiratory sound, or a digestion sound. The method may further include categorizing the sounds generated by physiological activity of the person and storing the categorized sounds as sound files in corresponding storage banks. The method may further include mixing the sound waveforms into a combined sound output, or separating overlapping sound waves into individual component waveforms divided by sound source by matching the waveform to a sound database. The sound output device may be one of a headphone, a cochlear implant, or a hearing aid.
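For the Doppler-based variant above, the measured frequency shift relates to blood velocity through the standard Doppler equation v = (Δf · c) / (2 · f₀ · cos θ). A sketch (the function name and the example carrier frequency and beam angle are illustrative assumptions, not values from the disclosure):

```python
import math

def doppler_velocity(shift_hz, carrier_hz, angle_deg, c_tissue=1540.0):
    """Blood velocity in m/s from a Doppler frequency shift.

    c_tissue is the approximate speed of sound in soft tissue (1540 m/s);
    the factor of 2 accounts for the ultrasound round trip to the vessel.
    """
    return (shift_hz * c_tissue) / (
        2.0 * carrier_hz * math.cos(math.radians(angle_deg)))

# e.g., a 1.3 kHz shift on a 5 MHz carrier at a 60-degree beam angle:
velocity = doppler_velocity(1300.0, 5.0e6, 60.0)  # about 0.4 m/s
```

The velocity estimate (or the shift itself, pitch-scaled into the audible range) is what would be rendered as the blood flow sound waveform.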
- Embodiments of the present disclosure are described herein with reference to the accompanying drawings, wherein:
-
FIG. 1 is a schematic diagram of a system for masking tinnitus according to one embodiment of the present disclosure; -
FIG. 2 is a perspective view of a wearable device for receiving sounds according to one embodiment of the present disclosure; -
FIG. 3 is a schematic diagram of a processing device of the system of FIG. 2 according to one embodiment of the present disclosure; -
FIG. 4 is a diagram of a graphical user interface of the processing device of FIG. 3 according to one embodiment of the present disclosure; and -
FIG. 5 is a perspective view of a wearable transducer assembly according to one embodiment of the present disclosure. - Embodiments of the present disclosure are described in detail with reference to the drawings, in which like reference numerals designate identical or corresponding elements in each of the several views.
FIG. 1 shows a system 10 for generating sounds to mask tinnitus and other distracting or debilitating sounds. The system 10 includes one or more wearable devices 20 having one or more sensors that are connected to a processing device 30 (FIG. 2), which in turn is connected to a sound output device 50. In embodiments, the processing device 30 may be a computing device 40, i.e., a tablet or a mobile phone. In further embodiments, the computing device 40 may be used in conjunction with the processing device 30. The sound output device 50 may be any suitable headphone or earpiece disposed in and/or over the ear of the patient "P." The sound output device 50 may also be a cochlear implant or any other hearing aid. The sound output device 50 outputs live or prerecorded sounds received by the wearable device 20, such as the person's blood flow sounds, breathing sounds, heartbeat, digestion sounds, and other sounds generated by physiological activity of the body of the person "P" to mask tinnitus. - With reference to
FIGS. 1 and 2, the wearable device 20 may be worn at one or more locations around the body or a limb of a person "P", such as a wrist, ankle, chest, etc. The wearable device 20 may be attached to the person "P" using a band 22 or an adhesive bandage (not shown), such that the wearable device 20 is in physical contact with the person "P" allowing for measurement of sounds generated by the person "P." - When the
wearable device 20 is worn around the wrist, the band 22 may be formed from an elastic material, such as silicone, rubber, combinations thereof, or any other suitable stretchable elastomer. The band 22 is fitted about the wrist to induce arterial stenosis, thereby generating blood flow turbulence to enhance sound generation associated with the blood flow. When the wearable device 20 is worn around the chest, any suitable strap may be used, such as an adjustable and/or an elastic strap. The band 22 may be formed as a single strip. In embodiments, the band 22 may be formed from one or more strips or filaments woven in any suitable pattern. - The
wearable device 20 includes one or more inner sensors 24 disposed on an inner surface 22a (i.e., the surface directly in contact with the person "P") of the band 22. The inner sensor 24 may be a sensor configured to measure sounds generated within the person "P." The inner sensor 24 may be a microphone or any other type of acoustic transducer configured to measure sound, such as a flexible membrane transducer, a micro-electromechanical systems (MEMS) microphone, an electret diaphragm microphone, or any other microphone. When the wearable device 20 is worn around the wrist, the inner sensor 24 picks up sounds generated by the blood flow, which are accentuated by the compression of the band 22. When the wearable device 20 is worn around the chest, the inner sensor 24 picks up sounds generated by the heart, digestive system, and respiratory system of the person "P." - According to another embodiment, the
inner sensor 24, i.e., when the wearable device 20 is worn around the wrist, may be an ultrasound device configured to measure the blood flow and, in the absence of turbulence, present the information as a sound waveform using the Doppler effect or any other suitable technique. The inner sensor 24 may also be any other suitable transducer, such as an optical transducer, capable of measuring normal blood flow and transmitting blood flow sounds in the absence of turbulence. - The
wearable device 20 also includes one or more outer sensors 26 disposed on an outer surface 22b of the band 22. The outer sensor 26 may be the same type of sensor as the inner sensor 24. The outer sensor 26 is configured to pick up sounds generated by the person "P" including, but not limited to, movement, respiratory, and other physiological sounds. - The
sensors 24 and 26 are coupled to the processing device 30, which is shown as being attached to the band 22. In embodiments, the processing device 30 may be a standalone device that is separated from the wearable device 20. The sensors 24 and 26 may be coupled to the processing device 30 either through a wired or a wireless communication interface. The sensors 24 and 26 may also be integrated with the processing device 30. In further embodiments, the sensors 24 and 26 may be disposed on the processing device 30—with the inner sensor 24 disposed on an inner surface of the processing device 30 and the outer sensor 26 disposed on an outer surface of the processing device 30. - With reference to
FIG. 3, the processing device 30 includes a controller 31, which may be any suitable processor (e.g., control circuit) adapted to perform the operations, calculations, and/or set of instructions described in the present disclosure including, but not limited to, a hardware processor, a field programmable gate array (FPGA), a digital signal processor (DSP), a central processing unit (CPU), a microprocessor, and combinations thereof. Those skilled in the art will appreciate that the processor may be substituted by any logic processor (e.g., control circuit) adapted to execute the algorithms, calculations, and/or set of instructions described herein. - The
system 10 may also include the computing device 40 (FIG. 1), such as a handheld device having a touchscreen and an application for communicating with the processing device 30, thereby replicating the functionality of the user input device 33 and/or other functions of the processing device 30 described herein. In embodiments, the computing device 40 may communicate directly with the wearable device 20, obviating the need for the processing device 30. It is envisioned that various computing environments and architectures may be implemented to provide for measuring, recording, and processing of sounds generated by the person "P" for real-time or subsequent playback on the sound output device 50. - The
controller 31 may also include a memory device, which may include one or more of volatile, non-volatile, magnetic, optical, or electrical media, such as read-only memory (ROM), random access memory (RAM), electrically-erasable programmable ROM (EEPROM), non-volatile RAM (NVRAM), or flash memory. The controller 31 and the memory device may be any standard processor and memory component known in the art. - The
processing device 30 further includes a wireless interface 32, which may include an antenna and any other suitable transceiver circuitry configured to communicate with external devices (e.g., sensors 24 and 26) using wireless communication protocols. Wireless communication may be achieved via one or more wireless configurations, e.g., radio frequency, optical, Wi-Fi, ANT+, BLUETOOTH® (an open wireless protocol for exchanging data over short distances, using short-wavelength radio waves, from fixed and mobile devices, creating personal area networks (PANs)), ZIGBEE® (a specification for a suite of high-level communication protocols using small, low-power digital radios based on the IEEE 802.15.4-2003 standard for wireless personal area networks (WPANs)), and the like. The processing device 30 may also include a user input device 33, which may include a display, i.e., a touchscreen, and/or one or more buttons, which allow the user to control the processing device 30. - The
processing device 30 further includes a waveform processing circuit 34, which may include discrete components or may be configured as a single circuit. The waveform processing circuits 34 may be analog or digital and may be embodied in the controller 31 as hardware or software components. The sound waveform signal may be digitized and processed using any suitable method, such as Fourier transform algorithms. The processing device 30 may include any suitable electronic components, such as analog-to-digital (A/D) converters configured to digitize the sound waveform signal. - One of the
waveform processing circuits 34 may be a filtering circuit configured to block and/or pass certain frequencies. The filtering circuit may include one or more of the following filters: high pass, low pass, band pass, notch filters, and/or digital equivalents thereof. The filtering circuit may be configured to remove ambient sounds (e.g., voices) from detected sound waveforms. The filtered sound waveform signal may also be amplified through an amplifier such that the sound is output at the supraphysiological level. The amplitude may be adjusted by the user through the user input device 33. - The
processing device 30 also includes storage 35 for storing recorded sound waveforms as sound files for subsequent playback through the sound output device 50. The processing device 30 may output sound waveforms through the sound output device 50 either in real time or by playing back previously recorded sound waveforms. The storage 35 may include a database of various sounds previously recorded by the sensors 24 and 26. The inner sensors 24 of the wearable device 20 disposed on the wrist provide vascular (i.e., blood flow) sounds. Similarly, the inner sensors 24 of the wearable device 20 disposed on the chest provide cardiovascular (i.e., heartbeat), respiratory, and digestion sounds. The outer sensors 26 provide movement sounds, vocal sounds, ambient room sounds, as well as respiratory sounds. Each of these sounds is stored in a corresponding storage bank that is accessible by the database. In particular, storage banks may be categorized by the type of sound, including, but not limited to, a vascular bank, a cardiac bank, a respiratory bank, a digestion bank, a movement bank, and a miscellaneous bank. - In addition to storing the sound waveforms based on the source sensor, the person "P" may play back the sounds and manually sort and/or categorize the recorded sound using the
user input device 33. In further embodiments, sorting and identification of the sounds may be done automatically by the processing device 30 and/or the computing device 40 using machine learning. Part of the identification process may include determining whether the sound waveforms meet certain criteria, i.e., whether the amplitude and/or resolution of the recorded sound waveform is sufficient, such that, during playback, the sound is clear. It is envisioned that there may be ongoing training of the identification process to automatically identify the sounds using artificial intelligence. - The terms "artificial intelligence," "data models," or "machine learning" may include, but are not limited to, neural networks, convolutional neural networks (CNN), recurrent neural networks (RNN), generative adversarial networks (GAN), Bayesian regression, naive Bayes, nearest neighbors, least squares, means, and support vector regression, among other data science and artificial intelligence techniques. - A neural network may be used to train the processing device 30 and/or the computing device 40. In various embodiments, the neural network may include a temporal convolutional network, with one or more fully connected layers, or a feed-forward network. In various embodiments, training of the neural network may happen on a separate system, e.g., graphics processing unit ("GPU") workstations, high-performance computer clusters, etc., and the trained algorithm would then be deployed on the processing device 30. In further embodiments, training of the neural networks may happen locally, e.g., on the processing device 30 and/or the computing device 40. After training, the processing device 30 may include a software application that is executable by the controller 31 to identify and sort various recorded sounds into corresponding storage banks. - With reference to
FIG. 4, an exemplary graphical user interface ("GUI") 60 is shown on a display of the user input device 33. The GUI 60 may include a plurality of buttons 62a-e providing a user with the ability to control the system 10 and a status indicator 64 providing status of the system 10. The indicator 64 may use color and other indicia to provide status for each of the components of the system 10, namely, the wearable device 20, the processing device 30, the computing device 40, and the sound output device 50. The indicator 64 may also display the type and condition of the signal being received by the processing device 30. The type indicator may include a descriptor, i.e., cardiac, breathing, or any other category described above, and the status indicator may include the strength of the signal, i.e., whether the signal is adequate. - The
button 62a is used to adjust the amount of ambient sound that is removed, i.e., filtered, in the output. This may be done via a slider interface allowing the user to input a percentage or other value indicative of the amount of ambient sound being removed from the output. The button 62b may be used to adjust operation of the system 10, i.e., enabling specific sensors 24 and 26, the sound output device 50, etc. The button 62c allows the user to configure and set the system 10 for real-time transmission of sounds recorded by the sensors 24 and 26 through the sound output device 50. The button 62d allows the user to access the storage banks having prerecorded sounds and select one or more of the sounds for playback through the sound output device 50. The button 62e is used to select between playback types. Playback type may include a cycling mode, in which the sound output device 50 cycles through different sound banks or individual sounds within a specific sound bank. Another mode may be a continuous mode in which one or more sounds are looped continuously until ended. - During initial setup of the
system 10, the person “P” attaches one or morewearable devices 20 to suitable locations on the body, i.e., chest and/or wrist. The person “P” also pairs thesound output device 50 to theprocessing device 30 and/or thecomputing device 40. In embodiments, where thecomputing device 40 is part of thesystem 10, theprocessing device 30 may be also paired to thecomputing device 40 to enable communication with the application running on thecomputing device 40. Once the initial setup is completed, theprocessing device 30 is configured to output the sounds based on the options selected through theGUI 60 as described above. - The sound output is based on the selections made by the person “P” through the
GUI 60, such as which sounds to output and the output mode, i.e., cycle vs. maintain. More specifically, theoutput device 50 may be instructed to output one of the sounds or a plurality of the sound waveforms simultaneously. In embodiments, theprocessing device 30 and/or thecomputing device 40 may overlay and/or mix multiple waveforms, namely, sounds from multiple storage banks, to output a combined sound waveform. The user may adjust the amplitude of each of the sound waveforms individually, allowing for tailoring of the combined masking sound. - The
controller 31 is also configured to automatically configure and adjust operation of the wearable device 20, and in particular, the sensors 24 and 26, as well as the sound output device 50. More specifically, the controller 31 is configured to automatically toggle one or more of the sensors 24 and 26. The controller 31 may analyze the combined sound output to determine whether a certain component of the combined sound waveform is to be increased and activate the sensor(s) 24 or 26 to provide additional sources of that sound. Conversely, the controller 31 may deactivate certain sensors 24 or 26 to lessen the amount of some components of the combined sound output. Activation or deactivation of the sensors 24 and 26 by the controller 31 further fine-tunes the sounds being played to mask the tinnitus. - The
controller 31 is further configured to adjust the waveform processing circuit 34 to modify the filtering being performed on the sound waveform. As described above, filtering may be used to remove ambient sounds and other sounds that are less suitable for masking tinnitus. Reproduction and amplification of physiological sounds allow the patient "P" to be in tune with their physiological processes and to be distracted from tinnitus. Thus, the controller 31 controls preprocessing, i.e., by selecting which sensors 24 and 26 are used, as well as postprocessing, i.e., filtering detected sounds, to tailor the sound played through the sound output device 50. - With reference to
FIG. 5, the system 10 may also include additional devices configured to couple to the processing device 30 and/or the computing device 40. In particular, the system 10 may include a transducer assembly 70 having a housing 72, which may enclose the sensor 24 as well as other components of the processing device 30, such as the sensor 26, a driver circuit, a transmitter, etc. The transducer assembly 70 may include a post 74 or other attachment means configured to secure the transducer assembly 70 to a band worn around the patient's wrist (e.g., the watchband of an Apple Watch®). The transducer assembly 70 may be positioned on the underside of the watchband, such that the transducer assembly 70 is positioned over the artery. - It will be appreciated that the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations or improvements may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims. Unless specifically recited in a claim, steps or components of claims should not be implied or imported from the specification or any other claims as to any particular order, number, position, size, shape, angle, or material.
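The processing chain described in the detailed description — filter the detected waveform, then overlay several physiological sources with individually adjustable gains — can be sketched as follows. The single-pole filters and all names here are simplifications assumed for illustration, not the actual design of the waveform processing circuit 34:

```python
import math

def low_pass(samples, fs, cutoff_hz):
    """Single-pole IIR low-pass filter (fs and cutoff_hz in Hz)."""
    k = 2.0 * math.pi * cutoff_hz / fs
    alpha = k / (k + 1.0)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def band_pass(samples, fs, low_hz, high_hz):
    """Crude band-pass: remove content below low_hz (input minus its
    low-passed copy), then remove content above high_hz."""
    hp = [x - l for x, l in zip(samples, low_pass(samples, fs, low_hz))]
    return low_pass(hp, fs, high_hz)

def mix(waveforms, gains):
    """Overlay sound waveforms with per-waveform gain, truncated to the
    shortest waveform, mirroring the per-source amplitude adjustment."""
    n = min(len(w) for w in waveforms)
    return [sum(g * w[i] for w, g in zip(waveforms, gains)) for i in range(n)]
```

Raising one entry in gains emphasizes that physiological component in the combined masking sound, which is the effect the per-waveform amplitude control on the GUI 60 is described as providing.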
Claims (20)
1. A system for masking a perceived sound, the system comprising:
a wearable device disposed on a person, the wearable device including a plurality of sensors, each of which is configured to output a sound waveform in response to sounds generated by a physiological activity of the person;
a processing device coupled to the plurality of sensors and configured to process the sound waveforms; and
a sound output device coupled to the processing device, the sound output device is configured to output the sound waveforms to mask a perceived sound.
2. The system according to claim 1 , wherein the perceived sound is continuous tinnitus.
3. The system according to claim 1 , wherein the wearable device includes a band.
4. The system according to claim 3 , wherein the band is formed from an elastic material configured to induce arterial stenosis thereby increasing blood flow turbulence.
5. The system according to claim 4 , wherein the plurality of sensors includes at least one inner sensor disposed on an inner surface of the band and configured to measure sound generated by the blood flow turbulence, and the at least one inner sensor is at least one of an acoustic sensor, an ultrasound sensor, or an optical sensor.
6. The system according to claim 4 , wherein the plurality of sensors includes at least one outer sensor disposed on an outer surface of the band and configured to measure external sounds, and the at least one outer sensor is an acoustic sensor.
7. The system according to claim 1, wherein the sounds generated by the physiological activity of the person include at least one of a vascular sound, a respiratory sound, and a digestion sound.
8. The system according to claim 7 , wherein the processing device is further configured to categorize the sound waveforms generated by the physiological activity and to store the sound waveforms as sound files in corresponding storage banks.
9. The system according to claim 8 , wherein the processing device further includes a user input device configured to display a graphical user interface.
10. The system according to claim 9 , wherein the graphical user interface is configured to enable selection of at least one of the sound files for output through the sound output device.
11. The system according to claim 1 , wherein the processing device is further configured to mix the sound waveforms.
12. The system according to claim 1, wherein the sound output device is at least one of a headphone, a cochlear implant, or a hearing aid.
13. A method for masking a perceived sound, the method comprising:
placing a wearable device on a person, the wearable device including a plurality of sensors;
generating a sound waveform at each sensor of the plurality of sensors in response to sounds generated by a physiological activity of the person;
processing at a processing device the sound waveforms; and
outputting the sound waveforms through a sound output device coupled to the processing device to mask a perceived sound.
14. The method according to claim 13 , wherein the wearable device includes a band formed from an elastic material configured to induce arterial stenosis thereby increasing blood flow turbulence.
15. The method according to claim 14 , wherein the plurality of sensors includes at least one inner sensor disposed on an inner surface of the band and configured to measure sound generated by the blood flow turbulence, and the at least one inner sensor is at least one of an acoustic sensor, an ultrasound sensor, or an optical sensor.
16. The method according to claim 15 , wherein the plurality of sensors includes at least one outer sensor disposed on an outer surface of the band and configured to measure external sounds, and the at least one outer sensor is an acoustic sensor.
17. The method according to claim 13 , wherein the sounds generated by the physiological activity of the person include at least one of a cardiovascular sound, a respiratory sound, or a digestion sound.
18. The method according to claim 17 , further comprising:
categorizing the sounds generated by the physiological activity of the person; and
storing the categorized sounds as sound files in corresponding storage banks.
19. The method according to claim 13 , further comprising:
mixing the sound waveforms into a combined sound output.
20. The method according to claim 13, wherein the sound output device is at least one of a headphone, a cochlear implant, or a hearing aid.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/273,305 US20230412996A1 (en) | 2021-01-20 | 2022-01-20 | System and method for masking tinnitus |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163139512P | 2021-01-20 | 2021-01-20 | |
PCT/US2022/013059 WO2022159543A1 (en) | 2021-01-20 | 2022-01-20 | System and method for masking tinnitus |
US18/273,305 US20230412996A1 (en) | 2021-01-20 | 2022-01-20 | System and method for masking tinnitus |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230412996A1 true US20230412996A1 (en) | 2023-12-21 |
Family
ID=82549731
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/273,305 Pending US20230412996A1 (en) | 2021-01-20 | 2022-01-20 | System and method for masking tinnitus |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230412996A1 (en) |
EP (1) | EP4280949A1 (en) |
WO (1) | WO2022159543A1 (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060093997A1 (en) * | 2004-06-12 | 2006-05-04 | Neurotone, Inc. | Aural rehabilitation system and a method of using the same |
US20070203416A1 (en) * | 2006-02-28 | 2007-08-30 | Andrew Lowe | Blood pressure cuffs |
CN101641967B (en) * | 2007-03-07 | 2016-06-22 | Gn瑞声达A/S | For depending on the sound enrichment of sound environment classification relief of tinnitus |
US10632278B2 (en) * | 2017-07-20 | 2020-04-28 | Bose Corporation | Earphones for measuring and entraining respiration |
2022
- 2022-01-20: US application US18/273,305 filed (published as US20230412996A1), status: Pending
- 2022-01-20: EP application EP22743141.8 filed (published as EP4280949A1), status: Pending
- 2022-01-20: WO application PCT/US2022/013059 filed (published as WO2022159543A1), status: Application Filing
Also Published As
Publication number | Publication date |
---|---|
EP4280949A1 | 2023-11-29 |
WO2022159543A1 | 2022-07-28 |
Similar Documents
Publication | Title |
---|---|
US11517708B2 | Ear-worn electronic device for conducting and monitoring mental exercises |
US9210517B2 | Hearing assistance device with brain computer interface |
US10685577B2 | Systems and methods for delivering sensory input during a dream state |
US9779751B2 | Respiratory biofeedback devices, systems, and methods |
US20200138399A1 | Wearable stethoscope patch |
US20200008708A1 | Ear-worn devices with deep breathing assistance |
US10213157B2 | Active unipolar dry electrode open ear wireless headset and brain computer interface |
CN113260300A | Fixed point gaze motion training system employing visual feedback and related methods |
EP3852857B1 | Biometric feedback as an adaptation trigger for active noise reduction and masking |
US20230412996A1 | System and method for masking tinnitus |
WO2021150148A1 | Heart monitoring system with wireless earbud |
CN113691917A | Hearing aid comprising a physiological sensor |
US20230181869A1 | Multi-sensory ear-wearable devices for stress related condition detection and therapy |
Kirchner et al. | Wearable system for measurement of thoracic sounds with a microphone array |
CA3208816A1 | System and method for soothing infants |
US20230390608A1 | Systems and methods including ear-worn devices for vestibular rehabilitation exercises |
Bhowmik et al. | Hear, now, and in the future: Transforming hearing aids into multipurpose devices |
Mehta et al. | Wireless Neck-Surface Accelerometer and Microphone on Flex Circuit with Application to Noise-Robust Monitoring of Lombard Speech |
US20230410782A1 | Cardiac and vascular noise cancellation for pulsatile tinnitus |
US20240090808A1 | Multi-sensory ear-worn devices for stress and anxiety detection and alleviation |
US11617514B2 | Hands free heart-beat audio transmitter and receiver system for exercise, meditation and relaxation |
CN111388003B | Flexible electronic auscultation device, body sound determination device and auscultation system |
US11191448B2 | Dynamic starting rate for guided breathing |
US20240000315A1 | Passive safety monitoring with ear-wearable devices |
US20220301685A1 | Ear-wearable device and system for monitoring of and/or providing therapy to individuals with hypoxic or anoxic neurological injury |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; assignor: AMANS, MATTHEW; reel/frame: 064322/0130; effective date: 2022-01-18 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |