US20150133716A1 - Hearing devices based on the plasticity of the brain - Google Patents
- Publication number: US20150133716A1
- Application number: US 14/076,237
- Authority: US (United States)
- Prior art keywords: frequencies, hearing, brain, frequency, heard
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A61N 2/002: Magnetotherapy in combination with another treatment
- A61N 2/02: Magnetotherapy using magnetic fields produced by coils, including single-turn loops or electromagnets
- A61N 7/02: Localised ultrasound hyperthermia
- A61N 1/36032
- A61N 2007/0004, 2007/0021, 2007/0026: Applications of ultrasound therapy; neural system treatment; stimulation of nerve tissue
- A61N 2007/0073: Ultrasound therapy using multiple frequencies
- A61N 2007/0086, 2007/0095: Beam steering; beam steering by modifying an excitation signal
- A61F 11/04: Methods or devices for enabling ear patients to achieve auditory perception through physiological senses other than the hearing sense, e.g. through the touch sense
- A61M 21/00: Devices or methods to cause a change in the state of consciousness, or to produce or end sleep, by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M 2021/0027, 2021/0044, 2021/0055: Such stimulus by the hearing sense; by the sight sense; with electric or electromagnetic fields
- A61M 2205/583, 2205/584: Means for facilitating use by visual feedback; visual feedback having a color code
- G02C 11/06: Hearing aids incorporated in spectacles
- H04R 25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; electric tinnitus maskers providing an auditory perception
- H04R 25/43: Electronic input selection or mixing based on input signal analysis, e.g. mixing or selection between microphone and telecoil or between microphones with different directivity characteristics
- H04R 2217/03: Parametric transducers where sound is generated or captured by the acoustic demodulation of amplitude-modulated ultrasonic waves
- H04R 3/005: Circuits for combining the signals of two or more microphones
Definitions
- The goal of this invention is to enable hearing-impaired people to "hear" unheard or badly heard sound frequencies, by training the brain to connect the auditory channel with the visual channel and to use stimulation of the eye to help the brain decipher language when the auditory channel by itself is at a loss due to missing frequencies.
- The "missing fundamental" effect consists in the brain determining the fundamental sound frequency after hearing harmonics of that fundamental, and substituting the missing fundamental when trying to decode a word.
- The joint processing of information between the auditory cortex and the visual cortex is illustrated by the "McGurk illusion", where a phoneme heard concurrently with a video of the mouth enunciating a different phoneme is interpreted by the brain as the phoneme seen in the video.
- This illusion shows that there are pathways between the visual cortex and the auditory cortex through which the two try to arrive at a common conclusion; in this case the phoneme accompanied by a picture of the mouth articulating it trumps the phoneme that reached only the auditory cortex.
- Using positron emission tomography, we report cerebral blood flow activity in profoundly deaf signers processing specific aspects of sign language in key brain sites widely assumed to be unimodal speech or sound processing areas: the left inferior frontal cortex when signers produced meaningful signs, and the planum temporale bilaterally when they viewed signs or meaningless parts of signs (sign-phonetic and syllabic units). Contrary to prevailing wisdom, the planum temporale may not be exclusively dedicated to processing speech sounds, but may be specialized for processing more abstract properties essential to language that can engage multiple modalities.
- the neural tissue involved in language processing may not be prespecified exclusively by sensory modality (such as sound) but may entail polymodal neural tissue that has evolved unique sensitivity to aspects of the patterning of natural language.
- Such neural specialization for aspects of language patterning appears to be neurally unmodifiable in so far as languages with radically different sensory modalities such as speech and sign are processed at similar brain sites, while, at the same time, the neural pathways for expressing and perceiving natural language appear to be neurally highly modifiable.
- There are testimonies of people who say that they "hear voices". These testimonies indicate that the brain is able to generate internal sounds similar to the sounds originating through the auditory channel.
- Our goal is to cause quasi-deaf people to “hear voices” generated mostly in the brain, by stimulating the brain “to put together” partial information received through the auditory channel with correlated information delivered through the visual channel and “GUESS” what was said.
- The present invention is a device that makes it possible to train the brain of a hearing-impaired person to correlate unheard or badly heard sound frequencies with "substitute frequencies", visual color sequences and pictures of the mouth enunciating said phonemes concurrently, while triggering the corresponding areas of the auditory and visual cortices simultaneously with focused ultrasound beams and magnetic stimulation.
- a multiplicity of resonant coils placed at strategic positions around the head may cumulatively deposit energy at selected regions of the brain, for example on the auditory cortex. We conjecture that depositing extra energy at the right moment will “cause” the brain to work harder and “decipher” speech with the substitute frequencies.
- the unheard or badly heard frequencies may be determined by taking audiograms of the ears and the “substitute frequencies” established during the training period.
- the substitute sound frequencies may be generated by a bone conduction (BC) speaker of a specific design shown in this application.
- The (BC) speakers transfer the vibrations to the skull, bypassing the outer ear and the middle ear, which may be damaged, and reach the cochlea in the inner ear.
- As the bandwidth of the (BC) speaker is narrower than that of the (AC) speaker, in cases where the causes of the hearing impairment are not clear it is advantageous to use the (AC) speaker plugged into the ear canal in addition to the (BC) transducer pressed onto the skull. Simultaneously with exciting the cochlea by a given sound frequency, the hearing-impaired person's eye is visually excited by a colored light of corresponding wavelength, such that a one-to-one correspondence is gradually established between the sound frequencies and the light wavelengths.
- The light excitation may be generated either by a miniature tricolored light source where the power of each LED is controlled, or by a colored display in front of the person's eyes. Simultaneously with the cochlea and eye excitation, the corresponding auditory and visual cortex areas that process the transduced electrical signals are also excited by twin external transcranial phased-array ultrasound beams that converge on the desired area. The ultrasound emitters are held in place against the cranium by one or more ratcheted bands around the head.
- An additional strategy for reinforcing the brain's interpretation of the “correct” frequency in the context of a word is to train it to correlate the unheard frequency with a substitute frequency that is “better” heard by the cochlea.
- The substitute frequencies for badly heard or unheard frequencies may be the higher-frequency harmonics, or sums of frequencies generated within a time window of less than 3 msec that the brain will interpret as one higher frequency.
- a translation “look-up-table” may be generated that translates the original speech frequencies detected by microphones to their “harmonics” or “time-squeezed” frequencies before delivering them to the bone conduction speaker or an audio speaker.
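As a concrete illustration, a minimal sketch of such a translation look-up table follows. The 1-4 kHz "well heard" band and the octave-doubling rule are assumptions drawn from FIG. 1 and the harmonic-substitution discussion; a real table would be built from the user's own audiogram.

```python
def substitute_frequency(f_hz: float, lo: float = 1000.0, hi: float = 4000.0) -> float:
    """Map a frequency outside the well-heard [lo, hi] band into it by octaves."""
    if lo <= f_hz <= hi:
        return f_hz               # already well heard: pass through unchanged
    f = f_hz
    while f < lo:                 # low frequencies: replace by an octave harmonic
        f *= 2.0
    while f > hi:                 # high frequencies: drop by octaves
        f /= 2.0
    return f

# Pre-computed translation table covering the 65 Hz - 16,744 Hz band of FIG. 1
lut = {f: substitute_frequency(float(f)) for f in range(65, 16745, 5)}
print(lut[270])   # 1080.0, consistent with the 270 Hz / 1080 Hz harmonic pairing below
```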
- The frequency training of the brain may be enhanced by phoneme training, which consists in pronouncing phonemes while simultaneously displaying the sequence of colors that are related to the sound frequencies.
- the training may be further enhanced by displaying the lips of a person pronouncing the phoneme.
- the components of the training system may be incorporated onto eyeglasses and a cap with a long visor worn by the trainee, where the system is controlled and managed by a smartphone.
- The extremely thin, flexible display monitor, connected by Bluetooth to the smartphone, lies at the front of the visor and may easily be flipped into a position in front of the eyeglasses, enabling the wearer to view colored images transmitted by the cellphone, for example in the "FaceTime" mode of the iPhone.
- The eyeglasses bows incorporate microphones at the front and back ends, enabling the system to assess the direction of incoming sound and reject surrounding noise, thus greatly improving speech understanding.
- A bone conduction transducer of our design, able to tailor sound sequences out of single frequencies, is incorporated at the back of each bow behind the ear, next to the mastoid bone.
- The bone conduction vibration transducer is able to generate single-frequency vibrations and thus enables the hearing-impaired person to measure his own "bone conduction audiogram".
- Various colored signals may be generated by a colored-light illuminator consisting of miniature (blue, green and red) LEDs controlled by a microprocessor that sets their relative intensities, which determine the combined color after mixing, and their absolute intensities, which determine the brightness of the resulting colored light.
- the illuminators may be incorporated in the front ends of the eyeglasses temples, in which case suitable mirrors direct the output light onto the eyes from the side; if both ears have the same hearing losses, the illuminator may be placed in the middle of the glasses frame and the colored light naturally observed by both eyes.
- A display viewable through the eyeglasses also makes it possible to correlate sounds with related images and thus improve hearing. Lip reading of a talking person viewed on the display of the cellphone in "FaceTime" mode may be transmitted wirelessly in real time to the display in front of the eyeglasses.
- Relatively large displays may be very thin and suspended from the front rim of the visor of a cap worn by the eyeglasses wearer.
- the display, the communication hardware and the antenna may be embedded in the rims of the cap as is the battery supplying the power.
- the second strategy we use for reducing noise is to follow speech components in time with a time resolution of 1-2 milliseconds and try to locate the natural “pauses” between phonemes, syllables and words.
- As noise is, with a high degree of probability, present both during "pauses" and during speech segments, subtracting the noise frequency amplitudes from the following speech frequencies, using a simple algorithm, improves the SNR during speech.
- This strategy is applicable to the sound detected by the microphones situated on the eyeglasses temples, to the microphone(s) of the smartphone, and to the bone conduction transducer operated as a microphone.
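A minimal sketch of this pause-based noise subtraction, assuming straightforward FFT-domain processing (the application does not specify the transform), might look as follows; the 10 ms frame length and the zero floor are illustrative choices.

```python
import numpy as np

def spectral_subtract(speech_frame: np.ndarray, noise_mag: np.ndarray) -> np.ndarray:
    """Subtract a noise magnitude spectrum, measured during a pause, from one frame."""
    spec = np.fft.rfft(speech_frame)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = np.maximum(mag - noise_mag, 0.0)      # floor at zero: no negative energy
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(speech_frame))

fs = 44_000                                # the 44 kHz sampling rate cited below
t = np.arange(fs // 100) / fs              # one 10 ms frame
noise = 0.1 * np.random.randn(len(t))
speech = np.sin(2 * np.pi * 1000 * t)      # a 1 kHz "speech" tone
noise_mag = np.abs(np.fft.rfft(noise))     # spectrum measured during a "pause"
cleaned = spectral_subtract(speech + noise, noise_mag)
```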
- The third strategy is to use the different frequency responses of the air conduction (AC) microphones, the bone conduction transducer used as a microphone, and accelerometers that detect vibrations of the cranium; cross-correlations between the different sensors differentiate between correlated speech and uncorrelated or weakly correlated surrounding noise.
- The (BC) microphones also strongly detect the eyeglasses wearer's own voice, as the mouth cavity resonances generate strong vibrations of the cranium.
- Bone Conduction transducers may be used both as detectors of vibrations (microphone) and generator of vibrations (speaker).
- Bone conduction transducers made with piezoelectric materials have non-linear responses both in intensity and in frequency bandwidth.
- This invention comprises a new transducer design that makes it possible to generate vibrations for each frequency independently of the others, and comprises an equalizer that allows the frequency response to be tailored.
- The next important feature that improves speech understanding is knowledge of the speaker's "voice signature": his intonation characteristics, such as the relative intensities and spectra of vowels and consonants in phoneme pronunciation, and his speed of talking.
- The voice signature characteristics of "frequent callers" may be analyzed in advance using a spectrum analyzer application stored in the smartphone, enabling the generation of a list of characteristic pronunciations of phonemes.
- A one-to-one or a many-to-one look-up table of phonemes may be established, making it possible to adapt the incoming phonemes detected by the microphones to the hearing characteristics of the recipient and relay the adapted phonemes to the recipient's speaker and ear.
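A hedged sketch of such a phoneme look-up table is shown below; the entries are invented placeholders, since a real table would be derived from the analyzed voice signature of each frequent caller and from the recipient's audiogram.

```python
# detected phoneme -> variant better heard by this recipient (invented examples)
PHONEME_LUT: dict[str, str] = {
    "s": "sh",     # hypothetical: move sibilant energy toward better-heard bands
    "th": "f",     # hypothetical many-to-one merge
}

def adapt_phonemes(detected: list[str]) -> list[str]:
    """Replace each detected phoneme by its adapted version, if one is listed."""
    return [PHONEME_LUT.get(p, p) for p in detected]

print(adapt_phonemes(["th", "s", "a"]))   # ['f', 'sh', 'a']
```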
- FIG. 1 illustrates the "Hearing threshold" of a person with moderate hearing loss between 65 Hz and 16,744 Hz, and substitute frequencies for frequencies below 1 kHz and above 4 kHz.
- FIG. 2 illustrates smartphone-controlled eyeglasses carrying on their bows the components needed to reject audio noise, generate substitute frequencies relayed to the bone conduction (BC) speaker/microphone, and drive color LEDs that illuminate the eye simultaneously with the audio frequencies heard.
- FIG. 2 a illustrates the deterioration of hearing with advanced age for different modes of speech.
- FIG. 2 b illustrates the subtraction of noise measured during speech pauses from the following syllables and words.
- FIG. 3 a illustrates the subtraction of surround sound that does not reach the eyeglasses wearer directly from the front.
- FIG. 3 b illustrates the process of noise elimination before transmitting the speech signals to the (BC) transducer and the dual functionality of the (BC) transducer both as a microphone and a speaker.
- FIG. 3 c illustrates the correction of fast speech by enlarging the periods of speech while reducing the intervals between phonemes and syllables.
- FIG. 4 illustrates the substitution of unheard or badly heard audio low and high frequencies with frequencies in the 1 to 4 kHz range.
- FIG. 5 illustrates a mechanical vibration producing transducer with separate controls over each band of frequencies, suitable to transmit audio vibrations by bone conduction and serve also as a sensor of vibrations of the skull.
- FIG. 6 illustrates the one-to-one correspondence between audio frequencies and color wavelengths for exciting the auditory and visual cortices simultaneously.
- FIG. 7 illustrates the delivery of low-intensity focused ultrasound beams to the auditory and visual cortices simultaneously with the delivery of vibrations of the same frequency to the skull and color signals to the eyes.
- FIG. 8 illustrates a brain training system including smartphone-managed eyeglasses and a cap with a large visor on which is laid a foldable display monitor, an RF transmitter and a battery; the cap also incorporates 4 circular ultrasound emitters with their batteries.
- FIG. 8 a illustrates a display monitor showing the mouth and lips of a person pronouncing phonemes whose characteristic frequencies are delivered by the BC speaker to the skull of the person and the colors corresponding to said frequencies are simultaneously displayed and/or beamed to the eye of the eyeglasses wearer.
- FIG. 9 illustrates the stimulation of the brain with electromagnetic radiation generated between resonant coils.
- FIG. 1 illustrates a "Hearing threshold" 1 of a person with moderate hearing loss between 65 Hz and 16,744 Hz, divided into low frequency 2, mid frequency 3 and high frequency 4 hearing regions.
- Such an audiogram may be self-generated by using the smartphone to generate a series of audio frequencies at varying loudnesses while the person indicates the loudness level at which he ceases to hear the signals.
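A minimal sketch of this self-administered audiogram follows; the 5 dB step, the starting level, and the `heard` callback standing in for the person's response are all assumptions, and actual audio playback is left to the smartphone's audio API.

```python
import numpy as np

def tone(freq_hz: float, level_db: float, fs: int = 44_100, dur_s: float = 1.0) -> np.ndarray:
    """Synthesize a pure tone at a level given in dB relative to full scale."""
    amp = 10.0 ** (level_db / 20.0)
    t = np.arange(int(fs * dur_s)) / fs
    return amp * np.sin(2.0 * np.pi * freq_hz * t)

def threshold_for(freq_hz: float, heard) -> float:
    """Lower the level 5 dB at a time until heard(samples) reports False;
    return the last level that was still heard."""
    level = 0.0
    while level > -90.0 and heard(tone(freq_hz, level)):
        level -= 5.0
    return level + 5.0
```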
- This audiogram shows that the person has "normal hearing" between 1 kHz and 4 kHz, but has a moderate-to-steep loss of hearing below 1 kHz and above 4 kHz. In cases of precipitous hearing loss, even the understanding of normal speech in the middle frequencies may be seriously impaired and a hearing aid is needed.
- The new ITU-T G.722.2 standard of Adaptive Multi-Rate Wideband speech 5 requires a bandwidth of 50 Hz to 7 kHz, which is beyond the hearing abilities of most middle-aged people except audiophiles. It is interesting to note that the bandwidth 6 of the Plain Old Telephone Service (POTS) is only 300 Hz to 3400 Hz, and people "understand" telephone conversations well when there is no "noise" on the line; in our opinion this shows both that noise elimination is extremely important and that their brains were "trained" to fill in the unheard frequencies.
- FIG. 2 illustrates the pair of eyeglasses with electronic components, sensors and transducers that together improve the hearing of the hearing impaired person.
- The Hearing Eyeglasses components embedded in each of the eyeglasses temples include a Bluetooth RF transceiver with a microcontroller and a large flash memory 74 b, an infrared LED 21 with sensitivity at 850 nm, a colored-light illuminator 22 consisting of 3 LEDs (blue-green-red) controlled by the microcontroller, 2 unidirectional microphones 23 a and 23 b, a rechargeable LiPO4 battery 24, a Bone Conduction (BC) speaker/microphone 25, a quad comparator/gate 26, an accelerometer 27, a DSP 28, a CODEC 29 comprising a wideband equalizer and delay generators, and an (AC) speaker/microphone 30 hidden behind the ear that can be released and inserted into the ear canal.
- The microcontrollers situated in the temples may communicate with each other via coaxial wires embedded in the temples of the eyeglasses and the rims of the glasses.
- the tips of the temples are tightly interconnected by a ratcheted band 78 behind the head, thus pressing the bone conduction speaker/microphones against the skull.
- the microcontrollers control the traffic on the temples of the eyeglasses, and the DSPs process the algorithms that reduce noise, and determine the proper amplification of different frequency bands.
- The various instructions to the components of the system may be conveyed by coded "taps" on the accelerometers or the microphones; they enable, for example, changing the volume of the respective speakers. Taps may be interpreted as "0" or "1", for example by correlating 1 tap with "0" and 2 short sequential taps with "1".
- Different sequences may be used for selecting programs, devices and their features, such as increasing or decreasing the volume of a speaker or a frequency of the (BC) transducer.
- a prerecorded menu of the “Tap” features may be delivered to the ear for example after 3 sequential taps.
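A minimal sketch of this tap interpretation is given below; the 0.4 s grouping window is an assumed value, and the timestamps would in practice come from the accelerometer.

```python
def decode_taps(tap_times_s: list[float], window_s: float = 0.4) -> str:
    """Read one isolated tap as "0" and two quick sequential taps as "1"."""
    bits, i = [], 0
    while i < len(tap_times_s):
        if i + 1 < len(tap_times_s) and tap_times_s[i + 1] - tap_times_s[i] < window_s:
            bits.append("1")   # two taps close together
            i += 2
        else:
            bits.append("0")   # an isolated tap
            i += 1
    return "".join(bits)

print(decode_taps([0.0, 1.0, 1.2, 2.5]))  # one tap, a quick pair, one tap -> "010"
```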
- Unidirectional microphones 23 a and 23 b detect sounds coming mainly from the front.
- the time delays between the 4 microphones on the 2 temples determine the direction of the sound and serve to eliminate all sounds that do not abide by the timing constraints.
- the microcontrollers embedded in the two temples communicate by the coaxial cables embedded in the temples and the rims of the eyeglasses frame.
- High-capacity zinc-air model 675 button-cell batteries serve as back-up to the rechargeable LiPO4 batteries.
- the frame of the eyeglasses may also hold a miniature wideband video camera 21 able to image objects in obscure locations.
- The video camera may be used to take a sequence of pictures of the mouth of the person with whom the eyeglasses wearer is having a conversation, while recording the short conversation.
- The frame-by-frame display, played concurrently with the related prerecorded phoneme, serves to train the brain.
- the camera may have a wide band sensitivity in order to detect infrared light and thus image people talking in the dark or in obscure places.
- FIG. 2 a shows the deterioration of hearing with advanced age for different modes of speech. While listening to normally articulated speech, a person's understanding declines by some 10 percent by the age of 70 to 79; listening to fast talkers makes understanding twice as difficult, with speech understanding declining by 20% by the age of 70 to 79. It also illustrates the steep decline of speech understanding with age when the interlocutor is in a crowd, when there is echo in the room or when the interlocutor talks with interruptions.
- FIG. 2 b illustrates the process of noise elimination from speech.
- Speech is built out of phonemes, syllables and words interspersed by pauses in between.
- The average English word duration is around 250 msec, while "pauses" between syllables are around 50 to 100 msec. Consequently, noise intensity and spectra can be measured during such "pauses" 31 and subtracted from the following speech segments 31 a.
- the beginning of a pause may be detected by a steep drop in intensity and the end of the pause by a steep increase of intensity. These inflection points may be determined by following the sample amplitudes when sampling the speech, for example at 44 kHz.
- The beginning of a pause may be determined by finding the 10 samples whose average intensity is lower than that of the previous samples and approximately the same as that of the following 10 samples.
- The end of a pause is then the 10 samples whose average intensity is approximately the same as that of the previous samples, while the average intensity of the following samples starts growing.
- The "pause" time may then be defined as the middle 90% between the inflection points. The sound intensity measured during the pause period in the frequency domain may then be subtracted from the following speech segments, also in the frequency domain.
- The process of measuring noise at "pauses" is repeated only from time to time, and the last measured noise intensity and spectra are subtracted from ongoing speech signals for as long as the volume of sound doesn't change much.
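The inflection-point rule above can be sketched as follows; the 1 ms frame, the steepness threshold, and the synthetic test signal are assumptions for illustration.

```python
import numpy as np

def find_pauses(x: np.ndarray, frame: int = 44, drop: float = 0.3):
    """Frame-wise intensity: a pause starts at a steep drop in level and ends
    at a steep rise, following the inflection-point rule described above."""
    n = len(x) // frame
    rms = np.sqrt((x[: n * frame].reshape(n, frame) ** 2).mean(axis=1))
    pauses, start = [], None
    for i in range(1, n):
        if start is None and rms[i] < drop * rms[i - 1]:
            start = i * frame                    # steep intensity drop
        elif start is not None and rms[i] > rms[i - 1] / drop:
            pauses.append((start, i * frame))    # steep intensity rise
            start = None
    return pauses

fs = 44_000                          # the 44 kHz sampling rate cited above
t = np.arange(fs // 2) / fs
sig = np.sin(2 * np.pi * 1000 * t)
sig[10_000:14_000] *= 0.01           # an artificial ~90 ms pause
print(find_pauses(sig))              # -> approximately [(10032, 13992)]
```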
- FIG. 3 a illustrates the processing of speech arriving from the front, from the interlocutor or from the TV. It illustrates the principles for determining the direction of sound by measuring the time delays of sound between the 4 unidirectional microphones, F R , F L , B R , and B L situated on the temples of the eyeglasses.
- The time delays of the sound waves arriving at the 4 microphones, Δt1, Δt2, Δt3, Δt4 and Δt5, being known in advance, the way to select the sounds arriving from the front direction out of all the sounds reaching the microphones is as follows:
- A differential amplifier makes it possible to reject all the non-directional sounds and preserve the directional speech signal.
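The application implements this selection with known inter-microphone delays and a differential amplifier; an equivalent software sketch, using a standard delay-and-sum beamformer rather than the analog circuit, is given below with assumed sample delays.

```python
import numpy as np

def select_frontal(mics: list, delays_samples: list) -> np.ndarray:
    """Advance each microphone stream by its known front-arrival delay and average:
    frontal speech adds coherently, off-axis sound stays misaligned and is attenuated."""
    n = min(len(m) for m in mics)
    aligned = [np.roll(m[:n], -d) for m, d in zip(mics, delays_samples)]
    return np.mean(aligned, axis=0)

fs = 44_000
t = np.arange(fs // 10) / fs
front = np.sin(2 * np.pi * 500 * t)         # speech arriving from the front
delays = [0, 2, 9, 11]                      # assumed delays at F_R, F_L, B_R, B_L
mics = [np.roll(front, d) for d in delays]  # what each microphone records
out = select_frontal(mics, delays)          # realigned, coherent sum
```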
- This directional signal may then be processed by properly amplifying the frequency bands that are not well sensed by the hearing impaired.
- The processed signal may be delivered to the ear canal of the hearing-impaired person through an air conduction (AC) speaker 30 a and/or through a bone conduction (BC) transducer 25 pressed to the cranium, which transmits the vibrations to the cochlea.
- FIG. 3 b illustrates a simplified diagram of the process of noise elimination before transmitting the speech signals to the (BC) transducer illustrated in FIG. 5 and the dual functionality of the (BC) transducer both as a microphone and a speaker.
- the outputs of the three microphones F R , F L and B R are properly delayed in the CODEC 29 and filtered by bi-quad filters.
- the (BC) transducer 25 operates half of the time (for example for 1 millisecond) as a microphone and the second half as a speaker.
- The outputs of the (BC) "microphone", which are already in the frequency domain, are properly amplified (or attenuated) to equalize their average level to that of the (AC) microphones.
- As the amplified outputs of the (BC) "microphone" lag in time behind the speech components of the (AC) microphones, the (AC) signals are further delayed according to their distances from the (BC) microphone.
- The properly delayed streams of the 3 microphones and the (BC) microphone are added and passed through differential amplifiers that subtract the uncorrelated frequencies and transmit the correlated ones, through DACs 54, to the coils 53 of the (BC) transducer, thus causing the plates 51 glued to the coils to vibrate at the frequency of the current passing through each coil.
- FIG. 3 c illustrates a way to improve understanding of fast talk by expanding the time it takes to pronounce a phoneme or syllable at the expense of the silence intervals between phonemes, syllables or words. This is done by enlarging the periods of speech 33 to 33+Δ while reducing the intervals between phonemes and syllables by the same amount, from 34 to 34−Δ. This may be accomplished by expanding the duration of samples above a given (noise) level by a given amount and reducing the duration of the following samples by the same amount, by changing the sampling clock.
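A minimal sketch of this time re-allocation is shown below, using simple linear-interpolation resampling in place of "changing the sampling clock"; segment boundaries are assumed to be known from the pause detection described earlier.

```python
import numpy as np

def stretch_segment(seg: np.ndarray, factor: float) -> np.ndarray:
    """Naive linear-interpolation resampling of one segment by `factor`."""
    n_out = int(len(seg) * factor)
    x_old = np.linspace(0.0, 1.0, len(seg))
    x_new = np.linspace(0.0, 1.0, n_out)
    return np.interp(x_new, x_old, seg)

def slow_down_speech(speech: np.ndarray, pause: np.ndarray, delta: int) -> np.ndarray:
    """Lengthen the speech segment by `delta` samples and shorten the following
    pause by the same amount, keeping the total duration unchanged."""
    longer = stretch_segment(speech, (len(speech) + delta) / len(speech))
    shorter = stretch_segment(pause, (len(pause) - delta) / len(pause))
    return np.concatenate([longer, shorter])

fs = 44_000
speech = np.sin(2 * np.pi * 300 * np.arange(4_400) / fs)   # a 100 ms "phoneme"
pause = np.zeros(2_200)                                    # a 50 ms pause
out = slow_down_speech(speech, pause, delta=880)           # +20 ms speech, -20 ms pause
assert len(out) == len(speech) + len(pause)                # total duration preserved
```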
- FIG. 4 illustrates the sampling of the voice signal detected by a digital microphone, its decomposition in the frequency domain by filtering it with an IIR filter, substitution of unheard or badly heard low and high frequencies with frequencies in the 1 to 4 kHz range, adding the amplitudes in the frequency domain and applying the resultant amplitudes onto the (BC) transducer.
- FIG. 5 illustrates a mechanical vibration producing transducer with separate controls over each band of frequencies, suitable to transmit audio vibrations by bone conduction and serve also as a sensor of vibrations of the skull.
- the vibration producing transducer is composed of a multiplicity of solid elements 51 that each may vibrate at a different frequency 50 .
- the elements 51 are solid, non-conductive and non-magnetic and may be of plastic or light ceramic. Electrical miniature flat, spiral shaped coils 53 that carry alternating currents supplied by digital-to-analog-converters (DAC) 54 , are glued to the back of the elements 51 ; the adjacent coils are wound in opposite directions.
- The array of coils is in turn glued to a thin elastomer diaphragm 53 a in close proximity above an array of fixed magnets 52 having alternating poles between adjacent magnets.
- The stationary magnets are glued to a non-magnetic back structure 52 a. Adjacent magnets have their north and south poles flipped in opposite directions, so that the coils facing them are either attracted or repelled depending on the direction of the current in the coil.
- The transducer may generate planar vibrations by having its segmented diaphragm 53 a move back and forth, the different segments vibrating at different frequencies.
- the original electrical signal 57 is first passed through an equalizer 55 that decomposes it into its frequency bands; each of the frequency band signals may be amplified separately 56 by a different amount and fed to the coils 53 independently and phase locked.
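A software sketch of this per-band, independently amplified drive follows; FFT-bin grouping stands in for the equalizer's filter bank (an assumption), and each returned waveform would feed one coil's DAC.

```python
import numpy as np

def band_signals(x: np.ndarray, fs: int, edges_hz: list, gains: list) -> list:
    """Split x into frequency bands (FFT-bin grouping), apply one gain per band,
    and return one time-domain waveform per band, i.e. per coil."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    waves = []
    for (lo, hi), g in zip(zip(edges_hz[:-1], edges_hz[1:]), gains):
        band = np.where((freqs >= lo) & (freqs < hi), spec, 0.0)
        waves.append(g * np.fft.irfft(band, n=len(x)))
    return waves

fs = 44_000
t = np.arange(fs // 10) / fs
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2_000 * t)
coil_drives = band_signals(x, fs, [0, 1_000, 4_000, 8_000], [2.0, 1.0, 3.0])
```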
- Such a transducer may generate single frequency vibrations for training the cochlea.
- The transducer does not have to be flat; the vibrating elements may be slightly curved, and the totality of the elements may form a curvature that better matches the local curvature of the cranium, thus transmitting the vibrations with less pressure.
- The elements and magnets of the transducer may be miniaturized; for example a 16-frequency array with 3×3 mm elements 58 may be as small as 1.5×1.5 cm, and a 64-element array may be approximately 1″ square.
- the transducer may also be used as a sensitive vibration microphone 60 where the vibrations transmitted to a plate 51 will cause the coil 53 on top of the magnet to vibrate, generating an induced current that can be amplified and digitized 60 .
- FIG. 6 illustrates the establishment of a one-to-one correspondence between audio frequencies 63 transmitted to the auditory channel and color wavelengths 62 seen by the visual channel.
- the one-to-one correspondence is also established between the volume of the audio frequencies and the intensity or brilliance of the colors.
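The application does not state the mapping function itself; the sketch below assumes a logarithmic map of the 65 Hz-16,744 Hz band of FIG. 1 onto the 400-700 nm visible band, which happens to land near the 270 Hz / 470 nm pairing used later in the text, plus a linear volume-to-brightness rule.

```python
import math

def freq_to_wavelength_nm(f_hz: float, f_lo: float = 65.0, f_hi: float = 16_744.0) -> float:
    """Map the audible band [f_lo, f_hi] logarithmically onto 400-700 nm."""
    u = math.log(f_hz / f_lo) / math.log(f_hi / f_lo)
    return 400.0 + 300.0 * u          # low pitch -> blue end, high pitch -> red end

def volume_to_brightness(level_db: float, max_db: float = 90.0) -> float:
    """Map sound volume to LED brightness in [0, 1] (volume <-> brilliance)."""
    return min(max(level_db / max_db, 0.0), 1.0)

print(round(freq_to_wavelength_nm(270.0)))   # 477, close to the 470 nm / 270 Hz pairing
print(round(freq_to_wavelength_nm(8000.0)))  # 660, a red, matching the 8 kHz red example
```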
- The ability of the brain to substitute harmonic frequencies in lieu of a missing fundamental frequency when trying to decipher a word has been observed. Consequently, the low frequencies from 65 Hz to 932 Hz can be replaced by their harmonic substitutes from 1046 Hz to 1835 Hz, as illustrated in table 64.
- the “cycle” of audible tones is based on the harmonic relations modulo the octave. We can just associate each tone with its “equivalent” in other octaves.
- For example, a 5200 Hz tone may be transmitted by a (BC) transducer to the cochlea as two sets of vibrations of 2600 Hz each, with a 1 millisecond interval between the sets; the cochlea will transduce them into 2 signals of 2600 Hz each, but the slow-to-react synapses will sum them and transmit a 5200 Hz signal to the auditory cortex.
- The 1080 Hz vibration may also correspond to the consonant "p"; therefore, if we deliver visually a bluish signal of 470 nm, and the brain was previously trained to correlate the 470 nm light with the 270 Hz vibration, the brain will know that the 1080 Hz vibration is a harmonic of the 270 Hz fundamental frequency.
- The training of the brain to recognize substitute frequencies that replace unheard or badly heard frequencies, strengthened by establishing a one-to-one correspondence with colors and aided by lip reading, may be carried out repetitively a large number of times and the "learning" rate checked periodically. It is also possible to carry out the exercises under hypnosis and get the help of the "subconscious" mind to establish the one-to-one correspondences.
- The brain, however, performs an immense number of tasks, consciously and unconsciously, and some of them involve colors in various contexts.
- The task of linking colors to sounds therefore has to be defined in a specific context and not as a general feature to be performed at all times; we wouldn't want an 8 kHz whistle to be heard every time a red color is perceived. The task of correlating colors with sound frequencies therefore has to be limited to certain tasks, only in the context of "language" for example, or to tasks preceded by one "code" and terminated by a different code, like tasks one is instructed to perform during hypnosis, not before or after.
- The brain can be trained to respond to several color codes, using BLUE for "0" and RED for "1" for example, and a multitude of color codes of several bits could be devised to direct the brain to perform certain tasks. It is also possible to train the brain to generate the sound frequency corresponding to a given wavelength only in the presence of a third signal, for example a tactile signal: "rubbing your right ear" may start, and "rubbing your left ear" may end, the session of correlating colors and sound frequencies. Another impetus to start correlating colors with sound frequencies may be irradiating both the visual and auditory cortices with low-intensity ultrasound beams, energizing them to start cooperating.
- the visual color signal may be generated by 3 low power LEDs (Blue 66 , Green 67 , Red 68 ) in a proportion determined by the microcontroller on the temple of the eyeglasses.
- The colored light source 22 is positioned at the front end of the eyeglasses temple; the light is reflected by 2 mirrors toward the eyeglasses wearer's eye.
- the intensity of the colored light may also reflect the volume of the sound it is correlated with.
- FIG. 7 illustrates the delivery of low intensity focused ultrasound beams of specific frequency to the brain using concentric circular rings of ultrasound exciters which may be piezoelectric crystals or capacitive MEMS.
- The concentric rings of exciters 74 form a partial hemisphere filled with a gel 74 c having good transmissivity into the cranium, against which they are pressed.
- the respective phases of the exciters are tuned so that all the beams reinforce each other at the common focal point 74 d.
- Two phased arrays of circular rings may be tuned to focus on the same focal point; in such a case the two ultrasound beams will interfere at their common focal point and form ultrasound having the sum and difference of the frequencies of the two beams.
- This method may be used to excite the A1 area of the auditory cortex at the difference frequency 71, for example at 1 kHz if the two beams are tuned to 100 kHz and 101 kHz respectively; another example is to set the two frequencies at 300 kHz and 308 kHz in order to obtain a beam of 8 kHz at the focal point.
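The difference-frequency arithmetic, and the per-ring firing advance needed to make all rings' waves arrive at the focal point together, can be sketched as follows; the 1540 m/s speed of sound is a soft-tissue value assumed for illustration (the skull differs substantially).

```python
import math

def difference_frequency(f1_hz: float, f2_hz: float) -> float:
    """Beat frequency formed where the two focused beams overlap."""
    return abs(f2_hz - f1_hz)

def ring_firing_advance_s(ring_radius_m: float, focal_depth_m: float,
                          c_m_s: float = 1540.0) -> float:
    """How much earlier a ring of radius r must fire than the central element
    so that its wave reaches the focus at depth d at the same instant."""
    extra_path = math.hypot(ring_radius_m, focal_depth_m) - focal_depth_m
    return extra_path / c_m_s

print(difference_frequency(100e3, 101e3))  # 1000.0 -> the 1 kHz example above
print(difference_frequency(300e3, 308e3))  # 8000.0 -> the 8 kHz example above
print(ring_firing_advance_s(0.02, 0.05))   # ~2.5e-06 s for a 2 cm ring, 5 cm focus
```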
- An ultrasound beam with the same difference frequency may be delivered to the visual 72 and the auditory cortices simultaneously.
- Both cortices may be excited at the same vibration frequencies as those delivered by the bone conduction transducers to the cranium near the cochleas, and at the frequencies of the related color signals delivered to the eyes.
- The combined intensity of the ultrasound beams at the focal point may be extremely low, of the order of 1 μW/mm³, and targeted to stimulate only the limited area of the cortices that processes said frequencies.
- the ultrasound beam will stimulate the electrical activity in neurons, by activating both the sodium and calcium channels and may reinforce synaptic transmissions of specific frequencies.
- the circular phased array transducers may be held in place, pressed against the shaved skull by one or more ratcheted elastic bands 78 .
- FIG. 8 illustrates the smartphone-managed eyeglasses, where a foldable display 76 on the visor of a baseball cap displays the images transmitted by the smartphone 80 of the hearing-impaired person.
- The cap also shows the low-intensity ultrasound emitters 73 a, 73 b, 74 a and 74 b explained above in connection with FIG. 7, and LiPO4 batteries 77 b and 77 c that supply power to the display monitor and the ultrasound stimulators.
- The eyeglasses may be of the multifocal type, in this case with the upper lens having the shorter focus for better viewing of the display monitor 76.
- The mouth and lips 82 of a person pronouncing the word "mother" 85 are shown, while the face above the mouth is obscured to help the viewer concentrate on the movements of the mouth and lips.
- The syllables [m,a] 85 a and [th,ae,r] 85 b are displayed sequentially, in time synchronization with the video, with each of the phonemes 86 a, 86 b, 86 c, 86 d and 86 e colored 83 according to the one-to-one correspondence scheme with the sound frequencies.
- the color code is also transmitted to the eye by the LED illuminator 22 to reinforce the link with the other stimulations.
- The corresponding vibration frequencies 270 Hz, 700 Hz, 6000 Hz, 500 Hz and 800 Hz are delivered to the cranium by the (BC) transducer(s) explained above in connection with FIG. 5.
- the (BC) transducer mounted on the inside of the “hearing eyeglasses” is pressed against the bone by stretching the band 78 that connects the two temples.
- The proper locations in the visual and auditory cortices are stimulated by ultrasound waves, also of the same frequencies, in order to enhance the pathways between the cortices.
- FIG. 9 illustrates the stimulation of the brain with electromagnetic radiation generated between resonant coils.
- Inductively coupled resonant coils can transmit magnetic energy with little losses.
- The figure illustrates two resonant magnetic energy delivery systems perpendicular to each other.
- the resonant coils have magnetic cores around which the current carrying wires are wound.
- the power sources 90 a, 90 b are coupled to the resonant sources 91 a, 91 b which are coupled with the distant resonant load coils 92 a and 92 b. In the illustrated configuration there is no substantial load at the load coils. The only loads are in the near-field due to the impedance of the brain.
- the coupling factor between the resonant sources 91 a, 91 b and the resonant loads 92 a, 92 b may be maximized electronically by adjusting the phase between the resonant coils.
- The purpose of the illustrated geometry is to keep the magnetic lines from diverging between the resonant coils. In this configuration the magnetic energy will circulate back and forth between the coils, with some losses in the intermediate matter, namely the brain, depending on the phase between the two coils. In fact, changing the phase determines the energy deposited in the brain along the magnetic lines.
- The two resonant energy transfer systems are perpendicular to each other and their magnetic lines cross in a limited region 94, where the deposited energy is cumulative.
- resonant magnetic energy transfer systems may be placed around the head at the proper angular positions so that their intertwined magnetic lines maximize the energy delivered at this spot.
- the absolute magnetic energy delivered may be controlled by the phases between the resonant coils.
- This method of stimulating selected spots in the brain can be used to stimulate the visual and the auditory cortices simultaneously with the delivery of vibrations to the auditory cortex and of the corresponding "color" stimulations to the visual cortex.
Description
- This application claims the benefit of U.S. patent application Ser. No. 13/495,648, titled "Audio Communication networks", filed on 13 Jun. 2012, and U.S. patent application Ser. No. 13/682,352, titled "Social network with enhanced audio communications for the Hearing impaired", filed on 20 Nov. 2012, both incorporated herein in their entirety by reference.
- Current hearing aid technology deals with correcting the detrimental effects caused by the damaged inner and middle ear, and the cochlea in particular. The main tool used in the various inventions is non-linear amplification of the impaired sound frequencies. However, it is by now clear that the benefits of multi-channel non-linear amplification are limited in attaining the goal of speech "understanding".
- Lately, in many healthcare fields, it has been shown that, by taking advantage of the plasticity of the brain, many physical impairments may be alleviated, if not resolved.
- The goal of this invention is to enable hearing-impaired people to "hear" unheard or badly heard sound frequencies, by training the brain to connect the auditory channel with the visual channel and to use stimulation of the eye to help the brain decipher language when the auditory channel by itself is at a loss due to missing frequencies.
- Various effects illustrate the brain's auditory processing aimed at optimizing the understanding of speech. For example, the phenomenon known as the "missing fundamental" consists in the brain determining the fundamental sound frequency after hearing harmonics of that fundamental, and substituting the "missing fundamental" when trying to decode a word.
- It has also been observed that the brain cannot distinguish between subsequent sounds "heard" within 3-4 msec and simply interprets the sum of the two as the signal heard.
- The joint processing of information between the auditory cortex and the visual cortex is illustrated by the "McGurk illusion", where a phoneme heard concurrently with a video of the mouth enunciating a different phoneme is interpreted by the brain as the phoneme seen in the video. This illusion shows that there are pathways between the visual cortex and the auditory cortex through which the two try to arrive at a common conclusion; in this case the phoneme accompanied by a picture of the mouth articulating it trumps the phoneme that reached only the auditory cortex.
- McGurk and MacDonald [Nature 264, 746-748] also showed that when the auditory and visual signals may each point to several possibilities, the brain will select the option commonly favored by both. For example, the phonemes "ba" and "da" can be confused by the auditory cortex, while the phonemes "ga" and "da" can be confused by the visual cortex. Thus when the phoneme "ba" is articulated and at the same time a video of the lips saying "ga" is shown, the brain will conclude that "da" was said, neither "ga" nor "ba".
- Meredith et al. report in the Proceedings of the National Academy of Sciences, PNAS 2011 108 (21) 8856-8861, "Crossmodal reorganization in the early deaf switches sensory, but not behavioral, roles of auditory cortex", that:
- "Recordings in the auditory field of the anterior ectosylvian sulcus of early-deafened adult cats revealed robust responses to visual stimulation, as well as receptive fields that collectively represented the contralateral visual field." They conclude that "these results demonstrate that crossmodal plasticity can substitute one sensory modality for another while maintaining the functional repertoire of the reorganized region".
- Laura-Ann Petitto et al., in Proceedings of the National Academy of Sciences, PNAS 2000 97 (25) 13961-13966, "Speech-like cerebral activity in profoundly deaf people processing signed languages: Implications for the neural basis of human language", note that: "For more than a century we have understood that our brain's left hemisphere is the primary site for processing language, yet why this is so has remained more elusive. Using positron emission tomography, we report cerebral blood flow activity in profoundly deaf signers processing specific aspects of sign language in key brain sites widely assumed to be unimodal speech or sound processing areas: the left inferior frontal cortex when signers produced meaningful signs, and the planum temporale bilaterally when they viewed signs or meaningless parts of signs (sign-phonetic and syllabic units). Contrary to prevailing wisdom, the planum temporale may not be exclusively dedicated to processing speech sounds, but may be specialized for processing more abstract properties essential to language that can engage multiple modalities. We hypothesize that the neural tissue involved in language processing may not be prespecified exclusively by sensory modality (such as sound) but may entail polymodal neural tissue that has evolved unique sensitivity to aspects of the patterning of natural language. Such neural specialization for aspects of language patterning appears to be neurally unmodifiable insofar as languages with radically different sensory modalities such as speech and sign are processed at similar brain sites, while, at the same time, the neural pathways for expressing and perceiving natural language appear to be neurally highly modifiable."
- Renaud Boistel et al., in Proceedings of the National Academy of Sciences, 10.1073/pnas.1302218110, Sep. 3, 2013, note that: "Gardiner's Seychelles frog, one of the smallest terrestrial tetrapods, resolves an apparent paradox, as these seemingly deaf frogs communicate effectively without a middle ear. Acoustic playback experiments conducted using conspecific calls in the natural habitat of the frogs provoked vocalizations of several males, suggesting that these frogs are indeed capable of hearing. This species thus uses extra-tympanic pathways for sound propagation to the inner ear. Our models show how bone conduction is enhanced by the resonating role of the mouth and may help these frogs hear".
- There is now extensive anatomical and physiological evidence, from a range of species, that multisensory convergence occurs at the earliest levels of auditory cortical processing. Phased array ultrasound beams may be focused on a relatively small spot, thus delivering concentrated energy onto the desired locality in the brain. There is extensive evidence that irradiating damaged body organs, such as bone fractures or missing teeth, with low intensity ultrasound causes re-growth of the damaged parts. Y. Tufail et al., in "Transcranial Pulsed Ultrasound Stimulates Intact Brain Circuits", report that "we found that ultrasound triggers TTX-sensitive neuronal activity in the absence of a rise in brain temperature (<0.01° C.)". Low intensity pulsed ultrasound is known to help heal lacerated muscles and various soft tissues. Although the exact mechanism of healing is not known, it is probably linked to the amount of energy deposited in the cells, which energizes certain processes. We therefore conjecture that sound energy of the right frequency and intensity deposited in the brain will enhance neuron activity at that spot. Specifically, energizing neurons in the auditory and visual cortices simultaneously may promote and strengthen existing coordinating processes.
- There are testimonies of people who say that they "hear voices". These testimonies indicate that the brain is able to generate internal sounds similar to the sounds originating through the auditory channel.
- Our goal is to cause quasi-deaf people to "hear voices" generated mostly in the brain, by stimulating the brain to "put together" partial information received through the auditory channel with correlated information delivered through the visual channel, and "GUESS" what was said.
- The present invention is a device that enables training the brain of a hearing impaired person to correlate unheard or badly heard sound frequencies with "substitute frequencies", visual color sequences and pictures of the mouth enunciating said phonemes concurrently, while simultaneously triggering the corresponding areas of the auditory and visual cortices with focused ultrasound beams and magnetic stimulations.
- Once the brain is trained and the one-to-one correspondences between sound frequencies, "substitute frequencies" and color wavelengths are well established, we conjecture that a relatively simple pair of eyeglasses, incorporating a processor for translating the initial sound frequencies to "substitute frequencies", bone conduction transducers to transmit the substitute vibrations indirectly to the cochlea, and a colored light source illuminating the eye from the side, will greatly improve the hearing capabilities of hearing impaired persons. We also conjecture that stimulating the auditory and visual cortices simultaneously with sounds of the same frequency will strengthen the one-to-one connection between sounds and colors. The auditory and visual cortices may also be stimulated simultaneously with magnetic energy delivered by resonant coils that can traverse the brain with little loss of energy. This small amount of energy may be increased at will by detuning the resonance between the coils. A multiplicity of resonant coils placed at strategic positions around the head may cumulatively deposit energy at selected regions of the brain, for example on the auditory cortex. We conjecture that depositing extra energy at the right moment will "cause" the brain to work harder and "decipher" speech from the substitute frequencies.
- The unheard or badly heard frequencies may be determined by taking audiograms of the ears, and the "substitute frequencies" established during the training period. The substitute sound frequencies may be generated by a bone conduction (BC) speaker of the specific design shown in this application. The (BC) speaker transfers the vibrations to the skull, bypassing the outer ear and the middle ear, which may be damaged, to reach the cochlea in the inner ear. As the bandwidth of the (BC) speaker is narrower than that of an air conduction (AC) speaker, in cases where the causes of the hearing impairment are not clear it is advantageous to use an (AC) speaker fitted into the ear canal in addition to the (BC) transducer pressed onto the skull. Simultaneously with exciting the cochlea with a given sound frequency, the hearing impaired person's eye is visually excited by colored light of the corresponding wavelength, such that a one-to-one correspondence is gradually established between the sound frequencies and the light wavelengths.
- The light excitation may be generated either by a miniature tricolored light source where the power of each LED is controlled, or by a colored display in front of the person's eyes. Simultaneously with the cochlea and eye excitation, the corresponding auditory and visual cortex areas that process the transduced electrical signals are also excited by twin external transcranial phased-array ultrasound beams that converge on the desired area. The ultrasound emitters are held in place against the cranium by one or more ratcheted bands around the head.
- The frequencies of each pair of phased-array ultrasound beams converging on the corresponding excited areas of the auditory and visual cortices are slightly different, so that at the focal region their interference generates a difference signal of the same frequency as the original sound signal. We conjecture that the brain will establish a triple correspondence between the signals coming from the eye, the cochlea and the transcranial signals, and will interpret the sum as the desired sound, even when the signal from the cochlea is weak or, for some frequencies, nonexistent.
- An additional strategy for reinforcing the brain's interpretation of the "correct" frequency in the context of a word is to train it to correlate the unheard frequency with a substitute frequency that is "better" heard by the cochlea. The substitutes for badly heard or unheard frequencies may be higher-frequency harmonics, or sums of frequencies generated within a time window of less than 3 msec that the brain will interpret as one higher frequency. Thus a translation "look-up-table" may be generated, as sketched below, that translates the original speech frequencies detected by the microphones to their "harmonics" or "time-squeezed" frequencies before delivering them to the bone conduction speaker or an audio speaker. The frequency training of the brain may be enhanced by phoneme training, which consists in pronouncing phonemes while simultaneously displaying the sequence of colors related to the sound frequencies. The training may be further enhanced by displaying the lips of a person pronouncing the phoneme.
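- A minimal sketch of such a look-up step follows, assuming the band edges of the FIG. 1 example (well heard between 1 and 4 kHz). The application's table 64 lists musically exact equivalents (e.g. 65 Hz to 1046 Hz); the plain octave doubling used here for brevity lands close to those values.

```python
# Illustrative substitute-frequency look-up: low frequencies are raised by
# octaves into the well-heard band; high frequencies are split into n bursts
# of freq/n spaced ~1 ms apart (conjectured to be summed by the slow synapses).
# Band edges follow the FIG. 1 example; they would be set per audiogram.

def substitute(freq_hz):
    """Return a list of (frequency_hz, onset_ms) events replacing freq_hz."""
    if freq_hz < 1000.0:
        f = freq_hz
        while f < 1000.0:            # shift up by octaves into the 1-4 kHz band
            f *= 2.0
        return [(f, 0.0)]
    if freq_hz > 4000.0:
        n = 2
        while freq_hz / n > 4000.0:  # smallest divisor landing below 4 kHz
            n += 1
        return [(freq_hz / n, i * 1.0) for i in range(n)]
    return [(freq_hz, 0.0)]          # mid frequencies pass through unchanged

print(substitute(5200.0))  # [(2600.0, 0.0), (2600.0, 1.0)] - the "s" example
print(substitute(270.0))   # [(1080.0, 0.0)] - the "m" of "mother"
```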
- The components of the training system may be incorporated into eyeglasses and a cap with a long visor worn by the trainee, with the system controlled and managed by a smartphone. The extremely thin, flexible display monitor, connected by Bluetooth to the smartphone, lies at the front of the visor and may easily be flipped into a position in front of the eyeglasses, enabling the eyeglasses wearer to view colored images transmitted by the cellphone, for example in the "FaceTime" mode of the iPhone. The eyeglasses temples incorporate microphones at their front and back ends, enabling assessment of the direction of incoming sound; rejecting the surrounding noise this way greatly improves speech understanding. A bone conducting transducer of our design, able to tailor sound sequences out of single frequencies, is incorporated at the back of each temple behind the ear, next to the mastoid bone. The bone conducting vibration transducer is able to generate single frequency vibrations and thus enables the hearing impaired person to measure his own "bone conduction audiogram".
- Various colored signals may be generated by a colored light illuminator consisting of miniature (blue, green and red) LEDs controlled by a microprocessor that sets their relative intensities, which determine the combined color after mixing, and their absolute intensities, which determine the intensity of the resulting colored light. The illuminators may be incorporated in the front ends of the eyeglasses temples, in which case suitable mirrors direct the output light onto the eyes from the side; if both ears have the same hearing losses, the illuminator may be placed in the middle of the glasses frame and the colored light is naturally observed by both eyes. A display viewable through the eyeglasses also enables correlating sounds with related images, and thus improves hearing. Lip reading of a talking person viewed on the display of the cellphone in a "FaceTime" mode may be transmitted wirelessly in real time to the display in front of the eyeglasses.
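- A minimal sketch of driving the three LEDs for a target wavelength follows; it uses a standard piecewise approximation of the visible spectrum (the break points are conventional values, not taken from this application), with the overall brightness tracking the sound volume.

```python
# Illustrative wavelength -> (R, G, B) mixing for the tricolor illuminator;
# a common scale factor ties brightness to loudness.

def wavelength_to_rgb(wl_nm):
    """Approximate a visible wavelength (380-780 nm) as (r, g, b) in 0..1."""
    if 380 <= wl_nm < 440:
        return (440 - wl_nm) / 60.0, 0.0, 1.0
    if 440 <= wl_nm < 490:
        return 0.0, (wl_nm - 440) / 50.0, 1.0
    if 490 <= wl_nm < 510:
        return 0.0, 1.0, (510 - wl_nm) / 20.0
    if 510 <= wl_nm < 580:
        return (wl_nm - 510) / 70.0, 1.0, 0.0
    if 580 <= wl_nm < 645:
        return 1.0, (645 - wl_nm) / 65.0, 0.0
    if 645 <= wl_nm <= 780:
        return 1.0, 0.0, 0.0
    return 0.0, 0.0, 0.0

def led_duties(wl_nm, volume):
    """8-bit duty cycles for the red, green and blue LEDs; volume in 0..1."""
    return tuple(int(255 * volume * c) for c in wavelength_to_rgb(wl_nm))

print(led_duties(470.0, 0.5))  # the bluish 470 nm cue at half loudness
```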
- Relatively large displays may be very thin and suspended from the front rim of the visor of a cap worn by the eyeglasses wearer. The display, the communication hardware and the antenna may be embedded in the rims of the cap as is the battery supplying the power.
- Many aspects of "hearing" improvement depend on eliminating surrounding noise and on the way interlocutors talk. We maintain that, in addition to finding remedies for the bodily hearing impairments, it is just as important to reduce all components of "noise" and to make the necessary adaptations to the way others talk.
- In our system we try to substantially eliminate noise using 3 strategies. One strategy is to let the hearing impaired person limit his "listening cone" to cover only the space occupied by his interlocutor. This goal is implemented using 4 directional microphones at the front and back of the temples of the eyeglasses and setting stringent limits on the time delays of the correlated sound reaching them.
- The second strategy we use for reducing noise is to follow speech components in time, with a time resolution of 1-2 milliseconds, and try to locate the natural "pauses" between phonemes, syllables and words. As noise is, with a high degree of probability, present both during "pauses" and during speech segments, subtracting the noise frequency amplitudes from the following speech frequencies, using a simple algorithm, improves the SNR during speech. This strategy is applicable to the sound detected by the microphones situated on the eyeglasses temples, as well as to the microphone(s) of the smartphone and to the bone conduction transducer operated as a microphone.
- The third strategy is to use the different frequency responses of the air conduction (AC) microphones, of the bone conduction transducer used as a microphone, and of accelerometers that detect vibrations of the cranium; cross correlations between the different sensors differentiate between correlated speech and uncorrelated or weakly correlated surrounding noise. The (BC) microphones also strongly detect the eyeglasses wearer's own voice, as the mouth cavity resonances generate strong vibrations of the cranium.
- In this context it is important to note that bone conduction transducers may be used both as detectors of vibrations (microphones) and as generators of vibrations (speakers).
- Bone conduction transducers made with piezoelectric materials have non-linear responses both in intensity and in frequency bandwidth. When transmitting vibrations to the cranium, it is almost impossible to tailor the frequency response that reaches the cochlea so as to compensate for the loss of frequency sensitivities of a damaged cochlea. Consequently this invention comprises a new transducer design that enables generating vibrations for each frequency independently of the others, and comprises an equalizer that allows tailoring the frequency response. Thus it is possible to take into account the frequency responses of the cochlea and the cranial bone in order to generate a flat, or any desired, frequency response that the neurons transmit to the auditory cortex.
- The next important feature that improves speech understanding is knowledge of the speaker's "voice signature": his intonation characteristics, such as the relative intensities and spectra of vowels and consonants in phoneme pronunciation, and his speed of talking. Such "voice signature" characteristics of "frequent callers" may be analyzed in advance using a spectrum analyzer application stored in the smartphone, enabling generation of a list of characteristic pronunciations of phonemes. As the hearer's spectral characteristics and time response are usually different, a one-to-one or many-to-one look-up table of phonemes may be established, enabling adaptation of the incoming phonemes detected by the microphones to the hearing characteristics of the recipient, and relaying of the adapted phonemes to the recipient's speaker and ear.
-
FIG. 1 illustrates a "Hearing threshold" of a person with moderate hearing loss between 65 Hz and 16,744 Hz, and substitute frequencies for frequencies below 1 kHz and frequencies above 4 kHz. -
FIG. 2 illustrates smartphone controlled eyeglasses carrying on their temples the components needed to reject audio noise, generate substitute frequencies relayed to the bone conduction (BC) speaker/microphone, and drive the color LEDs illuminating the eye simultaneously with the audio frequencies heard. -
FIG. 2 a illustrates the deterioration of hearing with advanced age for different modes of speech. -
FIG. 2 b illustrates the subtraction of noise measured during speech pauses from the following syllables and words. -
FIG. 3 a illustrates the subtraction of surround sound that does not reach the eyeglasses wearer directly from the front. -
FIG. 3 b illustrates the process of noise elimination before transmitting the speech signals to the (BC) transducer and the dual functionality of the (BC) transducer both as a microphone and a speaker. -
FIG. 3 c illustrates the correction of fast speech by enlarging the periods of speech while reducing the intervals between phonemes and syllables. -
FIG. 4 illustrates the substitution of unheard or badly heard audio low and high frequencies with frequencies in the 1 to 4 kHz range. -
FIG. 5 illustrates a mechanical vibration producing transducer with separate controls over each band of frequencies, suitable to transmit audio vibrations by bone conduction and serve also as a sensor of vibrations of the skull. -
FIG. 6 illustrates the one-to-one correspondence between audio frequencies and color wavelengths for exciting the auditory and visual cortices simultaneously. -
FIG. 7 illustrates the delivery of low intensity focused ultrasound beams to the auditory and visual cortices simultaneously with the delivery of vibrations of the same frequency to the skull and of color signals to the eyes. -
FIG. 8 illustrates a brain training system including smartphone managed eyeglasses and a cap with a large visor on which is laid a foldable display monitor, an RF transmitter and a battery; the cap also incorporates 4 circular ultrasound emitters with their batteries. -
FIG. 8 a illustrates a display monitor showing the mouth and lips of a person pronouncing phonemes whose characteristic frequencies are delivered by the BC speaker to the skull of the person, while the colors corresponding to said frequencies are simultaneously displayed and/or beamed to the eye of the eyeglasses wearer. -
FIG. 9 illustrates the stimulation of the brain with electromagnetic radiation generated between resonant coils. - The following detailed description provides a thorough understanding of the invention while omitting specific details that are known to those skilled in the art.
- Hearing impaired persons exhibit an “audiogram” with diminished response at low and high frequencies.
FIG. 1 illustrates a "Hearing threshold" 1 of a person with moderate hearing loss between 65 Hz and 16,744 Hz, divided into low frequency 2, mid frequency 3 and high frequency 4 hearing regions. Such an audiogram may be self generated by using the smartphone to emit a series of audio frequencies at varying loudnesses while the person indicates the loudness level at which he ceases to hear the signals. This audiogram shows that the person has "normal hearing" between 1 kHz and 4 kHz, but a moderate-to-steep loss of hearing below 1 kHz and above 4 kHz. In cases of precipitous hearing loss, even the understanding of normal speech in the middle frequencies may be seriously impaired and a hearing aid is needed. In severe hearing loss cases, the hearing impaired may hear only an even narrower band of middle frequencies. The new ITU-T G.722.2 standard of Adaptive Multi-Rate Wideband speech 5 requires a bandwidth of 50 Hz to 7 kHz, which is beyond the hearing abilities of most middle-aged people except audiophiles. It is interesting to note that the bandwidth 6 of the Plain Old Telephone Service (POTS) is only 300 Hz to 3400 Hz, and people "understand" telephone conversations well when there is no "noise" on the line; in our opinion this shows both that "noise" elimination is extremely important and that their brains were "trained" to fill in the unheard frequencies. -
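- A minimal sketch of such a self-administered audiogram follows; the playback callback is left abstract because audio output is platform specific, and the test frequencies and 10 dB steps are illustrative choices.

```python
# Illustrative self-test: each frequency is played at decreasing level until
# the listener reports not hearing it; the last level offered is recorded.

import numpy as np

def tone(freq_hz, dur_s=1.0, rate=44100, level_db=0.0):
    """A sine burst at freq_hz, level in dB relative to full scale."""
    t = np.arange(int(dur_s * rate)) / rate
    return 10 ** (level_db / 20.0) * np.sin(2 * np.pi * freq_hz * t)

def audiogram(play, freqs=(125, 250, 500, 1000, 2000, 4000, 8000)):
    thresholds = {}
    for f in freqs:
        level = 0.0
        while level > -90.0:
            play(tone(f, level_db=level))
            if input(f"{f} Hz at {level:.0f} dBFS - heard? [y/n] ") == "n":
                break
            level -= 10.0
        thresholds[f] = level        # last level offered at this frequency
    return thresholds
```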
FIG. 2 illustrates the pair of eyeglasses with electronic components, sensors and transducers that together improve the hearing of the hearing impaired person. In a preferred embodiment, the Hearing Eyeglasses components embedded in each of the eyeglasses temples include a Bluetooth RF transceiver with a microcontroller and a large flash memory 74 b, an infrared LED 21 operating at 850 nm, a colored light illuminator 22 consisting of 3 LEDs (blue-green-red) controlled by the microcontroller, 2 unidirectional microphones, a (BC) speaker/microphone 25, a quad comparator/gate 26, an accelerometer 27, a DSP 28, a CODEC 29 comprising a wide band equalizer and delay generators, and an (AC) speaker/microphone 30 hidden behind the ear that can be released and inserted into the ear canal. The microcontrollers situated in the temples may communicate with each other over coaxial wires embedded in the temples of the eyeglasses and the rims of the glasses. The tips of the temples are tightly interconnected by a ratcheted band 78 behind the head, thus pressing the bone conduction speaker/microphones against the skull. The microcontrollers control the traffic on the temples of the eyeglasses, while the DSPs process the algorithms that reduce noise and determine the proper amplification of the different frequency bands. - The various instructions to the components of the system may be conveyed by coded "taps" on the accelerometers or the microphones. They enable, for example, changing the volume of the respective speakers. Taps may be interpreted as "0" or "1", for example by correlating 1 tap with "0" and 2 short sequential taps with "1".
- Different sequences may be used for selecting programs, devices and their features, such as increasing or decreasing the volume of a speaker or a frequency of the (BC) transducer. A prerecorded menu of the "Tap" features may be delivered to the ear, for example after 3 sequential taps.
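- A minimal sketch of decoding such taps follows, assuming tap timestamps have already been extracted from the accelerometer signal; the 300 msec grouping window is an illustrative choice, not a value from this application.

```python
# Illustrative tap decoder: a lone tap encodes "0", two taps in quick
# succession encode "1"; taps further apart than group_gap_s start a new bit.

def decode_taps(tap_times_s, group_gap_s=0.3):
    """tap_times_s: sorted tap timestamps in seconds -> list of bits."""
    bits, group = [], []
    for t in tap_times_s:
        if group and t - group[-1] > group_gap_s:
            bits.append(0 if len(group) == 1 else 1)
            group = []
        group.append(t)
    if group:
        bits.append(0 if len(group) == 1 else 1)
    return bits

print(decode_taps([0.0, 1.0, 1.15, 2.5]))  # [0, 1, 0]
```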
- Unidirectional microphones.
- Zinc-air high capacity, model 675 button cell batteries serve as back-up to the rechargeable LiPO4 batteries.
- The frame of the eyeglasses may also hold a miniature wideband video camera 21 able to image objects in obscure locations. The video camera may be used to take a sequence of pictures of the mouth of the person with whom the eyeglasses wearer is having a conversation, while recording the short conversation. The frame-by-frame display, played concurrently with the related prerecorded phoneme, serves to train the brain. The camera may have a wide band sensitivity in order to detect infrared light and thus image people talking in the dark or in obscure places. -
FIG. 2 a shows the deterioration of hearing with advanced age for different modes of speech. While listening to normally articulated speech, a person's understanding of normal speech declines by some 10 percent by the age of 70 to 79; listening to fast talking people makes understanding twice as difficult, so that speech understanding declines by 20% by the age of 70 to 79. The figure also illustrates the steep decline of speech understanding with age when the interlocutor is in a crowd, when there is echo in the room, or when the interlocutor talks with interruptions. -
FIG. 2 b illustrates the process of noise elimination from speech. Speech is built out of phonemes, syllables and words, interspersed with pauses. The average English word duration is around 250 msec, while "pauses" between syllables are around 50 to 100 msec. Consequently, noise intensity and spectra can be measured during such "pauses" 31 and subtracted from the following speech segments 31 a. The beginning of a pause may be detected by a steep drop in intensity, and the end of the pause by a steep increase in intensity. These inflection points may be determined by following the sample amplitudes when sampling the speech, for example at 44 kHz. The beginning of a pause may be determined by finding the 10 samples whose average intensity is lower than that of the previous samples and approximately the same as that of the following 10 samples. The end of a pause is then the 10 samples whose average intensity is approximately the same as that of the previous samples, while the average intensity of the following samples starts growing. The "pause" time may then be defined as the middle 90% between the inflection points. The sound spectrum measured during the pause period may then be subtracted, in the frequency domain, from the following speech segments. - As surrounding noise doesn't change fast, the process of measuring noise at "pauses" is repeated only from time to time, and the last measured noise intensity and spectra are subtracted from the ongoing speech signals for as long as the volume of sound doesn't change much.
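- A minimal sketch of this pause-based spectral subtraction follows; the frame length, smoothing factor and spectral floor are illustrative choices (the application specifies only that the noise spectrum measured during pauses is subtracted, in the frequency domain, from the following speech). Pause frames are simply muted here.

```python
# Illustrative spectral subtraction: refresh the noise magnitude estimate
# during detected pauses, then subtract it from the following speech frames.

import numpy as np

def spectral_subtract(frames, is_pause, floor=0.05):
    """frames: equal-length sample blocks; is_pause: matching booleans."""
    noise_mag, out = None, []
    for frame, pause in zip(frames, is_pause):
        spec = np.fft.rfft(frame)
        if pause:
            mag = np.abs(spec)     # noise changes slowly, so a light update
            noise_mag = mag if noise_mag is None else 0.8 * noise_mag + 0.2 * mag
            out.append(np.zeros_like(frame))
            continue
        if noise_mag is not None:
            mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
            spec = mag * np.exp(1j * np.angle(spec))  # keep the original phase
        out.append(np.fft.irfft(spec, n=len(frame)))
    return out
```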
-
FIG. 3 a illustrates the processing of speech arriving from the front, from the interlocutor or from the TV. It illustrates the principles of determining the direction of sound by measuring the time delays of sound between the 4 unidirectional microphones FR, FL, BR and BL situated on the temples of the eyeglasses. The time delays of the sound waves arriving at the 4 microphones, Δt1, Δt2, Δt3, Δt4 and Δt5, being known in advance, the way to select the sounds arriving from the front direction out of all the sounds reaching the microphones is as follows: -
- decompose the signals in each sample into the frequency domain using (i = 1 … n) digital filters,
- after adding the proper delays, sum the five streams of signals in the frequency domain, then add
- all (i = 1 … n) frequency streams, and
- pass the result through a differential amplifier to select the cumulative speech signals above the random sound signals baseline.
- Adding the signal streams for each frequency, with the proper delays 31 stemming from their mutual distances in space, causes the amplitudes of speech signals coming from the front to overlap and reinforce each other, while sound signals coming from other directions are distributed at random on the time scale. - Adding all the frequency signals further reinforces the speech signals in comparison with random noise or sound of a different frequency content.
- Finally, passing the cumulative signal through a differential amplifier makes it possible to reject all the non-directional sounds and preserve the directional speech signal. This directional signal may then be processed by properly amplifying the frequency bands that are not well sensed by the hearing impaired. The processed signal may be delivered to the ear canal of the hearing impaired person through an air conduction (AC) speaker 30 a and/or through a bone conduction (BC) transducer 25 to his cranium, which transmits the vibrations to the cochlea. In case of using only the bone conduction speaker to deliver the audio signal through cranial vibrations, it is important to plug the ear canal with a sound reflecting cap, in order to minimize the surrounding sound that reaches the hearing impaired person's ear canal. -
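- A minimal delay-and-sum sketch of this frontal selection follows, assuming the per-microphone delays for a frontal source are already known, in whole samples, from the eyeglasses geometry; the final thresholding of the differential amplifier is omitted.

```python
# Illustrative delay-and-sum: after applying each microphone's frontal delay,
# speech from the front adds coherently while off-axis sound averages out.

import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """mic_signals: equal-length arrays (FR, FL, BR, BL); delays in samples."""
    n = len(mic_signals[0])
    acc = np.zeros(n)
    for sig, d in zip(mic_signals, delays_samples):
        acc[d:] += sig[:n - d] if d else sig  # align to the frontal wavefront
    return acc / len(mic_signals)
```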
FIG. 3 b illustrates a simplified diagram of the process of noise elimination before transmitting the speech signals to the (BC) transducer illustrated in FIG. 5, and the dual functionality of the (BC) transducer as both a microphone and a speaker. As illustrated in FIG. 3 a above, the outputs of the three microphones FR, FL and BR are properly delayed in the CODEC 29 and filtered by bi-quad filters. The (BC) transducer 25 operates half of the time (for example for 1 millisecond) as a microphone and the other half as a speaker. The outputs of the (BC) "microphone", which are already in the frequency domain, are properly amplified (or attenuated) to equalize their average level to that of the (AC) microphones. As the amplified outputs of the (BC) "microphone" are "lagging" in time in comparison to the speech components of the (AC) microphones, their signals are further delayed according to their distances from the (BC) microphone. - The properly delayed streams of the 3 microphones and the (BC) microphone are added and passed through differential amplifiers that subtract the uncorrelated frequencies and transmit the correlated ones through DACs 54 to the coils 53 of the (BC) transducer, thus causing the plates 51 glued to the coils to vibrate at the frequency of the current passing through each coil. -
FIG. 3 c illustrates a way to improve the understanding of fast talk by expanding the time it takes to pronounce a phoneme or syllable at the expense of the silence intervals between phonemes, syllables or words. This is done by enlarging the periods of speech 33 to 33+Δ while reducing the intervals between phonemes and syllables by the same amount, 34 to 34−Δ. This may be accomplished by expanding the duration of samples above a given (noise) level by a given amount and reducing the duration of the following samples by the same amount, by changing the sampling clock. -
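- A minimal sketch of this time re-allocation follows; resampling by linear interpolation stands in for "changing the sampling clock", and the 20% stretch is an illustrative choice.

```python
# Illustrative fast-speech correction: stretch each speech block and squeeze
# the following pause by the same fraction.

import numpy as np

def retime(block, factor):
    """Resample a block to len(block) * factor samples (linear interpolation)."""
    x_old = np.linspace(0.0, 1.0, len(block))
    x_new = np.linspace(0.0, 1.0, int(len(block) * factor))
    return np.interp(x_new, x_old, block)

def slow_down(speech_blocks, pause_blocks, delta=0.2):
    out = []
    for speech, pause in zip(speech_blocks, pause_blocks):
        out.append(retime(speech, 1.0 + delta))  # 33 -> 33 + delta
        out.append(retime(pause, 1.0 - delta))   # 34 -> 34 - delta
    return np.concatenate(out)
```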
FIG. 4 illustrates the sampling of the voice signal detected by a digital microphone, its decomposition into the frequency domain by filtering it with an IIR filter, the substitution of unheard or badly heard low and high frequencies with frequencies in the 1 to 4 kHz range, the adding of the amplitudes in the frequency domain, and the application of the resultant amplitudes to the (BC) transducer. -
FIG. 5 illustrates a mechanical vibration producing transducer with separate controls over each band of frequencies, suitable to transmit audio vibrations by bone conduction and to serve also as a sensor of vibrations of the skull. - The vibration producing transducer is composed of a multiplicity of solid elements 51 that each may vibrate at a different frequency 50. - The elements 51 are solid, non-conductive and non-magnetic, and may be of plastic or light ceramic. Miniature flat, spiral shaped electrical coils 53 that carry alternating currents supplied by digital-to-analog converters (DACs) 54 are glued to the back of the elements 51; adjacent coils are wound in opposite directions. - The array of coils is in turn glued to a thin elastomer diaphragm 53 a in close proximity above an array of fixed magnets 52 having alternating poles between adjacent magnets. The stationary magnets are glued to a non-magnetic back structure 52 a. Adjacent magnets have their north and south poles flipped in opposite directions, so that the coils facing them are either attracted or repelled depending on the direction of the current in the coil. - The transducer may generate planar vibrations by having its segmented diaphragm 53 a move forth and back, the different segments vibrating at different frequencies. - The original electrical signal 57 is first passed through an equalizer 55 that decomposes it into its frequency bands; each of the frequency band signals may be amplified separately 56 by a different amount and fed to the coils 53 independently and phase locked. - In such an architecture the parts of the diaphragm glued to the coils will vibrate at different frequencies and at different amplitudes, enabling better shaping of the spectra of the vibrations.
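- A minimal sketch of this per-band drive follows; Butterworth band-pass filters stand in for the equalizer 55, and the band edges and gains are illustrative choices, not values from this application.

```python
# Illustrative per-band drive: split the input into bands, apply a separate
# gain per band, and hand each band's waveform to one coil's DAC.

import numpy as np
from scipy.signal import butter, lfilter

def band_drive(signal, rate, bands, gains):
    """bands: (low_hz, high_hz) per coil; gains: linear gain per band."""
    drives = []
    for (lo, hi), g in zip(bands, gains):
        b, a = butter(2, [lo / (rate / 2), hi / (rate / 2)], btype="band")
        drives.append(g * lfilter(b, a, signal))  # one waveform per coil
    return drives

rate = 44100
t = np.arange(rate) / rate
sig = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 3000 * t)
coil_waveforms = band_drive(sig, rate, [(100, 1000), (1000, 8000)], [2.0, 0.5])
```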
- Such a transducer may generate single frequency vibrations for training the cochlea. The transducer does not have to be flat; the vibrating elements may be slightly curved, with the totality of the elements forming a curvature that better adjusts to the local curvature of the cranium, thus transmitting the vibrations with less pressure. -
- The elements and magnets of the transducer may be miniaturized; for example a 16 frequency array with 3×3 mm elements 58 (frequencies) may be as small as 1.5×1.5 cm, and a 64 element array may be approximately 1″ square. -
- The transducer may also be used as a sensitive vibration microphone 60, where the vibrations transmitted to a plate 51 cause the coil 53 on top of the magnet to vibrate, generating an induced current that can be amplified and digitized 60. -
FIG. 6 illustrates the establishment of a one-to-one correspondence between audio frequencies 63 transmitted to the auditory channel and color wavelengths 62 seen by the visual channel. A one-to-one correspondence is also established between the volume of the audio frequencies and the intensity or brilliance of the colors. As mentioned above, the ability of the brain to substitute harmonic frequencies in lieu of a missing fundamental frequency when trying to decipher a "word" has been observed. Consequently the low frequencies from 65 Hz to 932 Hz can be replaced by their harmonic substitutes from 1046 Hz to 1835 Hz, as illustrated in table 64. The "cycle" of audible tones is based on the harmonic relations modulo the octave; we can simply associate each tone with its "equivalent" in other octaves. - As to the frequencies above 4 kHz, which in the illustrated example the hearing impaired person does not hear well, we make the following observation: although the cochlea's response to vibrations is of the order of several hundred microseconds, the neurons' response latency is much larger, of the order of one to several milliseconds. When several inputs arrive within this latency period, the result is a summation. We therefore conjecture that high frequencies that are unheard or badly heard may be replaced by 2, 3 or 4 repetitions of a middle frequency equal to ½, ⅓ or ¼ of the original, as shown in table 65, at approximately 1 msec intervals; the middle frequencies, although well coded by the cochlea and delivered sequentially to the nervous system, will be integrated into one higher frequency, and the brain will get the sum of the sequence and interpret it as one vibration of a higher frequency. Thus, for example, an "s" that is pronounced at approximately 5200 Hz may be transmitted by a (BC) transducer to the cochlea as two sets of vibrations of 2600 Hz each, with a 1 msec interval between the sets; the cochlea will transduce them into 2 signals of 2600 Hz each, but the slow-to-react synapses will sum them and transmit to the auditory cortex a 5200 Hz signal.
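- A minimal sketch of synthesizing such a "time-squeezed" substitute follows: two 2600 Hz bursts starting 1 msec apart stand in for a 5200 Hz "s". The burst length and sample rate are illustrative choices.

```python
# Illustrative "time-squeezed" substitute: n overlapping bursts of freq/n,
# their onsets ~1 ms apart, conjectured to be summed by the slow synapses.

import numpy as np

def squeezed_substitute(f_orig, n=2, burst_ms=5.0, gap_ms=1.0, rate=44100):
    f_sub = f_orig / n
    burst = np.sin(2 * np.pi * f_sub * np.arange(int(rate * burst_ms / 1e3)) / rate)
    gap = int(rate * gap_ms / 1e3)
    out = np.zeros(gap * (n - 1) + len(burst))
    for i in range(n):
        out[i * gap: i * gap + len(burst)] += burst
    return out

wave = squeezed_substitute(5200.0)  # two 2600 Hz bursts, onsets 1 ms apart
```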
- To "convey" to the auditory cortex what "we mean" when the low or high frequencies are substituted by middle frequencies that are better coded by the cochlea, we can take advantage of the pathways between the auditory and visual cortices and train the brain to establish a one-to-one correspondence between optical wavelengths (colors) and sounds. -
- To help the brain decipher a word in the language context, every time a "substitution vibration" is delivered to the cochlea, we also project to the eye the wavelength (color) corresponding to the original frequency (vibration). For example, when the word "mother" is articulated, the "m" is usually articulated by the mouth as a 270 Hz sound wave. This sound may not be well deciphered by the cochlea, and we may prefer to substitute the harmonic frequency that is 4 times the original, 1080 Hz. However, the 1080 Hz vibration may also correspond to the consonant "p"; therefore, if we deliver visually a bluish signal of 470 nm, and the brain was previously trained to correlate the 470 nm light with the 270 Hz vibration, the brain will know that the 1080 Hz vibration is a harmonic of the fundamental frequency of 270 Hz.
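- One simple way to fix such a correspondence is to map the audible range logarithmically onto the visible band; with the endpoints chosen below, 270 Hz falls near the 470 nm bluish cue of the "mother" example and 8 kHz falls in the red. This particular formula is our illustrative assumption; the application leaves the exact pairing to be established during training.

```python
# Illustrative one-to-one frequency -> wavelength map (log-linear).

import math

def freq_to_wavelength_nm(f_hz, f_lo=65.0, f_hi=16744.0,
                          wl_lo=400.0, wl_hi=700.0):
    frac = (math.log(f_hz) - math.log(f_lo)) / (math.log(f_hi) - math.log(f_lo))
    return wl_lo + frac * (wl_hi - wl_lo)

print(round(freq_to_wavelength_nm(270.0)))   # ~477 nm, near the 470 nm cue
print(round(freq_to_wavelength_nm(8000.0)))  # ~660 nm, a red cue near 8 kHz
```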
- The training of the brain to recognize substitute frequencies replacing unheard or badly heard frequencies, strengthened by establishing a one-to-one correspondence with colors and aided by lip reading, may be carried out repetitively a large number of times, and the "learning" rate checked periodically. It is also possible to carry on the exercises under hypnosis and get the help of the "subconscious" mind to establish the one-to-one correspondences.
- The brain, however, performs an immense number of tasks, consciously and unconsciously, and some of them involve colors in various contexts. The task of linking colors to sounds therefore has to be defined in a specific context, and not as a general feature to be performed at all times. We wouldn't want an 8 kHz whistle to be heard every time a red color is perceived. Therefore the task of correlating colors with sound frequencies has to be limited to certain tasks, only in the context of "language" for example, or when the task is preceded by one "code" and terminated by a different code, much like tasks one is instructed to perform during hypnosis, but not before or after it. The brain can be trained to respond to several color codes, using BLUE for "0" and RED for "1" for example, and a multitude of color codes of several bits could be devised to direct the brain to perform certain tasks. It is also possible to train the brain to generate the sound frequency corresponding to a given wavelength only in the presence of a third signal, for example a tactile signal: "rubbing your right ear" may start and "rubbing your left ear" may end the session of correlating colors and sound frequencies. Another impetus to start correlating colors with sound frequencies may be irradiating both the visual and auditory cortices with low intensity ultrasound beams, energizing them to start cooperating.
- The visual color signal may be generated by 3 low power LEDs (Blue 66, Green 67, Red 68) in a proportion determined by the microcontroller on the temple of the eyeglasses. The colored light source 22 is positioned at the front end of the eyeglasses temple; the light is reflected by 2 mirrors toward the eyeglasses wearer's eye. -
-
FIG. 7 illustrates the delivery of low intensity focused ultrasound beams of specific frequency to the brain, using concentric circular rings of ultrasound exciters which may be piezoelectric crystals or capacitive MEMS. The concentric rings of exciters 74 form a partial hemisphere filled with a gel 74 c having good transmissivity, pressed against the cranium. The respective phases of the exciters are tuned so that all the beams reinforce each other at the common focal point 74 d. - Two phased arrays of circular rings may be tuned to focus on the same focal point; in such a case the two ultrasound beams will interfere at their common focal point and will form ultrasound beams having the sum and the difference of the frequencies of the two beams. This method may be used to excite the A1 area of the auditory cortex at the difference frequency 71, for example at 1 kHz if the two beams are tuned to 100 kHz and 101 kHz respectively; another example is to set the two frequencies at 300 kHz and 308 kHz in order to obtain a beam of 8 kHz at the focal point. - To reinforce the pathways between the auditory and visual cortices, an ultrasound beam with the same difference frequency may be delivered to the visual 72 and the auditory cortices simultaneously. Moreover, both cortices may be excited at the same vibration frequencies that are delivered by the bone conduction transducers to the cranium near the cochleas, and at the frequency of the related color signals delivered to the eyes. The combined intensity of the ultrasound beams at the focal point may be extremely low, of the order of 1 μW/mm3, and targeted to stimulate only a limited area of the cortices that process said frequencies. The ultrasound beam will stimulate the electrical activity in neurons, by activating both the sodium and calcium channels, and may reinforce synaptic transmissions of specific frequencies.
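- A minimal sketch of the two ingredients of this scheme follows: the per-ring time advances that make one array's wavefronts meet at the focus, and the choice of two carriers whose beat at the focus equals the desired audio frequency. The tissue sound speed (~1540 m/s) and the ring radii are assumptions for illustration.

```python
# Illustrative focusing and carrier selection for the twin phased arrays.

import math

C_TISSUE = 1540.0  # m/s, approximate speed of sound in soft tissue (assumed)

def ring_advances_s(ring_radii_m, focal_depth_m):
    """Fire each outer ring earlier by its extra path length over the axis."""
    return [(math.hypot(r, focal_depth_m) - focal_depth_m) / C_TISSUE
            for r in ring_radii_m]

def carrier_pair(f_carrier_hz, f_audio_hz):
    """Two carriers whose interference at the focus beats at f_audio_hz."""
    return f_carrier_hz, f_carrier_hz + f_audio_hz

print(carrier_pair(100e3, 1e3))             # (100000.0, 101000.0) -> 1 kHz beat
print(ring_advances_s([0.01, 0.02], 0.05))  # seconds of advance per ring
```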
- The circular phased array transducers may be held in place, pressed against the shaved skull, by one or more ratcheted elastic bands 78. -
FIG. 8 illustrates the smartphone managed eyeglasses, where a foldable display 76 on the visor of a baseball cap displays the images transmitted by the smartphone 80 of the hearing impaired person. In addition to the components embedded in the "hearing eyeglasses" illustrated in FIG. 2, the cap also carries the low intensity ultrasound emitters illustrated in FIG. 7 and LiPO4 batteries 77 b and 77 c that supply power to the display monitor and the ultrasound stimulators. - The eyeglasses may be of the multifocal type, in this case with the upper lens having the shorter focus, for better viewing of the display monitor 76. -
FIG. 8 a, the mouth andlips 82 of a person pronouncing the word “mother” 85 are shown, while the face above the mouth is obscured for helping the viewer to concentrate on the movements of the mouth and lips. In parallel with the movement of the mouth and lips saying the word “mother”, the syllables [m,a] 85 a and [th,ae,r] 85 b are displayed sequentially, in time synchronization with the video, with each of thephonemes 86 a, 86 b, 86 c, 86 d and 86 e are colored 83 according to the one-to-one correspondence scheme with the sound frequencies. The color code is also transmitted to the eye by theLED illuminator 22 to reinforce the link with the other stimulations. In parallel the corresponding vibration frequencies, 270 Hz, 700 Hz, 6000 Hz, 500 Hz and 800 Hz are delivered to the crane by the (BC) transducer(s) explained above in connection withFIG. 5 . To better transmit the vibrations, the (BC) transducer mounted on the inside of the “hearing eyeglasses” is pressed against the bone by stretching theband 78 that connects the two temples. - In parallel with the stimulations of the mouth movements, the color signaling and the vibrations transmitted to the crane, the proper locations in the visual and auditory corteces are stimulated by ultrasound waves also of the same frequencies, in order to enhance the pathways between the corteces.
- While the training of the plastic brain, in an endeavor to help the damaged auditory organ, makes use of 4 tools in parallel (cranial vibrations, ultrasound vibrations, lip reading and color linkage), the relative contribution of each of the tools is not yet clear. Some of the suggested tools and techniques will certainly evolve during the training attempts; some will prove more useful than others, and cross fertilizations will probably be discovered.
-
FIG. 9 illustrates the stimulation of the brain with electromagnetic radiation generated between resonant coils. Inductively coupled resonant coils can transmit magnetic energy with little loss. The figure illustrates two resonant magnetic energy delivery systems perpendicular to each other. The resonant coils have magnetic cores around which the current carrying wires are wound. The power sources 90 a, 90 b are coupled to the resonant sources 91 a, 91 b, which are coupled with the distant resonant load coils 92 a and 92 b. In the illustrated configuration there is no substantial load at the load coils; the only loads are in the near field, due to the impedance of the brain. The coupling factor between the resonant sources 91 a, 91 b and the resonant loads 92 a, 92 b may be varied by detuning the resonance between the coils. - In the illustrated figure the two resonant energy transfer systems are perpendicular to each other and their magnetic lines cross at a limited region 94, where the deposited energy is cumulative. - Consequently, several resonant magnetic energy transfer systems may be placed around the head at the proper angular positions, so that their intertwined magnetic lines maximize the energy delivered at this spot. The absolute magnetic energy delivered may be controlled by the phases between the resonant coils.
- This method of stimulating selected spots in the brain can be used to stimulate the visual and the auditory cortices simultaneously with the delivery of vibrations to the auditory cortex and of the corresponding "color" stimulations to the visual cortex.
- There are multiple ways to realize the invention explained above, to combine the differentiating features illustrated in the accompanying figures, and to devise new embodiments of the methods described, without departing from the scope and spirit of the present invention. Those skilled in the art will recognize that other embodiments and modifications are possible. While the invention has been described with respect to the preferred embodiments thereof, it will be understood by those skilled in the art that changes may be made in the above constructions and in the foregoing sequences of operation without departing substantially from the scope and spirit of the invention. All such changes, combinations, modifications and variations are intended to be included herein within the scope of the present invention, as defined by the claims. It is accordingly intended that all matter contained in the above description or shown in the accompanying figures be interpreted as illustrative rather than in a limiting sense.
Claims (12)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/076,237 US9036844B1 (en) | 2013-11-10 | 2013-11-10 | Hearing devices based on the plasticity of the brain |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/076,237 US9036844B1 (en) | 2013-11-10 | 2013-11-10 | Hearing devices based on the plasticity of the brain |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150133716A1 true US20150133716A1 (en) | 2015-05-14 |
US9036844B1 US9036844B1 (en) | 2015-05-19 |
Family
ID=53044338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/076,237 Expired - Fee Related US9036844B1 (en) | 2013-11-10 | 2013-11-10 | Hearing devices based on the plasticity of the brain |
Country Status (1)
Country | Link |
---|---|
US (1) | US9036844B1 (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017024359A1 (en) * | 2015-08-13 | 2017-02-16 | Prospect Silver Limited | A visual aid system |
US20170085976A1 (en) * | 2014-02-21 | 2017-03-23 | Earlens Corporation | Contact hearing system with wearable communication apparatus |
WO2017156276A1 (en) * | 2016-03-11 | 2017-09-14 | Mayo Foundation For Medical Education And Research | Cochlear stimulation system with surround sound and noise cancellation |
US10277971B2 (en) * | 2016-04-28 | 2019-04-30 | Roxilla Llc | Malleable earpiece for electronic devices |
US10535364B1 (en) * | 2016-09-08 | 2020-01-14 | Amazon Technologies, Inc. | Voice activity detection using air conduction and bone conduction microphones |
WO2020186104A1 (en) * | 2019-03-14 | 2020-09-17 | Peter Stevens | Haptic and visual communication system for the hearing impaired |
WO2021055992A1 (en) * | 2019-09-22 | 2021-03-25 | Third Wave Therapeutics, Inc. | Applying predetermined sound to provide therapy |
US11039980B2 (en) | 2019-09-22 | 2021-06-22 | Third Wave Therapeutics, Inc. | Applying predetermined sound to provide therapy |
US11273283B2 (en) | 2017-12-31 | 2022-03-15 | Neuroenhancement Lab, LLC | Method and apparatus for neuroenhancement to enhance emotional response |
DE102020132254A1 (en) | 2020-12-04 | 2022-06-09 | USound GmbH | Glasses with parametric audio unit |
US11364361B2 (en) | 2018-04-20 | 2022-06-21 | Neuroenhancement Lab, LLC | System and method for inducing sleep by transplanting mental states |
US11452839B2 (en) | 2018-09-14 | 2022-09-27 | Neuroenhancement Lab, LLC | System and method of improving sleep |
DE102021001747A1 (en) | 2021-04-01 | 2022-10-06 | Karl-Heinz Krempels | Electromechanical, electromagnetic and optical device for the transmission of mechanical, electromagnetic and optical signals and signal sequences and information encoded therein to humans |
US11623248B2 (en) * | 2019-01-18 | 2023-04-11 | University Of Southern California | Focused ultrasound transducer with electrically controllable focal length |
US11717686B2 (en) | 2017-12-04 | 2023-08-08 | Neuroenhancement Lab, LLC | Method and apparatus for neuroenhancement to facilitate learning and performance |
US11723579B2 (en) | 2017-09-19 | 2023-08-15 | Neuroenhancement Lab, LLC | Method and apparatus for neuroenhancement |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12052538B2 (en) * | 2021-09-16 | 2024-07-30 | Bitwave Pte Ltd. | Voice communication in hostile noisy environment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7760898B2 (en) * | 2003-10-09 | 2010-07-20 | Ip Venture, Inc. | Eyeglasses with hearing enhanced and other audio signal-generating capabilities |
US8553910B1 (en) * | 2011-11-17 | 2013-10-08 | Jianchun Dong | Wearable computing device with behind-ear bone-conduction speaker |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2955247B2 (en) | 1997-03-14 | 1999-10-04 | 日本放送協会 | Speech speed conversion method and apparatus |
EP1944753A3 (en) | 1997-04-30 | 2012-08-15 | Nippon Hoso Kyokai | Method and device for detecting voice sections, and speech velocity conversion method and device utilizing said method and device |
US20040196998A1 (en) | 2003-04-04 | 2004-10-07 | Paul Noble | Extra-ear hearing |
US7412378B2 (en) | 2004-04-01 | 2008-08-12 | International Business Machines Corporation | Method and system of dynamically adjusting a speech output rate to match a speech input rate |
US20070041600A1 (en) | 2005-08-22 | 2007-02-22 | Zachman James M | Electro-mechanical systems for enabling the hearing impaired and the visually impaired |
US20090103744A1 (en) | 2007-10-23 | 2009-04-23 | Gunnar Klinghult | Noise cancellation circuit for electronic device |
DE102009014770A1 (en) | 2009-03-25 | 2010-09-30 | Cochlear Ltd., Lane Cove | vibrator |
US10112029B2 (en) | 2009-06-19 | 2018-10-30 | Integrated Listening Systems, LLC | Bone conduction apparatus and multi-sensory brain integration method |
US20130090520A1 (en) | 2009-06-19 | 2013-04-11 | Randall Redfield | Bone Conduction Apparatus and Multi-sensory Brain Integration Method |
EP2393309B1 (en) | 2010-06-07 | 2019-10-09 | Oticon Medical A/S | Device and method for applying a vibration signal to a human skull bone |
SE536254C2 (en) | 2010-11-12 | 2013-07-23 | Osseofon Ab | Adjustment network for bone conduction vibrator |
US8565461B2 (en) | 2011-03-16 | 2013-10-22 | Cochlear Limited | Bone conduction device including a balanced electromagnetic actuator having radial and axial air gaps |
FR2974655B1 (en) | 2011-04-26 | 2013-12-20 | Parrot | MICROPHONE/HEADSET AUDIO COMBINATION COMPRISING MEANS FOR DENOISING A NEARBY SPEECH SIGNAL, IN PARTICULAR FOR A HANDS-FREE TELEPHONY SYSTEM. |
US20130007949A1 (en) | 2011-07-08 | 2013-01-10 | Witricity Corporation | Wireless energy transfer for person worn peripherals |
JP6148234B2 (en) | 2011-08-04 | 2017-06-14 | ワイトリシティ コーポレーションWitricity Corporation | Tunable wireless power architecture |
US9020168B2 (en) | 2011-08-30 | 2015-04-28 | Nokia Corporation | Apparatus and method for audio delivery with different sound conduction transducers |
CN102497612B (en) | 2011-12-23 | 2013-05-29 | 深圳市韶音科技有限公司 | Bone conduction speaker and compound vibrating device thereof |
- 2013-11-10 US US14/076,237 patent/US9036844B1/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7760898B2 (en) * | 2003-10-09 | 2010-07-20 | Ip Venture, Inc. | Eyeglasses with hearing enhanced and other audio signal-generating capabilities |
US8553910B1 (en) * | 2011-11-17 | 2013-10-08 | Jianchun Dong | Wearable computing device with behind-ear bone-conduction speaker |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11070902B2 (en) | 2014-02-21 | 2021-07-20 | Earlens Corporation | Contact hearing system with wearable communication apparatus |
US20170085976A1 (en) * | 2014-02-21 | 2017-03-23 | Earlens Corporation | Contact hearing system with wearable communication apparatus |
US10003877B2 (en) * | 2014-02-21 | 2018-06-19 | Earlens Corporation | Contact hearing system with wearable communication apparatus |
WO2017024359A1 (en) * | 2015-08-13 | 2017-02-16 | Prospect Silver Limited | A visual aid system |
WO2017156276A1 (en) * | 2016-03-11 | 2017-09-14 | Mayo Foundation For Medical Education And Research | Cochlear stimulation system with surround sound and noise cancellation |
CN108778410A (en) * | 2016-03-11 | 2018-11-09 | 梅约医学教育与研究基金会 | The cochlear stimulation system eliminated with surround sound and noise |
US10277971B2 (en) * | 2016-04-28 | 2019-04-30 | Roxilla Llc | Malleable earpiece for electronic devices |
US10535364B1 (en) * | 2016-09-08 | 2020-01-14 | Amazon Technologies, Inc. | Voice activity detection using air conduction and bone conduction microphones |
US11723579B2 (en) | 2017-09-19 | 2023-08-15 | Neuroenhancement Lab, LLC | Method and apparatus for neuroenhancement |
US11717686B2 (en) | 2017-12-04 | 2023-08-08 | Neuroenhancement Lab, LLC | Method and apparatus for neuroenhancement to facilitate learning and performance |
US11273283B2 (en) | 2017-12-31 | 2022-03-15 | Neuroenhancement Lab, LLC | Method and apparatus for neuroenhancement to enhance emotional response |
US11318277B2 (en) | 2017-12-31 | 2022-05-03 | Neuroenhancement Lab, LLC | Method and apparatus for neuroenhancement to enhance emotional response |
US11478603B2 (en) | 2017-12-31 | 2022-10-25 | Neuroenhancement Lab, LLC | Method and apparatus for neuroenhancement to enhance emotional response |
US11364361B2 (en) | 2018-04-20 | 2022-06-21 | Neuroenhancement Lab, LLC | System and method for inducing sleep by transplanting mental states |
US11452839B2 (en) | 2018-09-14 | 2022-09-27 | Neuroenhancement Lab, LLC | System and method of improving sleep |
US11623248B2 (en) * | 2019-01-18 | 2023-04-11 | University Of Southern California | Focused ultrasound transducer with electrically controllable focal length |
US11100814B2 (en) | 2019-03-14 | 2021-08-24 | Peter Stevens | Haptic and visual communication system for the hearing impaired |
WO2020186104A1 (en) * | 2019-03-14 | 2020-09-17 | Peter Stevens | Haptic and visual communication system for the hearing impaired |
WO2021055992A1 (en) * | 2019-09-22 | 2021-03-25 | Third Wave Therapeutics, Inc. | Applying predetermined sound to provide therapy |
US11039980B2 (en) | 2019-09-22 | 2021-06-22 | Third Wave Therapeutics, Inc. | Applying predetermined sound to provide therapy |
US11648175B2 (en) | 2019-09-22 | 2023-05-16 | Third Wave Therapeutics, Inc. | Applying predetermined sound to provide therapy |
DE102020132254A1 (en) | 2020-12-04 | 2022-06-09 | USound GmbH | Glasses with parametric audio unit |
US11668959B2 (en) | 2020-12-04 | 2023-06-06 | USound GmbH | Eyewear with parametric audio unit |
DE102021001747A1 (en) | 2021-04-01 | 2022-10-06 | Karl-Heinz Krempels | Electromechanical, electromagnetic and optical device for the transmission of mechanical, electromagnetic and optical signals and signal sequences and information encoded therein to humans |
Also Published As
Publication number | Publication date |
---|---|
US9036844B1 (en) | 2015-05-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9036844B1 (en) | Hearing devices based on the plasticity of the brain | |
Svirsky | Cochlear implants and electronic hearing | |
EP3876557B1 (en) | Hearing aid device for hands free communication | |
Wouters et al. | Sound coding in cochlear implants: From electric pulses to hearing | |
US8892232B2 (en) | Social network with enhanced audio communications for the hearing impaired | |
Henry et al. | Bone conduction: Anatomy, physiology, and communication | |
JP3760173B2 (en) | Microphone, communication interface system | |
US20210274296A1 (en) | Hearing aid system for estimating acoustic transfer functions | |
US10039672B2 (en) | Vibro-electro tactile ultrasound hearing device | |
KR20170071585A (en) | Systems, methods, and devices for intelligent speech recognition and processing | |
US20060177799A9 (en) | Methods and devices for treating non-stuttering speech-language disorders using delayed auditory feedback | |
CN103945315A (en) | Listening device comprising an interface to signal communication quality and/or wearer load to surroundings | |
Fletcher | Using haptic stimulation to enhance auditory perception in hearing-impaired listeners | |
US11057722B2 (en) | Hearing aid for people having asymmetric hearing loss | |
CN205584434U (en) | Smart headset | |
Fletcher | Can haptic stimulation enhance music perception in hearing-impaired listeners? | |
US20220023137A1 (en) | Device and method for improving perceptual ability through sound control | |
Perkins et al. | The EarLens system: new sound transduction methods | |
CN113226454A (en) | Prediction and identification techniques used with auditory prostheses | |
AU2017200809A1 (en) | Apparatus to assist speech training and/or hearing training after a cochlear implantation | |
Bizley | Audition | |
Ifukube | Sound-based assistive technology | |
Gfeller | Accommodating children who use cochlear implants in music therapy or educational settings | |
AU2014293427B2 (en) | Binaural cochlear implant processing | |
KR100778143B1 (en) | A Headphone with neck microphone using bone conduction vibration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
FEPP | Fee payment procedure |
Free format text: SURCHARGE FOR LATE PAYMENT, SMALL ENTITY (ORIGINAL EVENT CODE: M2554); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20230519 |