US20130343584A1 - Hearing assist device with external operational support - Google Patents
- Publication number
- US20130343584A1 (application US 13/623,435)
- Authority
- US
- United States
- Prior art keywords
- assist device
- hearing assist
- hearing
- user
- audio signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/80—Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61J—CONTAINERS SPECIALLY ADAPTED FOR MEDICAL OR PHARMACEUTICAL PURPOSES; DEVICES OR METHODS SPECIALLY ADAPTED FOR BRINGING PHARMACEUTICAL PRODUCTS INTO PARTICULAR PHYSICAL OR ADMINISTERING FORMS; DEVICES FOR ADMINISTERING FOOD OR MEDICINES ORALLY; BABY COMFORTERS; DEVICES FOR RECEIVING SPITTLE
- A61J1/00—Containers specially adapted for medical or pharmaceutical purposes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/55—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
- H04R25/554—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired using a wireless connection, e.g. between microphone and amplifier or using Tcoils
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/43—Signal processing in hearing aids to enhance the speech intelligibility
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/55—Communication between hearing aids and external devices via a network for data exchange
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2460/00—Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
- H04R2460/03—Aspects of the reduction of energy consumption in hearing devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2460/00—Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
- H04R2460/13—Hearing devices using bone conduction transducers
Definitions
- the subject matter described herein relates to hearing assist devices and devices and services that are capable of providing external operational support to such hearing assist devices.
- a hearing aid is an electro-acoustic device that typically fits in or behind the ear of a wearer, and amplifies and modulates sound for the wearer. Hearing aids are frequently worn by persons who are hearing impaired to improve their ability to hear sounds. A hearing aid may be worn in one or both ears of a user, depending on whether one or both of the user's ears need hearing assistance.
- Since most hearing aids rely on battery power to operate, it is critical that hearing aids are designed so as not to consume battery power too quickly. This places a constraint on the types of features and processes that can be built into a hearing aid. Furthermore, it is desirable that hearing aids be lightweight and small so that they are comfortable to wear and not readily discernible to others. This also operates as a constraint on both the size of the batteries that can be used to power the hearing aid as well as the types of functionality that can be integrated into a hearing aid.
- When hearing aid batteries are dead or a hearing aid is left at home, a wearer needing hearing aid support is at a loss. This often results in someone raising their speaking volume to help the wearer hear what they are saying.
- Because hearing problems often have a frequency profile, however, merely raising one's volume may not work.
- Similarly, raising the volume on a cell phone may not adequately provide understandable audio to someone with hearing impairment.
- FIG. 1 shows a communication system that includes a multi-sensor hearing assist device that communicates with a near field communication (NFC)-enabled communications device, according to an exemplary embodiment.
- FIGS. 2-4 show various configurations for associating a multi-sensor hearing assist device with an ear of a user, according to exemplary embodiments.
- FIG. 5 shows a multi-sensor hearing assist device that mounts over an ear of a user, according to an exemplary embodiment.
- FIG. 6 shows a multi-sensor hearing assist device that extends at least partially into the ear canal of a user, according to an exemplary embodiment.
- FIG. 7 shows a circuit block diagram of a multi-sensor hearing assist device that is configured to communicate with external devices according to multiple communication schemes, according to an exemplary embodiment.
- FIG. 8 shows a flowchart of a process for a hearing assist device that processes and transmits sensor data and receives a command from a second device, according to an exemplary embodiment.
- FIG. 9 shows a communication system that includes a multi-sensor hearing assist device that communicates with one or more communications devices and network-connected devices, according to an exemplary embodiment.
- FIG. 10 shows a flowchart of a process for wirelessly charging a battery of a hearing assist device, according to an exemplary embodiment.
- FIG. 11 shows a flowchart of a process for broadcasting sound that is generated based on sensor data, according to an exemplary embodiment.
- FIG. 12 shows a flowchart of a process for generating and broadcasting filtered sound from a hearing assist device, according to an exemplary embodiment.
- FIG. 13 shows a flowchart of a process for generating an information signal in a hearing assist device based on a voice of a user, and transmitting the information signal to a second device, according to an exemplary embodiment.
- FIG. 14 shows a flowchart of a process for generating voice based at least on sensor data to be broadcast by a speaker of a hearing assist device to a user, according to an exemplary embodiment.
- FIG. 15 is a block diagram of an example system that enables external operational support to be provided to a hearing assist device in accordance with an embodiment.
- FIG. 16 is a block diagram of a system comprising a hearing assist device and a cloud/service/phone/portable device that may provide external operational support thereto.
- FIG. 17 is a block diagram of an enhanced audio processing module that may be implemented by a hearing assist device to provide enhanced spatial signaling in accordance with an embodiment.
- FIG. 18 depicts a flowchart of a method for providing audio playback support to a hearing assist device in accordance with an embodiment.
- FIG. 19 is a block diagram of a noise suppression system that may be utilized by a hearing assist device or a device/service communicatively connected thereto in accordance with an embodiment.
- FIGS. 20-23 depict flowcharts of methods for providing external operational support to a hearing assist device worn by a user in accordance with various embodiments.
- FIG. 24 is a block diagram of an audio processing module that may be implemented in a hearing assist device in accordance with an embodiment.
- references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- Hearing assist devices such as hearing aids, headsets, and headphones, are typically worn in contact with the user's ear, and in some cases extend into the user's ear canal.
- a hearing assist device is typically positioned in close proximity to various organs and physical features of a wearer, such as the inner ear structure (for example, the ear canal, ear drum, ossicles, Eustachian tube, cochlea, auditory nerve, or the like), skin, brain, veins and arteries, and further physical features of the wearer.
- a hearing assist device may be configured to detect various characteristics of a user's health. Furthermore, the detected characteristics may be used to treat health-related issues of the wearer, and to perform further health-related functions. As such, health monitoring technology may be incorporated into a hearing assist device to monitor the health of a wearer.
- Examples of health monitoring technology that may be incorporated in a hearing assist device include health sensors that determine (for example, sense/detect/measure/collect, or the like) various physical characteristics of the user, such as blood pressure, heart rate, temperature, humidity, blood oxygen level, skin galvanometric levels, brain wave information, arrhythmia onset detection, skin chemistry changes, falling down impacts, long periods of activity, or the like.
- Sensor information resulting from the monitoring may be analyzed within the hearing assist device, or may be transmitted from the hearing assist device and analyzed at a remote location.
- the sensor information may be analyzed at a local computer, in a smart phone or other mobile device, or at a remote location, such as at a cloud-based server.
- instructions and/or other information may be communicated back to the wearer.
- Such information may be provided to the wearer by a display screen (for example, a desktop computer display, a smart phone display, a tablet computer display, a medical equipment display, or the like), by the hearing assist device itself (for example, by voice, beeps, or the like), or may be provided to the wearer in another manner.
- Medical personnel and/or emergency response personnel may be alerted when particular problems with the wearer are detected by the hearing assist device.
- the medical personnel may evaluate information received from the hearing assist device, and provide information back to the hearing assist device/wearer.
- the hearing assist device may provide the wearer with reminders, alarms, instructions, etc.
- the hearing assist device may be configured with speech/voice recognition capability. For instance, the wearer may provide commands, such as by voice, to the hearing assist device.
- the hearing assist device may be configured to perform various audio processing functions to suppress background noise and/or other sounds, as well as amplifying other sounds, and may be configured to modify audio according to a particular frequency response of the hearing of the wearer.
- the hearing assist device may be configured to detect vibrations (for example, jaw movement of the wearer during talking), and may use the detected vibrations to aid in improving speech/voice recognition.
- FIG. 1 shows a communication system 100 that includes a multi-sensor hearing assist device 102 that communicates with a near field communication (NFC)-enabled communications device 104 , according to an exemplary embodiment.
- Hearing assist device 102 may be worn in association with the ear of a user, and may be configured to communicate with other devices, such as communications device 104 .
- hearing assist device 102 includes a plurality of sensors 106 a and 106 b , processing logic 108 , an NFC transceiver 110 , storage 112 , and a rechargeable battery 114 . These features of hearing assist device 102 are described as follows.
- Sensors 106 a and 106 b are medical sensors that each sense a characteristic of the user and generate a corresponding sensor output signal. Although two sensors 106 a and 106 b are shown in hearing assist device 102 in FIG. 1 , any number of sensors may be included in hearing assist device 102 , including three sensors, four sensors, five sensors, etc. (e.g., tens of sensors, hundreds of sensors, etc.).
- Examples of sensors for sensors 106 a and 106 b include a blood pressure sensor, a heart rate sensor, a temperature sensor, a humidity sensor, a blood oxygen level sensor, a skin galvanometric level sensor, a brain wave information sensor, an arrhythmia onset detection sensor (for example, a chest strap with multiple sensor pads), a skin chemistry sensor, a motion sensor (e.g., to detect falling down impacts, long periods of activity, etc.), an air pressure sensor, etc.
- Further examples of sensors suitable for sensors 106 a and 106 b are described elsewhere herein.
- Processing logic 108 may be implemented in hardware (e.g., one or more processors, electrical circuits, etc.), or any combination of hardware with software and/or firmware. Processing logic 108 may receive sensor information from sensors 106 a , 106 b , etc., and may process the sensor information to generate processed sensor data. Processing logic 108 may execute one or more programs that define various operational characteristics, such as: (i) a sequence or order of retrieving sensor information from sensors of hearing assist device 102 , (ii) sensor configurations and reconfigurations (via a preliminary setup or via adaptations over the course of time), (iii) routines by which particular sensor data is at least pre-processed, and (iv) one or more functions/actions to be performed based on particular sensor data values, etc.
- processing logic 108 may store and/or access sensor data in storage 112 , processed or unprocessed. Furthermore, processing logic 108 may access one or more programs stored in storage 112 for execution.
- Storage 112 may include one or more types of storage, including memory (e.g., random access memory (RAM), read only memory (ROM), etc.) that is volatile or non-volatile.
- NFC transceiver 110 is configured to wirelessly communicate with a second device (for example, a local or remote supporting device), such as NFC-enabled communications device 104 according to NFC techniques.
- NFC uses magnetic induction between two loop antennas (e.g., coils, microstrip antennas, or the like) located within each other's near field, effectively forming an air-core transformer.
- NFC communications occur over relatively short ranges (e.g., within a few centimeters), and are conducted at radio frequencies.
- NFC communications may be performed by NFC transceiver 110 at a 13.56 MHz frequency, with data transfers of up to 424 kilobits per second.
- NFC transceiver 110 may be configured to perform NFC communications at other frequencies and data transfer rates. Examples of standards according to which NFC transceiver 110 may be configured to conduct NFC communications include ISO/IEC 18092 and those defined by the NFC Forum, which was founded in 2004 by Nokia, Philips and Sony.
- NFC-enabled communications device 104 may be configured with an NFC transceiver to perform NFC communications.
- NFC-enabled communications device 104 may be any type of device that may be enabled with NFC capability, such as a docking station, a desktop computer (e.g., a personal computer, etc.), a mobile computing device (e.g., a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer (e.g., an Apple iPadTM), a netbook, etc.), a mobile phone (e.g., a cell phone, a smart phone, etc.), a medical appliance, etc.
- NFC-enabled communications device 104 may be network-connected to enable hearing assist device 102 to communicate with entities over the network (e.g., cloud computers or servers, web services, etc.).
- NFC transceiver 110 enables sensor data (processed or unprocessed) to be transmitted by processing logic 108 from hearing assist device 102 to NFC-enabled communications device 104 . In this manner, the sensor data may be reported, processed, and/or analyzed externally to hearing assist device 102 . Furthermore, NFC transceiver 110 enables processing logic 108 at hearing assist device 102 to receive data and/or instructions/commands from NFC-enabled communications device 104 in response to the transmitted sensor data.
- NFC transceiver 110 also enables processing logic 108 at hearing assist device 102 to receive programs (e.g., program code), including new programs, program updates, applications, “apps”, and/or other programs from NFC-enabled communications device 104 that can be executed by processing logic 108 to change/update the functionality of hearing assist device 102 .
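- The patent does not define a wire format for this sensor-report/command exchange; the following Python sketch is one hypothetical way the framing could look (the JSON fields, the device identifier, and the nfc_transceiver object are illustrative assumptions, not part of the disclosure):

```python
import json
import time

# Hypothetical framing for the sensor-report/command exchange described
# above; field names are illustrative only.
def build_sensor_report(device_id, readings):
    """Pack processed sensor data for transmission over the NFC link."""
    return json.dumps({
        "device": device_id,
        "timestamp": time.time(),
        "readings": readings,            # e.g., {"heart_rate_bpm": 72}
    }).encode("utf-8")

def handle_response(payload):
    """Interpret data, commands, or program updates returned in response."""
    msg = json.loads(payload.decode("utf-8"))
    if msg.get("type") == "command":
        return ("command", msg.get("name"), msg.get("args", {}))
    if msg.get("type") == "program_update":
        return ("program_update", msg.get("code_ref"), None)
    return ("data", msg, None)

report = build_sensor_report("hearing-aid-01", {"heart_rate_bpm": 72})
# nfc_transceiver.send(report)   # stand-in for the actual NFC stack
```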
- Rechargeable battery 114 is a rechargeable battery that includes one or more electrochemical cells that store charge that may be used to power components of hearing assist device 102 , including one or more of sensor 106 a , 106 b , etc., processing logic 108 , NFC transceiver 110 , and storage 112 .
- Rechargeable battery 114 may be any suitable rechargeable battery type, including lead-acid, nickel cadmium (NiCd), nickel metal hydride (NiMH), lithium ion (Li-ion), and lithium ion polymer (Li-ion polymer). Charging of the batteries may be through a typical tethered recharger or via NFC power delivery.
- Although NFC communications are shown, alternative communication approaches can be employed. Such alternatives may include wireless power transfer schemes as well.
- Hearing assist device 102 may be configured in any manner to be associated with the ear of a user.
- FIGS. 2-4 show various configurations for associating a hearing assist device with an ear of a user, according to exemplary embodiments.
- hearing assist device 102 may be a hearing aid type that fits and is inserted partially or fully in an ear 202 of a user.
- hearing assist device 102 includes sensors 106 a - 106 n that contact the user. Example forms of hearing assist device 102 of FIG. 2 include ear buds, “receiver in the canal” hearing aids, “in the ear” (ITE) hearing aids, “invisible in canal” (IIC) hearing aids, “completely in canal” (CIC) hearing aids, etc.
- cochlear implant configurations may also be used.
- hearing assist device 102 may be a hearing aid type that mounts on top of, or behind ear 202 of the user. As shown in FIG. 3 , hearing assist device 102 includes sensors 106 a - 106 n that contact the user. Example forms of hearing assist device 102 of FIG. 3 include “behind the ear” (BTE) hearing aids, “open fit” or “over the ear” (OTE) hearing aids, eyeglasses hearing aids (e.g., that contain hearing aid functionality in or on the glasses arms), etc.
- hearing assist device 102 may be a headset or headphones that mount on the head of the user and include speakers that are held close to the user's ears. As shown in FIG. 4 , hearing assist device 102 includes sensors 106 a - 106 n that contact the user. In the embodiment of FIG. 4 , sensors 106 a - 106 n may be spaced further apart in the headphones, including being dispersed in the ear pad(s) and/or along the headband that connects together the ear pads (when a headband is present).
- hearing assist device 102 may be configured in further forms, including combinations of the forms shown in FIGS. 2-4 , and is not intended to be limited to the embodiments illustrated in FIGS. 2-4 .
- hearing assist device 102 may be a cochlear implant-type hearing aid, or other type of hearing assist device.
- the following section describes some example forms of hearing assist device 102 with associated sensor configurations.
- hearing assist device 102 may be configured in various forms, and may include any number and type of sensors.
- FIG. 5 shows a hearing assist device 500 that is an example of hearing assist device 102 according to an exemplary embodiment.
- Hearing assist device 500 is configured to mount over an ear of a user, and has a portion that is at least partially inserted into the ear.
- a user may wear a single hearing assist device 500 on one ear, or may simultaneously wear first and second hearing assist devices 500 on the user's right and left ears, respectively.
- hearing assist device 500 includes a case or housing 502 that includes a first portion 504 , a second portion 506 , and a third portion 508 .
- First portion 504 is shaped to be positioned behind/over the ear of a user.
- first portion 504 has a crescent shape, and may optionally be molded in the shape of a user's outer ear (e.g., by taking an impression of the outer ear, etc.).
- Second portion 506 extends perpendicularly from a side of an end of first portion 504 .
- Second portion 506 is shaped to be inserted at least partially into the ear canal of the user.
- Third portion 508 extends from second portion 506 , and may be referred to as an earmold shaped to conform to the user's ear shape, to better adhere hearing assist device 500 to the user's ear.
- hearing assist device 500 further includes a speaker 512 , a forward IR/UV (ultraviolet) communication transceiver 520 , a BTLE (BLUETOOTH low energy) antenna 522 , at least one microphone 524 , a telecoil 526 , a tethered sensor port 528 , a skin communication conductor 534 , a volume controller 540 , and a communication and power delivery coil 542 .
- hearing assist device 500 also includes a plurality of medical sensors and related features, including at least one pH sensor 510 , an IR (infrared) or sonic distance sensor 514 , an inner ear temperature sensor 516 , a position/motion sensor 518 , a WPT (wireless power transfer)/NFC coil 530 , a switch 532 , a glucose spectroscopy sensor 536 , a heart rate sensor 538 , and a subcutaneous sensor 544 .
- hearing assist device 500 may include one or more of these further features and/or alternative features. The features of hearing assist device 500 are described as follows.
- As shown in FIG. 5 , speaker 512 , IR or sonic distance sensor 514 , and inner ear temperature sensor 516 are located on a circular surface of second portion 506 of hearing assist device 500 that faces into the ear of the user. Position/motion sensor 518 and pH sensor 510 are located on a perimeter surface of second portion 506 around the circular surface that contacts the ear canal of the user. In alternative embodiments, one or more of these features may be located in/on different locations of hearing assist device 500 .
- pH sensor 510 is a sensor that may be present to measure a pH of skin of the user's inner ear. The measured pH value may be used to determine a medical problem of the user, such as an onset of stroke. pH sensor 510 may include one or more metallic plates. Upon receiving power (e.g., from rechargeable battery 114 of FIG. 1 ), pH sensor 510 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured pH value.
- Speaker 512 is a speaker of hearing assist device 500 that broadcasts environmental sound received by microphone(s) 524 , subsequently amplified and/or filtered by processing logic of hearing assist device 500 , into the ear of the user to assist the user in hearing the environmental sound. Furthermore, speaker 512 may broadcast additional sounds into the ear of the user for the user to hear, including alerts (e.g., tones, beeping sounds), voice, and/or further sounds that may be generated by or received by processing logic of hearing assist device 500 , and/or may be stored in hearing assist device 500 .
- IR or sonic distance sensor 514 is a sensor that may be present to sense a displacement distance. Upon receiving power, IR or sonic distance sensor 514 may generate an IR light pulse, a sonic (e.g., ultrasonic) pulse, or other light or sound pulse, that may be reflected in the ear of the user, and the reflection may be received by IR or sonic distance sensor 514 . A time of reflection may be compared for a series of pulses to determine a displacement distance within the ear of user. IR or sonic distance sensor 514 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured displacement distance.
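- As a worked illustration of the pulse-timing idea above, the following sketch converts round-trip reflection times into a displacement estimate; the example times and the use of the speed of sound (for a sonic pulse) are illustrative assumptions:

```python
SPEED_OF_SOUND_M_S = 343.0   # in air at ~20 C; assumed for a sonic pulse

def distance_from_round_trip(t_seconds, speed=SPEED_OF_SOUND_M_S):
    """Round-trip reflection time -> one-way distance."""
    return speed * t_seconds / 2.0

# Comparing reflection times across a series of pulses yields displacement:
t0, t1 = 180e-6, 174e-6                 # example round-trip times (s)
d0, d1 = distance_from_round_trip(t0), distance_from_round_trip(t1)
print(f"displacement: {(d1 - d0) * 1000:.2f} mm")   # ~ -1.03 mm
```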
- hearing assist device 500 can perform the following when a user inserts and turns on hearing assist device 500 : (i) automatically adjust the volume to fall within a target range; and (ii) prevent excess volume associated with unexpected loud sound events. It is noted that the amount of volume adjustment that may be applied can vary by frequency. It is also noted that the excess volume associated with unexpected loud sound events may be further prevented by using a hearing assist device that has a relatively tight fit, thereby allowing the hearing assist device to act as an ear plug.
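- A minimal sketch of such frequency-dependent volume limiting follows: per-band output levels are clamped into a target range so that unexpected loud events cannot produce excess volume. The band edges and dB limits are illustrative assumptions:

```python
# (floor, ceiling) output limits per band; values are illustrative.
BANDS_HZ = [(20, 500), (500, 2000), (2000, 8000)]
TARGET_DB = [(40.0, 85.0), (40.0, 80.0), (35.0, 75.0)]

def limit_band_levels(levels_db):
    """Clamp measured per-band output levels (dB) into their target ranges."""
    return [min(max(level, lo), hi)
            for level, (lo, hi) in zip(levels_db, TARGET_DB)]

print(limit_band_levels([30.0, 95.0, 60.0]))   # -> [40.0, 80.0, 60.0]
```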
- Hearing efficiency and performance data over the spectrum of normal audible frequencies can be gathered by delivering each frequency (or frequency range) at an output volume level, measuring eardrum deflection characteristics, and delivering audible test questions to the user via hearing assist device 500 .
- This can be accomplished solely by hearing assist device 500 or with assistance from a smartphone or other external device or service.
- a user may respond to an audio (or textual) prompt “Can you hear this?” with a “yes” or “no” response.
- the response is received by microphone(s) 524 (or via touch input for example) and processed internally or on an assisting external device to identify the response.
- the amplitude of the audio output can be adjusted to determine a given user's hearing threshold for each frequency (or frequency range). From this hearing efficiency and performance data, input frequency equalization can be performed by hearing assist device 500 so as to deliver to the user audio signals that will be perceived in much the same way as someone with no hearing impairment. In addition, such data can be delivered to the assisting external device (e.g., to a smartphone) for use by such device in producing audio output for the user.
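- A rough sketch of this calibration loop follows, assuming hypothetical play_tone() and get_user_response() hooks for tone delivery and for capturing the "yes"/"no" reply; the starting level, step size, and 20 dB reference threshold are illustrative assumptions:

```python
REFERENCE_THRESHOLD_DB = 20.0            # assumed normal-hearing threshold
TEST_FREQS_HZ = [250, 500, 1000, 2000, 4000, 8000]

def measure_threshold(freq_hz, play_tone, get_user_response,
                      start_db=80.0, step_db=5.0, floor_db=0.0):
    """Lower the tone level until the user stops hearing it."""
    level, last_heard = start_db, None
    while level >= floor_db:
        play_tone(freq_hz, level)        # "Can you hear this?"
        if get_user_response():          # "yes" via mic or touch input
            last_heard = level
            level -= step_db
        else:
            break
    return last_heard if last_heard is not None else start_db

def equalization_gains(thresholds_db):
    """Per-frequency gain = user's threshold minus the reference."""
    return {f: max(0.0, t - REFERENCE_THRESHOLD_DB)
            for f, t in thresholds_db.items()}
```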
- the assisting device can deliver an adjusted audio output tailored for the user if (i) the user is not wearing hearing assist device 500 , (ii) the battery power of hearing assist device 500 is depleted, (iii) hearing assist device 500 is powered down, or (iv) hearing assist device 500 is operating in a lower power mode.
- the supporting device can deliver the audio signal: (a) in an audible form via a speaker, generated with the intent of directly reaching the eardrum; (b) in an audible form intended for receipt and amplification control by hearing assist device 500 without further need for user specific audio equalization; and (c) in a non-audible form (e.g., electromagnetic transmission) for receipt and conversion to an audible form by hearing assist device 500 , again without further equalization.
- a wearer may further tweak their recommended equalization via slide bars and such in a manner similar to adjusting equalization for other conventional audio equipment. Such tweaking can be carried out via the supporting device user interface.
- a plurality of equalization settings can be supported, with each being associated with a particular mode of operation of hearing assist device 500 . That is, conversation in a quiet room with one other person might receive one equalization profile while a concert hall might receive another. Modes can be selected in many automatic or commanded ways via either or both hearing assist device 500 and the external supporting device. Automatic selection can be performed via analysis and classification of captured audio, as sketched below. Certain classifications may trigger selection of a particular mode. Commands may be delivered via any user input interface such as voice input (voice recognized commands), tactile input commands, etc.
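- As a rough sketch of such automatic selection (the level/variability statistics, thresholds, and profile names below are illustrative assumptions, not the patent's classifier):

```python
import statistics

# Hypothetical per-mode equalization profiles (dB offsets per band).
EQ_PROFILES = {
    "quiet_conversation": {"low": +2.0, "mid": +6.0, "high": +4.0},
    "concert_hall":       {"low": -3.0, "mid": +2.0, "high": +5.0},
    "default":            {"low":  0.0, "mid":  0.0, "high":  0.0},
}

def classify(frame_levels_db):
    """Pick a mode from simple loudness/variability statistics."""
    avg = statistics.mean(frame_levels_db)
    spread = statistics.pstdev(frame_levels_db)
    if avg < 55.0 and spread < 6.0:
        return "quiet_conversation"
    if avg >= 75.0:
        return "concert_hall"
    return "default"

mode = classify([48.0, 50.5, 47.2, 52.1])
print(mode, EQ_PROFILES[mode])           # quiet_conversation {...}
```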
- Audio modes may also comprise alternate or additional audio processing techniques. For example, in one mode, to enhance audio perspective and directionality, delays might be selectively introduced (or increased in a stereoscopic manner) to enhance a wearer's ability to discern the location of an audio source; a sketch of this follows. Sensor data may support automatic mode selection in such situations. Detecting walking impacts and outdoor GPS (Global Positioning System) location might automatically trigger such an enhanced perspective mode. A medical condition might trigger another mode which attenuates environmental audio while delivering synthesized voice commands to the wearer. In another exemplary mode, both echoes and delays might be introduced to simulate a theater environment. For example, when audio is being sourced by a television channel broadcast of a movie, the theater environment mode might be selected. Such selection may be in response to commands from a set top box, television or media player, or by identifying one of the same as the audio source.
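- A minimal sketch of the stereoscopic-delay idea: delaying one ear's signal by a few hundred microseconds shifts the perceived source toward the other side. The sample rate and delay value are illustrative assumptions:

```python
import numpy as np

FS = 16_000   # Hz; illustrative sample rate

def apply_interaural_delay(mono, delay_s=300e-6):
    """Return (left, right) with the right channel delayed by delay_s."""
    n = int(round(delay_s * FS))                       # ~5 samples here
    right = np.concatenate([np.zeros(n), mono])[:len(mono)]
    return mono, right

tone = np.sin(2 * np.pi * 440 * np.arange(FS) / FS)    # 1 s test tone
left, right = apply_interaural_delay(tone)             # source seems left-of-center
```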
- Audio processing may be performed in one or both of hearing assist device 500 and an external supporting device.
- the external supporting device may receive the audio for processing: (i) directly via built in microphones; (ii) from storage; or (iii) via yet another external device.
- the source audio may be captured by hearing assist device 500 itself and delivered via a wired or wireless pathway to the external supporting device for processing before delivery of either the processed audio signals or substitute audio back to hearing assist device 500 for delivery to the wearer.
- sensor data may be captured in one or both of hearing assist device 500 and an external supporting device.
- Sensor data captured by hearing assist device 500 may likewise be delivered via such or other wired or wireless pathways to the external supporting device for (further) processing.
- the external supporting device may then respond to the sensor data received and processed by delivering audio content and/or hearing aid commands back to hearing assist device 500 .
- Such commands may be to reconfigure some aspect of hearing assist device 500 or manage communication or power delivery.
- Such audio content may be instructional, comprise queries, or consist of commands to be delivered to the wearer via the ear drums.
- Sensor data may be stored and displayed in some form locally on the external supporting device along with similar audio, graphical or textual content, commands or queries.
- Sensors within one or both hearing assist device 500 and an external supporting device may be medical sensors or environmental sensors (e.g., latitude/longitude, velocity, temperature, wearer's physical orientation, acceleration, elevation, tilt, humidity, etc.).
- hearing assist device 500 may also be configured with an imager that may be located near transceiver 520 .
- the imager can then be used to capture images or video that may be relayed to one or more external supporting devices for real time display, storage or processing. For example, upon detecting a medical situation and no response to audible content queries delivered via hearing assist device 500 , the imager can be commanded (internal or external command origin) to capture an image or a video sequence.
- Such imager output can be delivered to medical staff via a user's supporting smartphone so that a determination can be made as to the user's condition or the position/location of hearing assist device 500 .
- Inner ear temperature sensor 516 is a sensor that may be present to measure a temperature of the user.
- inner ear temperature sensor 516 may include a lens used to measure inner ear temperature.
- IR light emitted by an IR light emitter may be reflected from the user's skin, such as the ear canal or ear drum, and received by a single temperature sensor element, a one-dimensional array of temperature sensor elements, a two-dimensional array of temperature sensor elements, or other configuration of temperature sensor elements.
- Inner ear temperature sensor 516 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured inner ear temperature.
- Such a configuration may also be used to determine a distance to the user's ear drum.
- the IR light emitter and sensor may be used to determine a distance to the user's ear drum from hearing assist device 500 , which may be used by processing logic to automatically control a volume of sound emitted from hearing assist device 500 , as well as for other purposes.
- the IR light emitter/sensor may also be used as an imager that captures an image of the inside of the user's ear. This could be used to identify characteristics of vein structures inside the user's ear, for example.
- the IR light emitter/sensor could also be used to detect the user's heartbeat, as well as to perform further functions.
- Position/motion sensor 518 includes one or more sensors that may be present to measure time of day, location, acceleration, orientation, vibrations, and/or other movement related characteristics of the user.
- position/motion sensor 518 may include one or more of a GPS (global positioning system) receiver (to measure user position), an accelerometer (to measure acceleration of the user), a gyroscope (to measure orientation of the head of the user), a magnetometer (to determine a direction the user is facing), a vibration sensor (for example, a micro-electromechanical system (MEMS) vibration sensor), or the like.
- Position/motion sensor 518 may be used for various benefits, including determining whether a user has fallen (e.g., based on measured position, acceleration, orientation, etc.), for local VoD, and many more benefits. Position/motion sensor 518 may generate a sensor output signal (e.g., an electrical signal) that indicates one or more of the measured time of day, location, acceleration, orientation, vibration, etc.
- the sensor information indicated by position/motion sensor 518 and/or other sensors may be used for various purposes. For instance, position/motion information may be used to determine that the user has fallen down/collapsed.
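- A minimal fall-detection sketch over such position/motion data follows: an impact spike followed by a sustained near-1 g, low-motion period while the body is near horizontal suggests a fall. All thresholds are illustrative assumptions, not values from the patent:

```python
def detect_fall(accel_g, tilt_deg, impact_g=2.5, still_tol_g=0.15,
                horizontal_deg=60.0, still_samples=20):
    """accel_g: acceleration magnitudes (g); tilt_deg: tilt from vertical."""
    for i, a in enumerate(accel_g):
        if a < impact_g:
            continue                                 # no impact spike yet
        window = accel_g[i + 1:i + 1 + still_samples]
        still = (len(window) == still_samples and
                 all(abs(x - 1.0) < still_tol_g for x in window))
        lying = tilt_deg[min(i + still_samples, len(tilt_deg) - 1)] > horizontal_deg
        if still and lying:
            return i                                 # index of suspected impact
    return None

# e.g., detect_fall([1.0]*5 + [3.2] + [1.02]*25, [10]*6 + [80]*25) -> 5
```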
- In such a case, voice and/or video assist (e.g., by a handheld device in communication with hearing assist device 500 ) may be used to check on the user.
- Such sensor data and feedback information, if warranted, can be automatically forwarded to medical staff, ambulance services, and/or family members, for example, as described elsewhere herein.
- the analysis of the data that triggered the forwarding process may be performed in whole or in part on one (or both) of hearing assist device 500 , and/or on the assisting local device (e.g., a smart phone, tablet computer, set top box, TV, etc., in communication with a hearing assist device 500 ) and/or remote computing systems (e.g., at medical staff offices or as might be available through a cloud or portal service).
- forward IR/UV (ultraviolet) communication transceiver 520 , BTLE antenna 522 , microphone(s) 524 , telecoil 526 , tethered sensor port 528 , WPT/NFC coil 530 , switch 532 , skin communication conductor 534 , glucose spectroscopy sensor 536 , a heart rate sensor 538 , volume controller 540 , and communication and power delivery coil 542 are located at different locations in/on the first portion 504 of hearing assist device 500 . In alternative embodiments, one or more of these features may be located in/on different locations of hearing assist device 500 .
- Forward IR/UV communication transceiver 520 is a communication mechanism that may be present to enable communications with another device, such as a smart phone, computer, etc.
- Forward IR/UV communication transceiver 520 may receive information/data from processing logic of hearing assist device 500 to be transmitted to the other device in the form of modulated light (e.g., IR light, UV light, etc.), and may receive information/data in the form of modulated light from the other device to be provided to the processing logic of hearing assist device 500 .
- Forward IR/UV communication transceiver 520 may enable low power communications for hearing assist device 500 , to reduce a load on a battery of hearing assist device 500 .
- an emitter/receiver of forward IR/UV communication transceiver 520 may be positioned on housing 502 to be facing forward in a direction a wearer of hearing assist device 500 faces. In this manner, the forward IR/UV communication transceiver 520 may communicate with a device held by the wearer, such as a smart phone, a tablet computer, etc., to provide text to be displayed to the wearer, etc.
- BTLE antenna 522 is a communication mechanism coupled to a BluetoothTM transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc.
- BTLE antenna 522 may receive information/data from processing logic of hearing assist device 500 to be transmitted to the other device according to the BluetoothTM specification, and may receive information/data transmitted according to the BluetoothTM specification from the other device to be provided to the processing logic of hearing assist device 500 .
- Microphone(s) 524 is a sensor that may be present to receive environmental sounds, including voice of the user, voice of other persons, and other sounds in the environment (e.g., traffic noise, music, etc.). Microphone(s) 524 may include any number of microphones, and may be configured in any manner, including being omni-directional (non-directional), directional, etc. Microphone(s) 524 generates an audio signal based on the received environmental sound that may be processed and/or filtered by processing logic of hearing assist device 500 , may be stored in digital form in hearing assist device 500 , may be transmitted from hearing assist device 500 , and may be used in other ways.
- Telecoil 526 is a communication mechanism that may be present to enable communications with another device.
- Telecoil 526 is an audio induction loop that enables audio sources to be directly coupled to hearing assist device 500 in a manner known to persons skilled in the relevant art(s).
- Telecoil 526 may be used with a telephone, a radio system, and induction loop systems that transmit sound to hearing aids.
- Tethered sensor port 528 is a port that a remote sensor (separate from hearing assist device 500 ) may be coupled with to interface with hearing assist device 500 .
- port 528 may be an industry standard or proprietary connector type.
- a remote sensor may have a tether (one or more wires) with a connector at an end that may be plugged into port 528 . Any number of tethered sensor ports 528 may be present. Examples of sensor types that may interface with tethered sensor port 528 include brainwave sensors (e.g., electroencephalography (EEG) sensors that record electrical activity along the scalp according to EEG techniques) attached to the user's scalp, heart rate/arrhythmia sensors attached to a chest of the user, etc.
- WPT/NFC coil 530 is a communication mechanism coupled to a NFC transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc., as described above with respect to NFC transceiver 110 ( FIG. 1 ).
- Switch 532 is a switching mechanism that may be present on housing 502 to perform various functions, such as switching power on or off, switching between different power and/or operational modes, etc. A user may interact with switch 532 to switch power on or off, to switch between modes, etc. Switch 532 may be any type of switch, including a toggle switch, a push button switch, a rocker switch, a three-(or greater) position switch, a dial switch, etc.
- Skin communication conductor 534 is a communication mechanism coupled to a transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc., through skin of the user.
- skin communication conductor 534 may enable communications to flow between hearing assist device 500 and a smart phone held in the hand of the user, a second hearing assist device worn on an opposite ear of the user, a pacemaker or other device implanted in the user, or other communications device in communication with skin of the user.
- a transceiver of hearing assist device 500 may receive information/data from processing logic to be transmitted from skin communication conductor 534 through the user's skin to the other device, and the transceiver may receive information/data at skin communication conductor 534 that was transmitted from the other device through the user's skin to be provided to the processing logic of hearing assist device 500 .
- Glucose spectroscopy sensor 536 is a sensor that may be present to measure a glucose level of the user using spectroscopy techniques in a manner known to persons skilled in the relevant art(s). Such a measurement may be valuable in determining whether a user has diabetes. Such a measurement can also be valuable in helping a diabetic user determine whether insulin is needed, etc. (e.g., hypoglycemia or hyperglycemia).
- Glucose spectroscopy sensor 536 may be configured to monitor glucose in combination with subcutaneous sensor 544 . As shown in FIG. 5 , subcutaneous sensor 544 is shown separate from, and proximate to hearing assist device 500 . In an alternative embodiment, subcutaneous sensor 544 may be located in/on hearing assist device 500 .
- Subcutaneous sensor 544 is a sensor that may be present to measure any attribute of a user's health, characteristics or status.
- subcutaneous sensor 544 may be a glucose sensor implanted under the skin behind the ear so as to provide a reasonably close mating location with communication and power delivery coil 542 .
- glucose spectroscopy sensor 536 may measure the user glucose level with respect to subcutaneous sensor 544 , and may generate a sensor output signal (e.g., an electrical signal) that indicates a glucose level of the user.
- Heart rate sensor 538 is a sensor that may be present to measure a heart rate of the user. For instance, in an embodiment, upon receiving power, heart rate sensor 538 may measure pressure changes with respect to a blood vessel in the ear, or may measure heart rate in another manner, such as via changes in reflectivity, as would be known to persons skilled in the relevant art(s). Missed beats, elevated heart rate, and further heart conditions may be detected in this manner. Heart rate sensor 538 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured heart rate.
- subcutaneous sensor 544 might comprise at least a portion of an internal heart monitoring device which communicates heart status information and data via communication and power delivery coil 542 . Subcutaneous sensor 544 could also be associated with or be part of a pacemaker or defibrillating implant, insulin pump, etc.
- Volume controller 540 is a user interface mechanism that may be present on housing 502 to enable a user to modify a volume at which sound is broadcast from speaker 512 .
- a user may interact with volume controller 540 to increase or decrease the volume.
- Volume controller 540 may be any suitable controller type (e.g., a potentiometer), including a rotary volume dial, a thumb wheel, etc.
- Although capable of supporting both functions, communication and power delivery coil 542 may be dedicated to one or the other.
- such coil may only support power delivery (if needed to charge or otherwise deliver power to subcutaneous sensor 544 ), and can be replaced with any other type of communication system that supports communication with subcutaneous sensor 544 .
- the coils/antennas of hearing assist device 500 may be separately included in hearing assist device 500 , or in embodiments, two or more of the coils/antennas may be combined as a single coil/antenna.
- the processing logic of hearing assist device 500 may be operable to set up/configure and adaptively reconfigure each of the sensors of hearing assist device 500 based on an analysis of the data obtained by such sensor as well as on an analysis of data obtained by other sensors.
- a first sensor of hearing assist device 500 may be configured to operate at one sampling rate (or sensing rate) which is analyzed periodically or continuously.
- a second sensor of hearing assist device 500 can be in a sleep or power down mode to conserve battery power. When a threshold is exceeded or other triggering event occurs, such first sensor can be reconfigured by the processing logic of hearing assist device 500 to sample at a higher rate or continuously and the second sensor can be powered up and configured. Additionally, multiple types of sensor data can be used to construct or derive single conclusions.
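- As a rough sketch of this duty-cycling scheme (the class and attribute names are hypothetical, not the patent's firmware interface):

```python
class ManagedSensor:
    """Hypothetical handle for a sensor's power state and sampling rate."""
    def __init__(self, name, rate_hz, powered=True):
        self.name, self.rate_hz, self.powered = name, rate_hz, powered

def on_new_sample(primary, secondary, value, threshold,
                  slow_hz=0.2, fast_hz=10.0):
    """Reconfigure sensors when a trigger threshold is crossed."""
    if value > threshold and primary.rate_hz < fast_hz:
        primary.rate_hz = fast_hz        # sample faster/continuously
        secondary.powered = True         # wake the corroborating sensor
    elif value <= threshold and primary.rate_hz > slow_hz:
        primary.rate_hz = slow_hz        # fall back to conserve battery
        secondary.powered = False

hr = ManagedSensor("heart_rate", rate_hz=0.2)
ph = ManagedSensor("pH", rate_hz=1.0, powered=False)
on_new_sample(hr, ph, value=135.0, threshold=120.0)   # elevated heart rate
```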
- heart rate can be gathered multiple ways (via multiple sensors) and combined to provide a more robust and trustworthy conclusion.
- For example, a combination of data obtained from different sensors (e.g., pH plus temperature plus horizontal posture plus impact detected plus weak heart rate) may indicate that a stroke has occurred. Similarly, if glucose is too high, hyperglycemia may be indicated, while if glucose is too low, hypoglycemia may be indicated.
- This processing can be done in whole or in part within hearing assist device 500 with audio content being played to the wearer thereof to gather further voiced information from the wearer to assist in conclusions or to warn the wearer.
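- A minimal sketch of such rule-based fusion follows; the thresholds are illustrative assumptions for demonstration and not medical guidance:

```python
def fuse(readings):
    """Combine multiple sensor readings into candidate conclusions."""
    alerts = []
    if (readings.get("impact") and readings.get("posture") == "horizontal"
            and readings.get("heart_rate_bpm", 70) < 50
            and readings.get("skin_ph", 7.0) < 6.8):
        alerts.append("possible stroke/collapse - escalate")
    glucose = readings.get("glucose_mg_dl")
    if glucose is not None:
        if glucose > 180:
            alerts.append("possible hyperglycemia")
        elif glucose < 70:
            alerts.append("possible hypoglycemia")
    return alerts

print(fuse({"impact": True, "posture": "horizontal",
            "heart_rate_bpm": 44, "skin_ph": 6.5, "glucose_mg_dl": 65}))
```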
- FIG. 6 shows a hearing assist device 600 that is an example of hearing assist device 102 according to an exemplary embodiment.
- Hearing assist device 600 is configured to be at least partially inserted into the ear canal of a user (for example, an ear bud).
- a user may wear a single hearing assist device 600 on one ear, or may simultaneously wear first and second hearing assist devices 600 on the user's right and left ears, respectively.
- hearing assist device 600 includes a case or housing 602 that has a generally cylindrical shape, and includes a first portion 604 , a second portion 606 , and a third portion 608 .
- First portion 604 is shaped to be inserted at least partially into the ear canal of the user.
- Second portion 606 extends coaxially from first portion 604 .
- Third portion 608 is a handle that extends from second portion 606 . A user grasps third portion 608 to extract hearing assist device 600 from the ear of the user.
- hearing assist device 600 further includes pH sensor 510 , speaker 512 , IR (infrared) or sonic distance sensor 514 , inner ear temperature sensor 516 , and an antenna 610 .
- pH sensor 510 , speaker 512 , IR (infrared) or sonic distance sensor 514 , and inner ear temperature sensor 516 may function and be configured similarly as described above.
- Antenna 610 may include one or more coils or other types of antennas to function as any one or more of the coils/antennas described above with respect to FIG. 5 and/or elsewhere herein (e.g., an NFC antenna, a BluetoothTM antenna, etc.).
- antennas such as coils, mentioned herein may be implemented as any suitable type of antenna, including a coil, a microstrip antenna, or other antenna type.
- Although further sensors, communication mechanisms, switches, etc., of hearing assist device 500 of FIG. 5 are not shown included in hearing assist device 600 , one or more of these features of hearing assist device 500 may additionally and/or alternatively be included in hearing assist device 600 .
- sensors that are present in a hearing assist device may all operate simultaneously, or one or more sensors may be run periodically, and may be off at other times (e.g., based on an algorithm in program code, etc.). By running fewer sensors at any one time, battery power may be conserved.
- Sensor management (duty cycling, continuous operations, threshold triggers, sampling rates, etc.) may be performed by the hearing assist device itself, by the assisting local device (e.g., smart phone, tablet computer, set top box, TV, etc.), and/or by remote computing systems, such as at medical staff offices or as might be available through a cloud or portal service.
- Hearing assist devices 102 , 500 , and 600 may be configured in various ways with circuitry to process sensor information, and to communicate with other devices.
- the next section describes some example circuit embodiments for hearing assist devices, as well as processes for communicating with other devices, and for further functionality.
- hearing assist devices may be configured in various ways to perform their functions.
- FIG. 7 shows a circuit block diagram of a hearing assist device 700 that is configured to communicate with external devices according to multiple communication schemes, according to an exemplary embodiment.
- Hearing assist devices 102 , 500 , and 600 may each be implemented similarly to hearing assist device 700 , according to embodiments.
- hearing assist device 700 includes a plurality of sensors 702 a - 702 c , processing logic 704 , a microphone 706 , an amplifier 708 , a filter 710 , an analog-to-digital (A/D) converter 712 , a speaker 714 , an NFC coil 716 , an NFC transceiver 718 , an antenna 720 , a BluetoothTM transceiver 722 , a charge circuit 724 , a battery 726 , a plurality of sensor interfaces 728 a - 728 c , and a digital-to-analog (D/A) converter 764 .
- Processing logic 704 includes a digital signal processor (DSP) 730 , a central processing unit (CPU) 732 , and a memory 734 .
- Sensors 702 a - 702 c , processing logic 704 , amplifier 708 , filter 710 , A/D converter 712 , NFC transceiver 718 , BluetoothTM transceiver 722 , charge circuit 724 , sensor interfaces 728 a - 728 c , D/A converter 764 , DSP 730 , and CPU 732 may each be implemented in the form of hardware (e.g., electrical circuits, digital logic, etc.) or a combination of hardware and software/firmware.
- the features of hearing assist device 700 shown in FIG. 7 are described as follows.
- microphone 706 is a sensor that receives environmental sounds, including voice of the user of hearing assist device 700 , voice of other persons, and other sounds in the environment (e.g., traffic noise, music, etc.).
- Microphone 706 may be configured in any manner, including being omni-directional (non-directional), directional, etc., and may include one or more microphones.
- Microphone 706 may be a miniature microphone conventionally used in hearing aids, as would be known to persons skilled in the relevant art(s), or may be another suitable type of microphone.
- Microphone(s) 524 ( FIG. 5 ) is an example of microphone 706 .
- Microphone 706 generates a received audio signal 740 based on the received environmental sound.
- Amplifier 708 receives and amplifies received audio signal 740 to generate an amplified audio signal 742 .
- Amplifier 708 may be any type of amplifier, including a low-noise amplifier for amplifying low level signals.
- Filter 710 receives and processes amplified audio signal 742 to generate a filtered audio signal 744 .
- Filter 710 may be any type of filter, including being a filter configured to filter out noise, other high frequencies, and/or other frequencies as desired.
- A/D converter 712 receives filtered audio signal 744 , which may be an analog signal, and converts filtered audio signal 744 to digital form, to generate a digital audio signal 746 .
- A/D converter 712 may be configured in any manner, including as a conventional A/D converter.
- Processing logic 704 receives digital audio signal 746 , and may process digital audio signal 746 in any manner to generate processed digital audio signal 762 .
- DSP 730 may receive digital audio signal 746 , and may perform digital signal processing on digital audio signal 746 to generate processed digital audio signal 762 .
- DSP 730 may be configured in any manner, including as a conventional DSP known to persons skilled in the relevant art(s), or in another manner.
- DSP 730 may perform any suitable type of digital signal processing to process/filter digital audio signal 746 , including processing digital audio signal 746 in the frequency domain to manipulate the frequency spectrum of digital audio signal 746 (e.g., according to Fourier transform/analysis techniques, etc.).
- DSP 730 may amplify particular frequencies, may attenuate particular frequencies, and may otherwise modify digital audio signal 746 in the discrete domain. DSP 730 may perform the signal processing for various reasons, including noise cancellation or hearing loss compensation. For instance, DSP 730 may process digital audio signal 746 to compensate for a personal hearing frequency response of the user, such as compensating for poor hearing of high frequencies, middle range frequencies, or other personal frequency response characteristics of the user.
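- As a rough sketch of such frequency-domain compensation, assuming a simple FFT-based gain curve (the sample rate, crossover frequency, and boost amount are illustrative, not a fitted prescription):

```python
import numpy as np

FS = 16_000   # Hz; illustrative sample rate

def compensate(block, boost_above_hz=2000, boost_db=12.0):
    """Boost frequencies the user hears poorly: FFT -> per-bin gain -> iFFT."""
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), d=1.0 / FS)
    gains = np.where(freqs >= boost_above_hz, 10 ** (boost_db / 20.0), 1.0)
    return np.fft.irfft(spectrum * gains, n=len(block))

noise = np.random.default_rng(0).standard_normal(512)
out = compensate(noise)   # content above 2 kHz amplified by ~12 dB
```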
- DSP 730 may be pre-configured to process digital audio signal 746 .
- DSP 730 may receive instructions from CPU 732 regarding how to process digital audio signal 746 .
- CPU 732 may access one or more DSP configurations stored in memory 734 (e.g., in other data 768 ) that may be provided to DSP 730 to configure DSP 730 for digital signal processing of digital audio signal 746 .
- CPU 732 may select a DSP configuration based on a hearing assist mode selected by a user of hearing assist device 700 (e.g., by interacting with switch 532 , etc.).
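- For purposes of illustration only, the following is a minimal sketch (not the disclosed implementation) of the kind of frequency-domain gain shaping DSP 730 might perform on digital audio signal 746 under a selected DSP configuration; the function name, band layout, and gain values are hypothetical assumptions.

```python
import numpy as np

def apply_hearing_profile(frame, sample_rate, band_gains_db):
    """Apply per-band gains, e.g., to compensate poor high-frequency hearing."""
    spectrum = np.fft.rfft(frame)                        # to frequency domain
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    gains = np.ones_like(freqs)
    for (f_lo, f_hi), gain_db in band_gains_db.items():
        mask = (freqs >= f_lo) & (freqs < f_hi)
        gains[mask] = 10.0 ** (gain_db / 20.0)           # dB -> linear gain
    return np.fft.irfft(spectrum * gains, n=len(frame))  # back to time domain

# Hypothetical "DSP configuration": boost middle and high frequency bands.
profile = {(0, 1000): 0.0, (1000, 4000): 6.0, (4000, 8000): 12.0}
frame = np.random.randn(512)            # stands in for one frame of signal 746
compensated = apply_hearing_profile(frame, sample_rate=16000, band_gains_db=profile)
```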
- D/A converter 764 receives processed digital audio signal 762 , and converts processed digital audio signal 762 to analog form, generating processed audio signal 766 .
- D/A converter 764 may be configured in any manner, including as a conventional D/A converter.
- Speaker 714 receives processed audio signal 766 , and broadcasts sound generated based on processed audio signal 766 into the ear of the user. The user is enabled to hear the broadcast sound, which may be amplified, filtered, and/or otherwise frequency manipulated with respect to the sound received by microphone 706 .
- Speaker 714 may be a miniature speaker conventionally used in hearing aids, as would be known to persons skilled in the relevant art(s), or may be another suitable type of speaker.
- Speaker 512 ( FIG. 5 ) is an example of speaker 714 .
- Speaker 714 may include one or more speakers.
- FIG. 8 shows a flowchart 800 of a process for a hearing assist device that processes and transmits sensor data and receives a command from a second device, according to an exemplary embodiment.
- hearing assist device 700 (as well as any of hearing assist devices 102 , 500 , and 600 ) may perform flowchart 800 . Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the following description of flowchart 800 and hearing assist device 700 .
- Flowchart 800 begins with step 802 .
- In step 802 , a sensor output signal is received from a medical sensor of the hearing assist device that senses a characteristic of the user.
- sensors 702 a - 702 c may each sense/measure information about a health characteristic of the user of hearing assist device 700 .
- Sensors 702 a - 702 c may each be one of the sensors shown in FIGS. 5 and 6 , and/or mentioned elsewhere herein. Although three sensors are shown in FIG. 7 for purposes of illustration, other numbers of sensors may be present in hearing assist device 700 , including one sensor, two sensors, or greater numbers of sensors.
- Sensors 702 a - 702 c each may generate a corresponding sensor output signal 758 a - 758 c (e.g., an electrical signal) that indicates the measured information about the corresponding health characteristic.
- sensor output signals 758 a - 758 c may be analog or digital signals having levels or values corresponding to the measured information.
- Sensor interfaces 728 a - 728 c are each optionally present, depending on whether the corresponding sensor outputs a sensor output signal that needs to be modified to be receivable by CPU 732 .
- each of sensor interfaces 728 a - 728 c may include an amplifier, filter, and/or A/D converter (e.g., similar to amplifier 708 , filter 710 , and A/D converter 712 ) that respectively amplify (e.g., increase or decrease), filter out particular frequencies, and/or convert to digital form the corresponding sensor output signal.
- Sensor interfaces 728 a - 728 c (when present) respectively output modified sensor output signals 760 a - 760 c.
- In step 804 , the sensor output signal is processed to generate processed sensor data.
- processing logic 704 receives modified sensor output signals 760 a - 760 c .
- Processing logic 704 may process modified sensor output signals 760 a - 760 c in any manner to generate processed sensor data.
- CPU 732 may receive modified sensor output signals 760 a - 760 c .
- CPU 732 may process the sensor information in one or more of modified sensor output signals 760 a - 760 c to generate processed sensor data.
- CPU 732 may manipulate the sensor information (e.g., according to an algorithm of code 738 ) to convert the sensor information into a presentable form (e.g., scaling the sensor information, adding or subtracting a constant to/from the sensor information, etc.). Furthermore, CPU 732 may transmit the sensor information of modified sensor output signals 760 a - 760 c to DSP 730 to be digital signal processed by DSP 730 to generate processed sensor data, and may receive the processed sensor data from DSP 730 . The processed and/or raw (unprocessed) sensor data may optionally be stored in memory 734 (e.g., as sensor data 736 ).
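- As a minimal illustrative sketch only, the scaling/offset manipulation described above might resemble the following; the calibration constants and the sensor itself are hypothetical, not taken from the disclosure.

```python
def to_presentable(raw_value, scale, offset):
    """Linear conversion of a raw sensor reading: presentable = raw * scale + offset."""
    return raw_value * scale + offset

# Hypothetical example: a 12-bit temperature sensor reading converted to Celsius.
raw_counts = 2481
temperature_c = to_presentable(raw_counts, scale=0.0125, offset=5.0)
print(f"temperature: {temperature_c:.1f} C")   # -> temperature: 36.0 C
```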
- In step 806 , the processed sensor data is wirelessly transmitted from the hearing assist device to a second device.
- CPU 732 may provide the sensor data (processed or raw) (e.g., from CPU registers, from DSP 730 , from memory 734 , etc.) to a transceiver to be transmitted from hearing assist device 700 .
- hearing assist device 700 includes an NFC transceiver 718 and a BT transceiver 722 , which may each be used to transmit sensor data from hearing assist device 700 .
- hearing assist device 700 may include one or more additional and/or alternative transceivers that may transmit sensor data from hearing assist device 700 , including a Wi-Fi transceiver, a forward IR/UV communication transceiver (e.g., transceiver 520 of FIG. 5 ), a telecoil transceiver (which may transmit via telecoil 526 ), a skin communication transceiver (which may transmit via skin communication conductor 534 ), etc.
- NFC transceiver 718 may receive an information signal from CPU 732 that includes sensor data for transmitting.
- NFC transceiver 718 may modulate the sensor data onto NFC antenna signal 748 to be transmitted from hearing assist device 700 by NFC coil 716 when NFC coil 716 is energized by an RF field generated by a second device.
- BT transceiver 722 may receive an information signal 754 from CPU 732 that includes sensor data for transmitting.
- BT transceiver 722 may modulate the sensor data onto BT antenna signal 752 to be transmitted from hearing assist device 700 by antenna 720 (e.g., BTLE antenna 522 of FIG. 5 ), according to a Bluetooth™ communication protocol or standard.
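- The disclosure does not specify a wire format for the transmitted sensor data; as a hedged sketch only, one plausible framing that CPU 732 could hand to a transceiver such as BT transceiver 722 is shown below, with a hypothetical packet layout.

```python
import struct
import time

def frame_sensor_packet(sensor_id, value):
    """Pack one reading: sensor id (1 byte), UNIX timestamp (4 bytes), value (4-byte float)."""
    return struct.pack("<BIf", sensor_id, int(time.time()), value)

packet = frame_sensor_packet(sensor_id=1, value=72.0)   # e.g., a heart-rate reading
# CPU 732 would then provide `packet` to the transceiver for modulation and transmission.
```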
- a hearing assist device may communicate with one or more other devices to provide sensor data and/or other information, and to receive information.
- FIG. 9 shows a communication system 900 that includes a hearing assist device communicating with other communication devices, according to an exemplary embodiment.
- communication system 900 includes hearing assist device 700 , a mobile computing device 902 , a stationary computing device 904 , and a server 906 .
- System 900 is described as follows.
- Mobile computing device 902 (for example, a local supporting device) is a device capable of communicating with hearing assist device 700 according to one or more communication techniques. For instance, as shown in FIG. 9 , mobile computing device 902 includes a telecoil 910 , one or more microphones 912 , an IR/UV communication transceiver 914 , a WPT/NFC coil 916 , and a Bluetooth™ antenna 918 . In embodiments, mobile computing device 902 may include one or more of these features and/or alternative or additional features (e.g., communication mechanisms, etc.).
- Mobile computing device 902 may be any type of mobile electronic device, including a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer (e.g., an Apple iPad™), a netbook, a mobile phone (e.g., a cell phone, a smart phone, etc.), a special purpose medical device, etc.
- the features of mobile computing device 902 shown in FIG. 9 are described as follows.
- Telecoil 910 is a communication mechanism that may be present to enable mobile computing device 902 to communicate with hearing assist device 700 via a telecoil (e.g., telecoil 526 of FIG. 5 ).
- telecoil 910 and an associated transceiver may enable mobile computing device 902 to couple audio sources and/or other communications to hearing assist device 700 in a manner known to persons skilled in the relevant art(s).
- Microphone(s) 912 may be present to receive voice of a user of mobile computing device 902 .
- the user may provide instructions for mobile computing device 902 and/or for hearing assist device 700 by speaking into microphone(s) 912 .
- the received voice may be transmitted to hearing assist device 700 (in digital or analog form) according to any communication mechanism, or may be converted into data and/or commands to be provided to hearing assist device 700 to cause functions/actions in hearing assist device 700 .
- Microphone(s) 912 may include any number of microphones, and may be configured in any manner, including being omni-directional (non-directional), directional, etc.
- IR/UV communication transceiver 914 is a communication mechanism that may be present to enable communications with hearing assist device 700 via an IR/UV communication transceiver of hearing assist device 700 (e.g., forward IR/UV communication transceiver 520 of FIG. 5 ). IR/UV communication transceiver 914 may receive information/data from and/or transmit information/data to hearing assist device 700 (e.g., in the form of modulated light, as described above).
- WPT/NFC coil 916 is an NFC antenna coupled to an NFC transceiver in mobile computing device 902 that may be present to enable NFC communications with an NFC communication mechanism of hearing assist device 700 (e.g., NFC transceiver 110 of FIG. 1 , NFC coil 530 of FIG. 5 ). WPT/NFC coil 916 may be used to receive information/data from and/or transmit information/data to hearing assist device 700 .
- Bluetooth™ antenna 918 is a communication mechanism coupled to a Bluetooth™ transceiver in mobile computing device 902 that may be present to enable communications with hearing assist device 700 (e.g., BT transceiver 722 and antenna 720 of FIG. 7 ). Bluetooth™ antenna 918 may be used to receive information/data from and/or transmit information/data to hearing assist device 700 .
- mobile computing device 902 and hearing assist device 700 may exchange communication signals 920 according to any communication mechanism/protocol/standard mentioned herein or otherwise known. According to step 806 , hearing assist device 700 may wirelessly transmit sensor data to mobile computing device 902 .
- Stationary computing device 904 (for example, a local supporting device) is also a device capable of communicating with hearing assist device 700 according to one or more communication techniques.
- stationary computing device 904 may be capable of communicating with hearing assist device 700 according to any of the communication mechanisms shown for mobile computing device 902 in FIG. 9 , and/or according to other communication mechanisms/protocols/standards described elsewhere herein or otherwise known.
- Stationary computing device 904 may be any type of stationary electronic device, including a desktop computer (e.g., a personal computer, etc.), a docking station, a set top box, a gateway device, an access point, special purpose medical equipment, etc.
- stationary computing device 904 and hearing assist device 700 may exchange communication signals 922 according to any communication mechanism/protocol/standard mentioned herein or otherwise known. According to step 806 , hearing assist device 700 may wirelessly transmit sensor data to stationary computing device 904 .
- mobile computing device 902 and/or stationary computing device 904 may communicate with server 906 (for example, a remote supporting device, a third device) over network 908 .
- Network 908 may be any type of communication network, including a local area network (LAN), a wide area network (WAN), a personal area network (PAN), a phone network (e.g., a cellular network, a land based network), or a combination of communication networks, such as the Internet.
- Network 908 may include wired and/or wireless communication pathway(s) implemented using any of a wide variety of communication media and associated protocols.
- such communication pathway(s) may comprise wireless communication pathways implemented via radio frequency (RF) signaling, infrared (IR) signaling, or the like.
- Such signaling may be carried out using long-range wireless protocols such as WIMAX® (IEEE 802.16) or GSM (Global System for Mobile Communications), medium-range wireless protocols such as WI-FI® (IEEE 802.11), and/or short-range wireless protocols such as BLUETOOTH® or any of a variety of IR-based protocols.
- Such communication pathway(s) may also comprise wired communication pathways established over twisted pair, Ethernet cable, coaxial cable, optical fiber, or the like, using suitable communication protocols therefor.
- Communications over network 908 may be protected by one or more security protocols (e.g., private key exchange, etc.).
- Server 906 may be any computer system, including a stationary computing device, a server computer, a mobile computing device, etc.
- Server 906 may include a web service, an API (application programming interface), or other service or interface for communications.
- Sensor data and/or other information may be transmitted (for example, relayed) to server 906 over network 908 to be processed.
- server 906 may transmit processed data, instructions, and/or other information through network 908 to mobile computing device 902 (and/or stationary computing device 904 ) to be transmitted to hearing assist device 700 to be stored, to cause a function/action at hearing assist device 700 , and/or for other reason.
- hearing assist device 700 may receive a command wirelessly transmitted in a communication signal from a second device at NFC coil 716 , antenna 720 , or other antenna or communication mechanism at hearing assist device 700 .
- the command may be transmitted from NFC coil 716 on NFC antenna signal 748 to NFC transceiver 718 .
- NFC transceiver 718 may demodulate command data from the received communication signal, and provide the command to CPU 732 .
- the command may be transmitted from antenna 720 on BT antenna signal 752 to BT transceiver 722 .
- BT transceiver 722 may demodulate command data from the received communication signal, and provide the command to CPU 732 .
- the CPU 732 may execute the received command.
- the received command may cause hearing assist device 700 to perform one or more functions/actions.
- the command may cause hearing assist device 700 to turn on or off, to change modes, to activate or deactivate one or more sensors, to wirelessly transmit further information, to execute particular program code (e.g., stored as code 738 in memory 734 ), to play a sound (e.g., an alert, a tone, a beeping noise, pre-recorded or synthesized voice, etc.) from speaker 714 to the user to inform the user of information and/or cause the user to perform a function/action, and/or cause one or more additional and/or alternative functions/actions to be performed by hearing assist device 700 . Further examples of such commands and functions/actions are described elsewhere herein.
- a hearing assist device may be configured to convert received RF energy into charge for storage in a battery of the hearing assist device.
- hearing assist device 700 includes charge circuit 724 for charging battery 726 , which is a rechargeable battery (e.g., rechargeable battery 114 ).
- charge circuit 724 may operate according to FIG. 10 .
- FIG. 10 shows a flowchart 1000 of a process for wirelessly charging a battery of a hearing assist device, according to an exemplary embodiment. Flowchart 1000 is described as follows.
- In step 1002 , a radio frequency signal is received.
- NFC coil 716 , antenna 720 , and/or other antenna or coil of hearing assist device 700 may receive a radio frequency (RF) signal.
- the RF signal may be a communication signal that includes data (e.g., modulated on the RF signal), or may be an un-modulated RF signal.
- Charge circuit 724 may be coupled to one or more of NFC coil 716 , antenna 720 , or other antenna to receive the RF signal.
- In step 1004 , a charge current is generated that charges a rechargeable battery of the hearing assist device based on the received radio frequency signal.
- charge circuit 724 is configured to generate a charge current 756 that is used to charge battery 726 .
- Charge circuit 724 may be configured in various ways to convert a received RF signal to a charge current.
- charge circuit 724 may include an induction coil to take power from an electromagnetic field and convert it to electrical current.
- charge circuit 724 may include a diode rectifier circuit that rectifies the received RF signal to a DC (direct current) signal, and may include one or more charge pump circuits coupled to the diode rectifier circuit used to create a higher voltage value from the DC signal.
- charge circuit 724 may be configured in other ways to generate charge current 756 from a received RF signal.
- hearing assist device 700 may maintain power for operation, with battery 726 being charged periodically by RF fields generated by other devices, rather than needing to physically replace batteries.
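- As a rough, hedged arithmetic sketch only (the capacity, current, and efficiency figures below are hypothetical and not from the disclosure), the charge time achievable by such periodic RF charging can be estimated as follows.

```python
def charge_time_hours(battery_mah, charge_current_ma, efficiency=0.7):
    """Estimate hours to fully charge, derating the charge current by efficiency."""
    return battery_mah / (charge_current_ma * efficiency)

# Hypothetical example: a 30 mAh hearing-aid battery charged at 5 mA harvested current.
hours = charge_time_hours(battery_mah=30.0, charge_current_ma=5.0)  # ~8.6 hours
```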
- hearing assist device 700 may be configured to generate sound based on received sensor data.
- hearing assist device 700 may operate according to FIG. 11 .
- FIG. 11 shows a flowchart 1100 of a process for generating and broadcasting sound based on sensor data, according to an exemplary embodiment. For purposes of illustration, flowchart 1100 is described as follows with reference to FIG. 7 .
- Flowchart 1100 begins with step 1102 .
- In step 1102 , an audio signal is generated based at least on the processed sensor data.
- a sensor output signal may be processed to generate processed sensor data.
- the processed sensor data may be stored in memory 734 as sensor data 736 , may be held in registers in CPU 732 , or may be present in another location.
- Audio data for one or more sounds (e.g., tones, beeping sounds, voice segments, etc.) may be stored in memory 734 .
- CPU 732 or DSP 730 may select the audio data corresponding to particular sensor data from memory 734 .
- CPU 732 may transmit a request for the audio data from another device using a communication mechanism (e.g., NFC transceiver 718 , BT transceiver 722 , etc.).
- DSP 730 may receive the audio data from CPU 732 , from memory 734 , or from another device, and may generate processed digital audio signal 762 based thereon.
- In step 1104 , sound is generated based on the audio signal, the sound broadcast from a speaker of the hearing assist device into the ear of the user.
- D/A converter 764 may be present, and may receive processed digital audio signal 762 .
- D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766 .
- Speaker 714 receives processed audio signal 766 , and broadcasts sound generated based on processed audio signal 766 into the ear of the user.
- sounds may be provided to the user by hearing assist device 700 based at least on sensor data, and optionally further based on additional information.
- the sounds may provide information to the user, and may remind or instruct the user to perform a function/action.
- the sounds may include one or more of a tone, a beeping sound, or a voice that includes at least one of a verbal instruction to the user, a verbal warning to the user, or a verbal question to the user.
- a tone or a beeping sound may be provided to the user as an alert based on particular values of sensor data (e.g., indicating an out-of-range glucose/blood sugar value), and/or a voice instruction may be provided to the user as the alert based on the particular values of sensor data (e.g., a voice segment stating “Blood sugar is low—Insulin is required” or “hey, your heart rate is 80 beats per minute, your heart is fine, your pacemaker has got 6 hours of battery left.”).
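- As an illustrative sketch only, the threshold logic that selects among such alert sounds might resemble the following; the thresholds and audio identifiers are hypothetical assumptions, not values from the disclosure.

```python
def select_alert(glucose_mg_dl):
    """Map a processed glucose reading to a hypothetical alert audio identifier."""
    if glucose_mg_dl < 70:
        return "voice_low_blood_sugar"     # e.g., "Blood sugar is low..."
    if glucose_mg_dl > 180:
        return "tone_high_glucose_alert"   # a tone/beep alert for high readings
    return None                            # reading in range; no alert

audio_id = select_alert(62)                # -> "voice_low_blood_sugar"
```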
- hearing assist device 700 may be configured to generate filtered environmental sound.
- hearing assist device 700 may operate according to FIG. 12 .
- FIG. 12 shows a flowchart 1200 of a process for generating and broadcasting filtered sound from a hearing assist device, according to an exemplary embodiment. For purposes of illustration, flowchart 1200 is described as follows with reference to FIG. 7 .
- Flowchart 1200 begins with step 1202 .
- In step 1202 , an audio signal is generated based on environmental sound received by at least one microphone of the hearing assist device.
- microphone 706 may generate a received audio signal 740 based on received environmental sound.
- Received audio signal 740 may optionally be amplified, filtered, and converted to digital form to generate digital audio signal 746 , as shown in FIG. 7 .
- In step 1204 , the audio signal is filtered to favor one or more frequencies, thereby generating a modified audio signal. For example, DSP 730 may receive digital audio signal 746 , and may perform digital signal processing on digital audio signal 746 to generate processed digital audio signal 762 .
- DSP 730 may favor one or more frequencies by amplifying particular frequencies, attenuating particular frequencies, and/or by otherwise filtering digital audio signal 746 in the discrete domain.
- DSP 730 may perform the signal processing for various reasons, including noise cancellation or hearing loss compensation. For instance, DSP 730 may process digital audio signal 746 to compensate for a personal hearing frequency response of the user, such as compensating for poor hearing of high frequencies, middle range frequencies, or other personal frequency response characteristics of the user.
- In step 1206 , sound is generated based on the modified audio signal, the sound broadcast from a speaker of the hearing assist device into the ear of the user.
- D/A converter 764 may be present, and may receive processed digital audio signal 762 .
- D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766 .
- Speaker 714 receives processed audio signal 766 , and broadcasts sound generated based on processed audio signal 766 into the ear of the user.
- environmental noise, voice, and other sounds may be tailored to a particular user's personal hearing frequency response characteristics.
- particular noises in the environment (e.g., road noise, engine noise, etc.) may be attenuated or filtered from the received environmental sounds so that the user may better hear important or desired sounds.
- Furthermore, sounds that are desired to be heard (e.g., music, a conversation, a verbal warning, verbal instructions, sirens, sounds of a nearby car accident, etc.) may be passed through and/or emphasized so that the user may better perceive them.
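- As one hedged example of such environmental-noise attenuation (spectral subtraction is named here as a stand-in; the disclosure does not specify the filtering algorithm), DSP 730 might suppress steady road or engine noise as sketched below.

```python
import numpy as np

def attenuate_noise(frame, noise_magnitude, floor=0.05):
    """Subtract an estimated noise magnitude per frequency bin, preserving phase."""
    spectrum = np.fft.rfft(frame)
    magnitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    cleaned = np.maximum(magnitude - noise_magnitude, floor * magnitude)
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))

# `noise_magnitude` would typically be the average magnitude spectrum of frames
# captured while only background noise is present; a scalar works for flat noise.
frame = np.random.randn(512)
cleaned_frame = attenuate_noise(frame, noise_magnitude=0.5)
```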
- hearing assist device 700 may be configured to transmit recorded voice of a user to another device.
- hearing assist device 700 may operate according to FIG. 13 .
- FIG. 13 shows a flowchart 1300 of a process for generating an information signal in a hearing assist device based on a voice of a user, and for transmitting the information signal to a second device, according to an exemplary embodiment.
- flowchart 1300 is described as follows with reference to FIG. 7 .
- Flowchart 1300 begins with step 1302 .
- In step 1302 , an audio signal is generated based on a voice of the user received at a microphone of the hearing assist device.
- microphone 706 may generate a received audio signal 740 based on received voice of the user.
- Received audio signal 740 may optionally be amplified, filtered, and converted to digital form to generate digital audio signal 746 , as shown in FIG. 7 .
- the voice of the user may be any statement made by the user, including a question, a statement of fact, a command, or any other verbal sequence. For instance, the user may ask “what is my heart rate”. All such statements made by the user can be those intended for capture by one or more hearing assist devices, supporting local and remote systems. Such statements may also include unintentional sounds such as semi-lucid ramblings, moaning, choking, coughing, and/or other sounds. Any one or more of the hearing assist devices and the supporting local device can receive (via microphones) such audio and forward the audio from the hearing assist device(s) as needed for further processing.
- This processing may include voice and/or sound recognition, comparisons with command words or sequences, (video, audio) prompting for (gesture, tactile or audible) confirmation, carrying out commands, storage for later analysis or playback, and/or forwarding to an appropriate recipient system for further processing, storage, and/or presentations to others.
- In step 1304 , an information signal is generated based on the audio signal.
- DSP 730 may receive digital audio signal 746 .
- DSP 730 and/or CPU 732 may generate an information signal from digital audio signal 746 to be transmitted to a second device from hearing assist device 700 .
- DSP 730 and/or CPU 732 may optionally perform voice/speech recognition on digital audio signal 746 to recognize spoken words included therein, and may include the spoken words in the generated information signal.
- code 738 stored in memory 734 may include a voice recognition program that may be executed by CPU 732 and/or DSP 730 .
- the voice recognition program may use conventional or proprietary voice recognition techniques.
- voice recognition techniques may be augmented by sensor data.
- position/motion sensor 518 may include a vibration sensor.
- the vibration sensor may detect vibrations of the user associated with speaking (e.g., jaw movement of the wearer during talking), and may generate corresponding vibration information/data.
- the vibration information output by the vibration sensor may be received by CPU 732 and/or DSP 730 , and may be used to aid in improving speech/voice recognition performed by the voice recognition program.
- the vibration information may be used by the voice recognition program to detect breaks between words, to identify the location of spoken syllables, to identify the syllables themselves, and/or to better perform other aspects of voice recognition.
- the vibration information may be transmitted from hearing assist device 700 , along with the information signal, to a second device to perform the voice recognition process at the second device (or other device).
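- As a minimal sketch only of how the vibration data could aid recognition (a simple energy gate is assumed here; the disclosure does not specify the method), audio frames might be gated by jaw-vibration energy before recognition is attempted.

```python
import numpy as np

def is_user_speaking(vibration_frame, threshold=0.02):
    """Treat frames with sufficient vibration energy as wearer speech (threshold is hypothetical)."""
    return float(np.mean(np.square(vibration_frame))) > threshold

def frames_for_recognition(audio_frames, vibration_frames):
    """Keep only audio frames coinciding with wearer jaw vibration."""
    return [audio for audio, vib in zip(audio_frames, vibration_frames)
            if is_user_speaking(vib)]
```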
- In step 1306 , the generated information signal is transmitted to the second device.
- CPU 732 may provide the information signal (e.g., from CPU registers, from DSP 730 , from memory 734 , etc.) to a transceiver to be transmitted from hearing assist device 700 (e.g., NFC transceiver 718 , BT transceiver 722 , or other transceiver).
- Another device such as mobile computing device 902 , stationary computing device 904 , or server 906 , may receive the transmitted voice information, and may analyze the voice (spoken words, moans, slurred words, etc.) therein to determine one or more functions/actions to be performed. As a result, one or more functions/actions may be determined to be performed by hearing assist device 700 or another device.
- hearing assist device 700 may be configured to enable voice to be received and/or generated to be played to the user.
- hearing assist device 700 may operate according to FIG. 14 .
- FIG. 14 shows a flowchart 1400 of a process for generating voice to be broadcast to a user, according to an exemplary embodiment. For purposes of illustration, flowchart 1400 is described as follows with reference to FIG. 7 .
- Flowchart 1400 begins with step 1402 .
- In step 1402 , a sensor output signal is received from a medical sensor of the hearing assist device that senses a characteristic of the user.
- sensors 702 a - 702 c each sense/measure information about a health characteristic of the user of hearing assist device 700 .
- sensor 702 a may sense a characteristic of the user (e.g., a heart rate, a blood pressure, a glucose level, a temperature, etc.).
- Sensor 702 a generates sensor output signal 758 a , which indicates the measured information about the corresponding health characteristic.
- Sensor interface 728 a , when present, may convert sensor output signal 758 a to modified sensor output signal 760 a , to be received by processing logic 704 .
- In step 1404 , processed sensor data is generated based on the sensor output signal.
- processing logic 704 receives modified sensor output signal 760 a , and may process modified sensor output signal 760 a in any manner.
- CPU 732 may receive modified sensor output signal 760 a , and may process the sensor information contained therein to generate processed sensor data.
- CPU 732 may manipulate the sensor information (e.g., according to an algorithm of code 738 ) to convert the sensor information into a presentable form (e.g., scaling the sensor information, adding or subtracting a constant to/from the sensor information, etc.), or may otherwise process the sensor information.
- CPU 732 may transmit the sensor information of modified sensor output signal 760 a to DSP 730 to be digital signal processed.
- In step 1406 , a voice audio signal generated based at least on the processed sensor data is received.
- the processed sensor data generated in step 1404 may be transmitted from hearing assist device 700 to another device (e.g., as shown in FIG. 9 ), and a voice audio signal may be generated at the other device based on the processed sensor data.
- the voice audio signal may be generated by processing logic 704 based on the processed sensor data.
- the voice audio signal contains voice information (e.g., spoken words) that relate to the processed sensor data.
- the voice information may include a verbal alert, verbal instructions, and/or other verbal information to be provided to the user based on the processed sensor data (e.g., based on a value of measured sensor data, etc.).
- the voice information may be generated by being synthesized, being retrieved from memory 734 (e.g., a library of recorded spoken segments in other data 768 ), or being generated from a combination thereof. It is noted that the voice audio signal may be generated based on processed sensor data from one or more sensors. DSP 730 may output the voice audio signal as processed digital audio signal 762 .
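- As an illustrative sketch only of the retrieval-based option (the segment names and digit-splicing scheme are hypothetical assumptions), a verbal answer such as "your heart rate is 98 beats per minute" might be assembled from recorded segments as follows.

```python
def build_heart_rate_message(bpm, library):
    """Concatenate recorded segments: "your heart rate is" + digits + "beats per minute"."""
    segments = [library["your_heart_rate_is"]]
    segments += [library[f"digit_{d}"] for d in str(bpm)]
    segments.append(library["beats_per_minute"])
    return b"".join(segments)   # concatenated audio bytes for playback

# Hypothetical segment library; real entries would hold recorded audio data.
library = {name: b"" for name in
           ["your_heart_rate_is", "beats_per_minute"]
           + [f"digit_{d}" for d in "0123456789"]}
message_audio = build_heart_rate_message(98, library)
```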
- In step 1408 , voice is broadcast from the speaker into the ear of the user based on the received voice audio signal.
- D/A converter 764 may be present, and may receive processed digital audio signal 762 .
- D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766 .
- Speaker 714 receives processed audio signal 766 , and broadcasts voice generated based on processed audio signal 766 into the ear of the user.
- voice may be provided to the user by hearing assist device 700 based at least on sensor data, and optionally further based on additional information.
- the voice may provide information to the user, and may remind or instruct the user to perform a function/action.
- the voice may include at least one of a verbal instruction to the user (“take an iron supplement”), a verbal warning to the user (“your heart rate is high”), a verbal question to the user (“have you fallen down, and do you need assistance?”), or a verbal answer to the user (“your heart rate is 98 beats per minute”).
- the performance of one or more functions by a hearing assist device is assisted or improved in some manner by utilizing resources of an external device and/or service to which the hearing assist device may be communicatively connected.
- Such performance assistance or improvement may be achieved, for example and without limitation, by utilizing power resources, processing resources, storage resources, sensor resources, and/or user interface resources of an external device or service to which the hearing assist device may be communicatively connected.
- FIG. 15 is a block diagram of an example system 1500 that enables external operational support to be provided to a hearing assist device in accordance with an embodiment.
- system 1500 includes a first hearing assist device 1501 , a second hearing assist device 1503 , and a portable electronic device 1505 .
- First and second hearing assist devices 1501 and 1503 may each be implemented in a like manner to any of the hearing assist devices described above in Sections II-IV.
- first and second hearing assist devices 1501 and 1503 are not limited to those implementations.
- FIG. 15 shows two hearing assist devices that can be worn by a user, it is to be understood that the external operational support techniques described herein can also be applied to a single hearing assist device worn by a user.
- Portable electronic device 1505 is intended to represent an electronic device that may be carried by or is otherwise locally accessible to a wearer of first and second hearing assist devices 1501 and 1503 .
- portable electronic device 1505 may comprise a smart phone, a tablet computer, a netbook, a laptop computer, a remote control device, a personal media player, a handheld gaming device, or the like. It is noted that certain external operational support features described herein are premised on the ability of a wearer of a hearing assist device to hold portable electronic device 1505 and/or lift portable electronic device 1505 toward his/her ear. For these embodiments, it is to be understood that portable electronic device 1505 has a form factor that permits such actions to be taken.
- portable electronic device 1505 may have a larger form factor.
- portable electronic device 1505 may comprise a desktop computer or television.
- first hearing assist device 1501 and second hearing assist device 1503 are capable of communicating with each other via a communication link 1521 .
- Communication link 1521 may be established using, for example and without limitation, a wired communication link, a wireless communication link (wherein such wireless communication link may be established using NFC, BLUETOOTH® low energy (BTLE) technology, wireless power transfer (WPT) technology, telecoil, or the like), or skin-based signal transmission.
- first hearing assist device 1501 is capable of communicating with portable electronic device 1505 via a communication link 1523
- second hearing assist device 1503 is capable of communicating with portable electronic device 1505 via a communication link 1525 .
- Each of communication links 1523 and 1525 may be established using, for example and without limitation, a wireless communication link (wherein such wireless communication link may be established using NFC, BTLE technology, WPT technology, telecoil or the like), or skin-based signal transmission.
- portable electronic device 1505 is capable of communicating with various other entities via one or more wired and/or wireless communication pathways 1513 .
- portable electronic device 1505 may access one or more hearing assist device support services 1511 via communication pathway(s) 1513 .
- hearing assist device support service(s) 1511 may be executed or otherwise provided by a device such as but not limited to a set top box, a television, a wired or wireless access point, or a server that is accessed via communication pathway(s) 1513 .
- Such device may also comprise a gateway via which such hearing assist device support service(s) 1511 may be accessed.
- hearing assist device support service(s) 1511 may also comprise cloud-based services accessed via a network. Since portable electronic device 1505 can access such hearing assist device support service(s) 1511 and can also communicate with first and second hearing assist devices 1501 and 1503 , portable electronic device 1505 is capable of making hearing assist device support service(s) 1511 available to first and second hearing assist devices 1501 and 1503 .
- Portable electronic device 1505 can also access one or more support personnel system(s) 1515 via communication pathway(s) 1513 .
- Support personnel system(s) 1515 are intended to generally represent systems that are owned and/or operated by persons having an interest (personal, professional, fiduciary or otherwise) in the health, well-being, or some other state of a wearer of first and second hearing assist devices 1501 and 1503 .
- support personnel system(s) 1515 may include a system owned and/or operated by a doctor's office or medical practice with which a wearer of first and second hearing assist devices 1501 and 1503 is affiliated.
- support personnel system(s) 1515 may include systems or devices owned and/or operated by family members, friends, or caretakers of a wearer of first and second hearing assist devices 1501 and 1503 . Since portable electronic device 1505 can access such support personnel system(s) 1515 and can also communicate with first and second hearing assist devices 1501 and 1503 , portable electronic device 1505 is capable of carrying out communication between first and second hearing assist devices 1501 and 1503 and support personnel system(s) 1515 .
- Wired and/or wireless communication pathway(s) 1513 may be implemented using any of a wide variety of communication media and associated protocols.
- communication pathway(s) 1513 may comprise wireless communication pathways implemented via radio frequency (RF) signaling, infrared (IR) signaling, or the like.
- Such signaling may be carried out using long-range wireless protocols such as WIMAX® (IEEE 802.16) or GSM (Global System for Mobile Communications), medium-range wireless protocols such as WI-FI® (IEEE 802.11), and/or short-range wireless protocols such as BLUETOOTH® or any of a variety of IR-based protocols.
- Communication pathway(s) 1513 may also comprise wired communication pathways established over twisted pair, Ethernet cable, coaxial cable, optical fiber, or the like, using suitable communication protocols therefor.
- Communication links 1523 and 1525 respectively established between first and second hearing assist devices 1501 and 1503 and portable electronic device 1505 enable first and second hearing assist devices 1501 and 1503 to utilize resources of and/or services provided by portable electronic device 1505 to assist in performing certain operations and/or improve the performance of such operations. Furthermore, since portable electronic device 1505 can access hearing assist device support service(s) 1511 and support personnel system(s) 1515 , portable electronic device 1505 can also make such system(s) and service(s) available to first and second hearing assist devices 1501 and 1503 such that first and second hearing assist devices 1501 and 1503 can utilize those system(s) and service(s) to assist in the performance of certain operations and/or improve the performance of such operations.
- FIG. 16 depicts a system 1600 comprising a hearing assist device 1601 and a cloud/service/phone/portable device 1603 that may be communicatively connected thereto.
- Hearing assist device 1601 may comprise, for example and without limitation, either of hearing assist device 1501 or 1503 as described above in reference to FIG. 15 or any of the hearing assist devices described above in Sections II-IV. Although only a single hearing assist device 1601 is shown in FIG. 16 , it is to be understood that system 1600 may include two hearing assist devices.
- Device 1603 may comprise, for example and without limitation, portable electronic device 1505 or a device used to implement any of hearing assist device support service(s) 1511 or support personnel system(s) 1515 that are accessible to portable electronic device 1505 as described above in reference to FIG. 15 .
- device 1603 may be local with respect to the wearer of hearing assist device 1601 or remote with respect to the wearer of hearing assist device 1601 .
- Hearing assist device 1601 includes a number of processing modules that may be implemented as software or firmware running on one or more general purpose processors and/or digital signal processors (DSPs), as dedicated circuitry, or as a combination thereof. Such processors and/or dedicated circuitry are collectively referred to in FIG. 16 as general purpose (DSP) and dedicated processing circuitry 1613 . As shown in FIG. 16 , the processing modules include a speech generation module 1623 , a speech/noise recognition module 1625 , an enhanced audio processing module 1627 , a clock/scheduler module 1629 , a mode select and reconfiguration module 1631 , and a battery management module 1633 .
- hearing assist device 1601 further includes local storage 1635 .
- Local storage 1635 comprises one or more volatile and/or non-volatile memory devices or structures that are internal to hearing assist device 1601 .
- Such memory devices or structures may be used to store recorded audio information in an audio playback queue 1637 as well as to store information and settings 1639 associated with hearing assist device 1601 , a user thereof, a device paired thereto, and to services (cloud-based or otherwise) accessed by or on behalf of hearing assist device 1601 .
- Hearing assist device 1601 further includes sensor components and associated circuitry 1641 .
- sensor components and associated circuitry may include but are not limited to one or more microphones, bone conduction sensors, temperature sensors, blood pressure sensors, blood glucose sensors, pulse oximetry sensors, pH sensors, vibration sensors, accelerometers, gyros, magnetos, or the like. Further sensor types that may be included in hearing assist device 1601 and information regarding the structure, function and operation of such sensors are provided above in Sections II-IV.
- Hearing assist device 1601 still further includes user interface (UI) components and associated circuitry 1643 .
- UI components may include buttons, switches, dials or other mechanical components by which a user may control and configure the operation of hearing assist device 1601 .
- Such UI components may also comprise capacitive sensing components to allow for touch-based or tap-based interaction with hearing assist device 1601 .
- Such UI components may further include a voice-based UI.
- voice-based UI may utilize speech/noise recognition module 1625 to recognize commands uttered by a user of hearing assist device 1601 and/or speech generation module 1623 to provide output in the form of pre-defined or synthesized speech.
- in embodiments in which hearing assist device 1601 comprises an integrated part of a pair of glasses, a visor, or a helmet, user interface components and associated circuitry 1643 may also comprise a display integrated with or projected upon a portion of the glasses, visor or helmet for presenting information to a user.
- Hearing assist device 1601 also includes communication interfaces and associated circuitry 1645 for carrying out communication over one or more wired, wireless, or skin-based communication pathways. Communication interfaces and associated circuitry 1645 enable hearing assist device 1601 to communicate with device 1603 . Communication interfaces and associated circuitry 1645 may also enable hearing assist device 1601 to communicate with a second hearing assist device worn by the same user as well as with other devices.
- cloud/service/phone/portable device 1603 comprises power resources, processing resources, and storage resources that can be used by hearing assist device 1601 to assist in performing certain operations and/or to improve the performance of such operations when a communication pathway has been established between the two devices.
- device 1603 includes a number of assist processing modules that may be implemented as software or firmware running on one or more general purpose processors and/or DSPs, as dedicated circuitry, or as a combination thereof.
- Such processors and/or dedicated circuitry are collectively referred to in FIG. 16 as general/dedicated processing circuitry (with hearing assist device support) 1653 .
- the processing modules include a speech generation assist module 1655 , a speech/noise recognition assist module 1657 , an enhanced audio processing assist module 1659 , a clock/scheduler assist module 1661 , a mode select and reconfiguration assist module 1663 , and a battery management assist module 1665 .
- device 1603 further includes storage 1667 .
- Storage 1667 comprises one or more volatile and/or non-volatile memory devices/structures and/or storage systems that are internal to or otherwise accessible to device 1603 .
- Such memory devices/structures and/or storage systems may be used to store recorded audio information in an audio playback queue 1669 as well as to store information and settings 1671 associated with hearing assist device 1601 , a user thereof, a device paired thereto, and to services (cloud-based or otherwise) accessed by or on behalf of hearing assist device 1601 .
- Device 1603 also includes communication interfaces and associated circuitry 1677 for carrying out communication over one or more wired, wireless or skin-based communication pathways.
- Communication interfaces and associated circuitry 1677 enable device 1603 to communicate with hearing assist device 1601 . Such communication may be direct (point-to-point between device 1603 and hearing assist device 1601 ) or indirect (through one or more intervening devices or nodes).
- Communication interfaces and associated circuitry 1677 may also enable device 1603 to communicate with other devices or access various remote services, including cloud-based services.
- device 1603 may also comprise supplemental sensor components and associated circuitry 1673 and supplemental user interface components and associated circuitry 1675 that can be used by hearing assist device 1601 to assist in performing certain operations and/or to improve the performance of such operations.
- a prerequisite for providing external operational support to hearing assist device 1601 by device 1603 may be the establishment of a communication pathway between device 1603 and hearing assist device 1601 .
- the establishment of such a communication pathway is achieved by implementing a communication service on hearing assist device 1601 that monitors for the presence of device 1603 and selectively establishes communication therewith in accordance with a predefined protocol.
- a communication service may be implemented on device 1603 that monitors for the presence of hearing assist device 1601 and selectively establishes communication therewith in accordance with a predefined protocol. Still other methods of establishing a communication pathway between hearing assist device 1601 and device 1603 may be used.
- Hearing assist device 1601 includes battery management module 1633 that monitors a state of a battery internal to hearing assist device 1601 .
- Battery management module 1633 may also be configured to alert a wearer of hearing assist device 1601 when such battery is in a low-power state so that the wearer can recharge the battery.
- the wearer of hearing assist device 1601 can cause such recharging to occur by bringing a portable electronic device within a certain distance of hearing assist device 1601 such that power may be transferred via an NFC link, WPT link, or other suitable link for transferring power between such devices.
- hearing assist device 1601 may be said to be utilizing the power resources of device 1603 to assist in the performance of its operations.
- hearing assist device 1601 can also utilize other resources of device 1603 to assist in performing certain operations and/or to improve the performance of such operations. Whether and when hearing assist device 1601 so utilizes the resources of device 1603 may vary depending upon the designs of such devices and/or any user configuration of such devices.
- hearing assist device 1601 may be programmed to only utilize certain resources of device 1603 when the battery power available to hearing assist device 1601 has dropped below a certain level.
- hearing assist device 1601 may be programmed to only utilize certain resources of device 1603 when it is determined that an estimated amount of power that will be consumed in maintaining a particular communication pathway between hearing assist device 1601 and device 1603 will be less than an estimated amount of power that will be saved by offloading functionality to and/or utilizing the resources of device 1603 .
- an assistance feature of device 1603 may be provided when a very low power communication pathway can be established or exists between hearing assist device 1601 and device 1603 , but that same assistance feature of device 1603 may be disabled if the only communication pathway that can be established or exists between hearing assist device 1601 and device 1603 is one that consumes a relatively greater amount of power.
- Still other decision algorithms can be used to determine whether and when hearing assist device 1601 will utilize resources of device 1603 .
- Such algorithms may be applied by battery management module 1633 of hearing assist device 1601 and/or by battery management assist module 1665 of device 1603 prior to activating assistance features of device 1603 .
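- As a hedged sketch of one such decision algorithm (the rule and all numbers are hypothetical; the disclosure leaves the algorithm open), offloading might be gated on a comparison of estimated link cost against estimated local processing cost.

```python
def should_offload(local_cost_mw, link_cost_mw, battery_pct, low_battery_pct=20.0):
    """Offload when the link costs less than local processing, or battery is low."""
    if battery_pct < low_battery_pct:
        return True                       # conserve the hearing aid battery aggressively
    return link_cost_mw < local_cost_mw   # offload only if it saves net power

offload = should_offload(local_cost_mw=12.0, link_cost_mw=3.5, battery_pct=55.0)  # -> True
```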
- a user interface provided by hearing assist device 1601 and/or device 1603 may enable a user to select which features of hearing assist device 1601 should be able to utilize external operational support and/or under what conditions such external operational support should be provided.
- the settings established by the user may be stored as part of information and settings 1639 in local storage 1635 of hearing assist device 1601 and/or as part of information and settings 1671 in storage 1667 of device 1603 .
- hearing assist device 1601 can also utilize resources of a second hearing assist device to perform certain operations.
- hearing assist device 1601 may communicate with a second hearing assist device worn by the same user to coordinate distribution or shared execution of particular operations. Such communication may be carried out, for example, via a point-to-point link between the two hearing assist devices or via links between the two hearing assist devices and an intermediate device, such as a portable electronic device being carried by a user.
- the determination of whether a particular operation should be performed by hearing assist device 1601 versus the second hearing assist device may be made by battery management module 1633 , a battery management module of the second hearing assist device, or via coordination between both battery management modules.
- For example, in the event of a battery imbalance between hearing assist device 1601 and a second hearing assist device worn by the same user, the device having the greater remaining battery charge may be selected to perform a particular operation, such as taking a blood pressure reading or the like.
- Such battery imbalance may result from, for example, one hearing assist device being used at a higher volume than the other over an extended period of time.
- a more balanced discharging of the batteries of both devices can be achieved.
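- A minimal sketch of such battery-balanced task assignment follows; the interface is a hypothetical assumption, not the disclosed mechanism.

```python
def pick_device_for_task(left_battery_pct, right_battery_pct):
    """Assign a one-off task (e.g., a blood pressure reading) to the fuller device."""
    return "left" if left_battery_pct >= right_battery_pct else "right"

assert pick_device_for_task(64.0, 41.0) == "left"
```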
- certain sensors may be present on hearing assist device 1601 that are not present on the second hearing assist device and certain sensors may be present on the second hearing assist device that are not present on hearing assist device 1601 , such that a distribution of functionality between the two hearing assist devices is achieved by design.
- Hearing assist device 1601 comprises a speech generation module 1623 that enables hearing assist device 1601 to generate and output verbal audio information (spoken words or the like) to a wearer thereof via a speaker of hearing assist device 1601 .
- verbal audio information may be used to implement a voice UI, to provide speech-based alerts, messages and reminders as part of a clock/scheduler feature implemented by clock/scheduler module 1629 , or to provide emergency alerts or messages to a wearer of hearing assist device 1601 based on a detected medical condition of the wearer, or the like.
- the speech generated by speech generation module 1623 may be pre-recorded and/or dynamically synthesized, depending upon the implementation.
- speech generation assist module 1655 of device 1603 may operate to perform all or part of the speech generation function that would otherwise be performed by speech generation module 1623 of hearing assist device 1601 . Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved. Any speech generated by speech generation assist module 1655 may be communicated back to hearing assist device 1601 for playback via at least one speaker of hearing assist device 1601 . Any of a wide variety of well-known speech codecs may be used to carry out such transmission of speech information in an efficient manner. Additionally or alternatively, any speech generated by speech generation assist module 1655 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601 .
- speech generation assist module 1655 may provide a more elaborate set of features than those provided by speech generation module 1623 , as device 1603 may have access to greater power, processing and storage resources than hearing assist device 1601 to support such additional features.
- speech generation assist module 1655 may provide a more extensive vocabulary of pre-recorded words, terms and sentences or may provide a more powerful speech synthesis engine.
- Hearing assist device 1601 includes a speech/noise recognition module 1625 that is operable to apply speech and/or noise recognition algorithms to audio input received via one or more microphones of hearing assist device 1601 .
- Such algorithms can enable speech/noise recognition module 1625 to determine when a wearer of hearing assist device 1601 is speaking and further to recognize words that are spoken by such wearer, while rejecting non-speech utterances and noise.
- Such algorithms may be used, for example, to enable hearing assist device 1601 to provide a voice-based UI by which a wearer of hearing assist device 1601 can exercise voice-based control over the device.
- speech/noise recognition assist module 1657 of device 1603 may operate to perform all or part of the speech/noise recognition functions that would otherwise be performed by speech/noise recognition module 1625 of hearing assist device 1601 . Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved.
- speech/noise recognition assist module 1657 may provide a more elaborate set of features than those provided by speech/noise recognition module 1625 , as device 1603 may have access to greater power, processing and storage resources than hearing assist device 1601 to support such additional features.
- speech/noise recognition assist module 1657 may include a training program that a wearer of hearing assist device 1601 can use to train the speech recognition logic to better recognize and interpret his/her own voice.
- speech/noise recognition assist module 1657 may include a process by which a wearer of hearing assist device 1601 can add new words to the dictionary of words that are recognized by the speech recognition logic.
- Such additional features may be included in an application that can be installed by the wearer on device 1603 .
- Such additional features may also be supported by a user interface that forms part of supplemental user interface components and associated circuitry 1675 .
- Hearing assist device 1601 includes an enhanced audio processing module 1627 .
- Enhanced audio processing module 1627 may be configured to process an input audio signal received by hearing assist device 1601 to achieve a desired frequency response prior to playing back such input audio signal to a wearer of hearing assist device 1601 .
- enhanced audio processing module 1627 may selectively amplify certain frequency components of an input audio signal prior to playing back such input audio signal to the wearer.
- the frequency response to be achieved may be specified by or derived from a prescription for the wearer that is provided to hearing assist device 1601 by an external device or system.
- such external device or system may include any of portable electronic device 1505 , hearing assist device support service(s) 1511 , or support personnel system(s) 1515 .
- such prescription may be formatted in a standardized manner in order to facilitate use thereof by any of a variety of hearing assistance devices and audio reproduction systems.
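- For purposes of illustration only, the following sketch shows one plausible way to apply such a prescription, under the simplifying assumption that the prescription arrives as per-band gains in dB; the band layout and gain values are hypothetical, not a standardized format defined herein.

```python
import numpy as np

def apply_prescription(audio, sample_rate, band_edges_hz, band_gains_db):
    """Selectively amplify frequency bands of an audio block per a prescription."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    for (lo, hi), gain_db in zip(band_edges_hz, band_gains_db):
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10.0 ** (gain_db / 20.0)   # boost or cut this band
    return np.fft.irfft(spectrum, n=len(audio))
```

For example, apply_prescription(block, 16000, [(250, 1000), (1000, 4000)], [3.0, 12.0]) would gently lift low-mid frequencies while strongly boosting the range in which high-frequency hearing loss is common.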
- in an embodiment in which the wearer also wears a second hearing assist device, enhanced audio processing module 1627 may modify a first input audio signal received by hearing assist device 1601 prior to playback of the first input audio signal to one ear of the wearer, while an enhanced audio processing module of the second hearing assist device modifies a second input audio signal received by the second hearing assist device prior to playback of the second input audio signal to the other ear of the wearer.
- Such modification of the first and second input audio signals can be used to achieve enhanced spatial signaling for the wearer. That is to say, the enhanced audio signals provided to both ears of the wearer will enable the wearer to better determine the spatial origin of sounds.
- Such enhancement is desirable for persons who have a poor ability to detect the spatial origin of sound, and therefore a poor ability to respond to spatial cues.
- an appropriate user-specific “head transfer function” can be determined through testing of a user. The results of such testing may then be used to calibrate the spatial audio enhancement function applied at each ear.
- FIG. 17 is a block diagram of an enhanced audio processing module 1700 that may be utilized by hearing assist device 1601 to provide such enhanced spatial signaling.
- Enhanced audio processing module 1700 is configured to process an audio signal produced by a microphone of a left ear hearing assist device (denoted MIC L) and an audio signal produced by a microphone of a right ear hearing assist device (denoted MIC R) to produce an audio signal for playback to the left ear of a user (denoted LEFT).
- enhanced audio processing module 1700 includes an amplifier 1702 that amplifies the MIC L signal. Such signal may also be converted from analog to digital form by an analog-to-digital (A/D) converter (not shown in FIG. 17 ).
- the output of amplifier 1702 is passed to a logic block 1704 that applies a head transfer function (HTF) thereto.
- the output of logic block 1704 is passed to a multiplier 1706 that applies a scaling function thereto.
- the output of multiplier 1706 is passed to a mixer 1720 .
- Enhanced audio processing module 1700 also includes an amplifier 1712 that amplifies the MIC R signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 17 ).
- the output of amplifier 1712 is passed to a logic block 1714 that applies a HTF thereto.
- the output of logic block 1714 is passed to a multiplier 1716 that applies a scaling function thereto.
- the output of multiplier 1716 is passed to mixer 1720 .
- Mixer 1720 combines the output of multiplier 1706 and the output of multiplier 1716 .
- the audio signal output by mixer 1720 is passed to an amplifier 1722 that amplifies it to produce the LEFT audio signal. Such signal may also be converted from digital to analog form by a digital-to-analog (D/A) converter (not shown in FIG. 17 ) prior to playback.
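- As an illustrative sketch only, the processing chain of FIG. 17 might be expressed as follows, assuming the head transfer functions are supplied as FIR filter coefficient arrays (a simplifying assumption; the disclosure does not mandate any particular filter realization).

```python
import numpy as np

def left_ear_output(mic_l, mic_r, htf_l, htf_r, scale_l=1.0, scale_r=0.5, out_gain=1.0):
    """Combine the MIC L and MIC R signals into the LEFT playback signal."""
    a = np.convolve(mic_l, htf_l, mode="same") * scale_l  # amplifier 1702, HTF 1704, multiplier 1706
    b = np.convolve(mic_r, htf_r, mode="same") * scale_r  # amplifier 1712, HTF 1714, multiplier 1716
    return out_gain * (a + b)                             # mixer 1720 and amplifier 1722
```

A mirror-image chain in the right ear device would produce the RIGHT signal, with the scale factors at each ear calibrated from the user-specific head transfer function testing described above.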
- enhanced audio processing module 1700 must have access to both the MIC L signal obtained by the left ear hearing assist device (which is assumed to be hearing assist device 1601 in this example) and the MIC R signal obtained by the right ear hearing assist device.
- the left ear hearing assist device must be capable of communicating with the right ear hearing assist device in order to obtain the MIC R signal therefrom.
- likewise, the right ear hearing assist device must be capable of communicating with the left ear hearing assist device in order to obtain the MIC L signal therefrom.
- Such communication may be carried out, for example, via a point-to-point link between the two hearing assist devices or via links between the two hearing assist devices and an intermediate device, such as a portable electronic device being carried by a user.
- enhanced audio processing module 1627 may modify an input audio signal received by hearing assist device 1601 to achieve a desired frequency response and/or spatial signaling prior to playback of the input audio signal. Both the desired frequency response and spatial signaling may be specified by or derived from a prescription associated with a wearer of hearing assist device 1601.
- enhanced audio processing assist module 1659 of device 1603 may operate to perform all or part of the enhanced audio processing functions that would otherwise be performed by enhanced audio processing module 1627 of hearing assist device 1601 , provided that there is a sufficiently fast communication pathway between hearing assist device 1601 and device 1603 .
- a sufficiently fast communication pathway is required so as not to introduce an inordinate amount of lag between the receipt and playback of audio signals by hearing assist device 1601 .
- Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved.
- audio content collected by one or more microphones of hearing assist device 1601 may be transmitted to device 1603 .
- Enhanced audio processing assist module 1659 of device 1603 may apply enhanced audio processing to such audio content, thereby producing enhanced audio content.
- the application of enhanced audio processing may comprise, but is not limited to, modifying the audio content to achieve a desired frequency response and/or spatial signaling as previously described.
- Device 1603 may then transmit the enhanced audio content back to hearing assist device 1601 , where it may be played back to a wearer thereof.
- the foregoing transmission of audio content between the devices may utilize well-known audio and speech compression techniques to achieve improved transmission efficiency.
- any enhanced audio content generated by enhanced audio processing assist module 1659 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601 .
- a clock/scheduler module 1629 of hearing assist device 1601 is configured to provide a wearer thereof with alerts or messages concerning the date and/or time, upcoming appointments or events, or other types of information typically provided by, recorded in, or otherwise associated with a personal calendar and scheduling service or tool.
- Such alerts and messages may be conveyed on demand, such as in response to the wearer uttering the words “time” or “date” or performing some other action that is recognizable to a user interface associated with clock/scheduler module 1629 .
- Such alerts and messages may also be conveyed automatically, such as in response to clock/scheduler module 1629 determining that an appointment or event is currently occurring or is scheduled to occur within a predetermined time frame.
- the alerts or messages may comprise certain sounds or words that are played back via one or more speakers of hearing assist device 1601 . Where the alerts or messages comprise speech, such speech may be generated by speech generation module 1623 and/or speech generation assist module 1655 .
- clock/scheduler assist module 1661 comprises a personal calendar and scheduling service or tool that a user may interact with via a personal electronic device, such as personal electronic device 1505 of FIG. 15 .
- the personal calendar and scheduling service or tool may comprise MICROSOFT OUTLOOK®, GOOGLE CALENDAR™, or the like.
- Clock/scheduler module 1629 within hearing assist device 1601 may be configured to store only a subset (for example, one week's worth) of scheduled appointments and events maintained by clock/scheduler assist module 1661 to conserve local storage space. Clock/scheduler module 1629 may further be configured to periodically synchronize its record of appointments and events with that maintained by clock/scheduler assist module 1661 of device 1603 when a communication pathway has been established between hearing assist device 1601 and device 1603 .
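- The subset-synchronization arrangement described above might be sketched as follows; the Event record and the one-week window are illustrative assumptions rather than details of any particular calendar service.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    title: str
    start: datetime

def local_subset(full_calendar, now=None, window_days=7):
    """Return only the near-term events the hearing assist device should cache."""
    now = now or datetime.now()
    horizon = now + timedelta(days=window_days)
    return [e for e in full_calendar if now <= e.start < horizon]
```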
- clock/scheduler assist module 1661 may also be utilized to perform all or a portion of the time/date reporting and alert/message generation functions that would normally be performed by clock/scheduler module 1629. Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved. Any alerts or messages generated by clock/scheduler assist module 1661 may be communicated back to hearing assist device 1601 for playback via at least one speaker of hearing assist device 1601. Any of a wide variety of well-known speech or audio codecs may be used to carry out such transmission of alerts and messages in an efficient manner. Additionally or alternatively, any alerts or messages generated by clock/scheduler assist module 1661 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601.
- Mode select and reconfiguration module 1631 comprises a module that enables selection and reconfiguration of various operating modes of hearing assist device 1601 .
- hearing assist device 1601 may operate in a wide variety of modes, wherein each mode may specify certain operating parameters such as: (1) from which microphones audio input is to be obtained (for example, audio input may be captured by one or more microphones of hearing assist device 1601 and/or by one or more microphones of device 1603); (2) where audio input is processed (for example, audio input may be processed by hearing assist device 1601 and/or by device 1603); (3) how audio input is processed (for example, certain audio processing features such as noise suppression, personalized frequency response processing, selective audio boosting, customized equalization, or the like may be utilized); and (4) where audio output is delivered (for example, audio output may be played back by one or more speakers of hearing assist device 1601 and/or by one or more speakers of device 1603).
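- One way to capture the four parameters above in a configuration record is sketched below; the field names and example values are illustrative assumptions, not terminology from this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class OperatingMode:
    name: str
    input_mics: list                    # e.g. ["hearing_device", "companion_device"]
    processing_site: str                # "hearing_device", "companion_device", or "both"
    features: set = field(default_factory=set)  # e.g. {"noise_suppression", "equalization"}
    output_speakers: list = field(default_factory=lambda: ["hearing_device"])

# A hypothetical mode: capture locally, process on the companion device.
tv_mode = OperatingMode("television", ["hearing_device"], "companion_device",
                        {"selective_boost"}, ["hearing_device"])
```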
- the selection and reconfiguration of a particular mode of operation may be made by a user via interaction with a user interface of hearing assist device 1601 .
- device 1603 includes a mode select and reconfiguration assist module 1663 that enables a user to select and reconfigure a particular mode of operation through interaction with a user interface of device 1603 . Any mode selection or reconfiguration information input to device 1603 may be passed to hearing assist device 1601 when a communication pathway between the two devices is established.
- device 1603 may be capable of providing a more elaborate, intuitive and user-friendly user interface by which a user can select and reconfigure operational modes of hearing assist device 1601 .
- Mode select and reconfiguration module 1631 and/or mode select and reconfiguration assist module 1663 may each be further configured to enable a user to define contexts and circumstances in which a particular mode of operation of hearing assist device 1601 should be activated or deactivated.
- Audio playback queue 1637 is configured to store a limited amount of audio content that has been received by hearing assist device 1601 so that it can be selectively played back by a wearer thereof. This feature enables the wearer to selectively play back certain audio content (such as words spoken by another or the like). For example, the last 5 seconds of audio may be played back. Such playback may be carried out at a higher volume depending upon the configuration. Such playback may be deemed desirable, for example, if the wearer did not fully comprehend something that was just said to him/her.
- Audio playback queue 1637 may comprise a first-in-first-out (FIFO) queue such that only the last few seconds or minutes of audio received by hearing assist device 1601 will be stored therein at any time.
- the audio signals stored in audio playback queue 1637 may comprise processed audio signals (such as audio signals that have already been processed by enhanced audio processing module 1627 ) or unprocessed audio signals. In the latter case, the audio signals stored in audio playback queue 1637 may be processed by enhanced audio processing module 1627 before being played back to a wearer of hearing assist device 1601 . In an embodiment in which a user is wearing two hearing assist devices, a left ear queue and a right ear queue may be maintained.
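- A minimal sketch of such a FIFO queue appears below, assuming a fixed sample rate and window length (both values illustrative).

```python
from collections import deque

class AudioPlaybackQueue:
    def __init__(self, seconds=5, sample_rate=16000):
        self._buf = deque(maxlen=seconds * sample_rate)  # oldest samples fall off

    def push(self, samples):
        self._buf.extend(samples)

    def replay(self):
        return list(self._buf)   # the most recent window, ready for playback
```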
- audio playback queue 1669 of device 1603 may also operate to perform all or part of the audio storage operation that would otherwise be performed by audio playback queue 1637 of hearing assist device 1601 .
- audio playback queue 1669 may also support the aforementioned audio playback functionality by storing a limited amount of audio content received by hearing assist device 1601 and transmitted to device 1603 . By so doing, power and storage resources of hearing assist device 1601 may be conserved.
- audio playback queue 1669 provided by device 1603 may be capable of storing more and/or higher quality audio content than can be stored by audio playback queue 1637 .
- device 1603 may independently record ambient audio via one or more microphones thereof and store such audio in audio playback queue 1669 for later playback to the wearer. Such playback may occur via one or more speakers of hearing assist device 1601 or, alternatively, via one or more speakers of device 1603 . Playback by device 1603 may be opted for, for example, in a case where hearing assist device 1601 is in a low power state, or is missing or fully discharged.
- pressing a button on, or tapping, hearing assist device 1601 may initiate playback of a limited amount of audio.
- playback may be initiated by interacting with a user interface of device 1603 , such as by pressing a button or tapping an icon on a touchscreen of device 1603 .
- uttering certain words or sounds, such as “repeat” or “playback,” may trigger playback. This feature can be implemented using the speech recognition functionality of hearing assist device 1601 or device 1603.
- recording of audio may be carried out over extended periods of time (for example, minutes, tens of minutes, or hours).
- audio playback queue 1669 may be relied upon to store the recorded audio content, as device 1603 may have access to greater storage resources than hearing assist device 1601 .
- Audio compression may be used in any of the aforementioned implementations to reduce consumption of storage.
- audio may be recorded for purposes other than playing back recently received audio.
- recording may be used to capture the content of meetings, concerts, or other events that a wearer of hearing assist device 1601 attends so that such audio can be replayed at a later time or shared with others.
- Recording may also be used for health reasons. For example, a wearer's breathing noises may be recorded while the wearer is sleeping and later analyzed to determine whether or not the wearer suffers from sleep apnea.
- FIG. 18 depicts a flowchart 1800 of a method for providing audio playback support to a hearing assist device, such as hearing assist device 1601 .
- the method of flowchart 1800 begins at step 1802 , in which an audio signal obtained via at least one microphone of the hearing assist device is received.
- at step 1804, a copy of the received audio signal is stored in an audio playback queue.
- at step 1806, the copy of the received audio signal is retrieved from the audio playback queue for playback to a wearer of the hearing assist device.
- in one embodiment, each of steps 1802, 1804 and 1806 is performed by a hearing assist device, such as hearing assist device 1601.
- in another embodiment, each of steps 1802, 1804 and 1806 is performed by a device or service that is external to the hearing assist device and communicatively connected thereto via a communication pathway, such as device 1603 or a service implemented by device 1603.
- the method of flowchart 1800 may further include playing back the copy of the received audio signal to the wearer of the hearing assist device via at least one speaker of the hearing assist device or via at least one speaker of a portable electronic device that is carried by or otherwise accessible to the wearer of the hearing assist device.
- Local storage 1635 also stores information and settings 1639 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services accessed by or on behalf of hearing assist device 1601.
- Such information and settings may include, for example, owner information (which may be used, for example, to recognize and/or authenticate an owner of hearing assist device 1601 ), security information (including but not limited to passwords, passcodes, encryption keys or the like) used to facilitate private and secure communication with external devices (such as device 1603 ), and account information useful for signing in to various services available on certain external computer systems.
- Such information and settings may also include personalized selections and controls relating to user-configurable aspects of the operation of hearing assist device 1601 and/or to user-configurable aspects of the operation of any device with which hearing assist device 1601 may be paired, or any services (cloud-based or otherwise) that may be accessed by or on behalf of hearing assist device 1601 .
- storage 1667 of device 1603 also includes information and settings 1671 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services accessed by or on behalf of hearing assist device 1601.
- Information and settings 1671 may comprise a backup copy of information and settings 1639 stored on hearing assist device 1601 . Such a backup copy may be updated periodically when hearing assist device 1601 and device 1603 are communicatively linked. Such a backup copy may be maintained on device 1603 in order to ensure that important data is not lost or otherwise rendered inaccessible if hearing assist device 1601 is lost or runs out of power.
- information and settings 1639 stored on hearing assist device 1601 may be temporarily or permanently moved to device 1603 to free up storage space on hearing assist device 1601, in which case information and settings 1671 may comprise the only copy of such data.
- information and settings 1671 stored on device 1603 may comprise a superset of information and settings 1639 stored on hearing assist device 1601 .
- hearing assist device 1601 may selectively retrieve necessary information and settings from device 1603 on an as-needed basis and cache only a subset of such data in local storage 1635 .
- sensor components and associated circuitry 1641 of hearing assist device 1601 may include any number of sensors including but not limited to one or more microphones, bone conduction sensors, temperature sensors, blood pressure sensors, blood glucose sensors, pulse oximetry sensors, pH sensors, vibration sensors, accelerometers, gyros, magnetos, or the like.
- where device 1603 comprises a portable electronic device that is carried by or otherwise locally accessible to a wearer of hearing assist device 1601 (such as portable electronic device 1505), the sensor components and associated circuitry of device 1603 may also include all or some subset of the foregoing sensors.
- device 1603 may comprise a smart phone that includes one or more microphones, accelerometers, gyros, or magnetos.
- one or more of the sensors included in device 1603 may be used to perform all or a portion of the functions performed by corresponding sensor(s) in hearing assist device 1601 .
- By so doing, battery power of hearing assist device 1601 may be conserved.
- data provided by the sensors included within device 1603 may be used to augment or verify information provided by the sensors within hearing assist device 1601 .
- information provided by any accelerometers, gyros or magnetos included within device 1603 may be used to provide enhanced information regarding a current body position (for example, standing up, leaning over or lying down) and/or orientation of the wearer of hearing assist device 1601 .
- Device 1603 may also include a GPS device that can be utilized to provide enhanced location information regarding the wearer of hearing assist device 1601 .
- device 1603 may include its own set of health monitoring sensors that can produce data that can be combined with data produced by health monitoring sensors of hearing assist device 1601 to provide a more accurate or complete picture of the state of health of the wearer of hearing assist device 1601 .
- hearing assist device 1601 may have a very simple user interface or a user interface that is more elaborate.
- the user interface thereof may comprise very simple mechanical elements such as switches, buttons or dials. This may be due to the very limited surface area available for supporting such an interface.
- a voice-based user interface or a simple touch-based or tap-based user interface based on the use of capacitive sensing is possible.
- head motion sensing, local or remote voice activity detection (VAD), or audio monitoring may be used to place a hearing assist device into a fully active state.
- where hearing assist device 1601 comprises an integrated part of a pair of glasses, a visor, or a helmet, a more elaborate user interface comprising one or more displays and other features may be possible.
- where device 1603 comprises a portable electronic device that is carried by or otherwise locally accessible to a wearer of hearing assist device 1601 (such as portable electronic device 1505), supplemental user interface components and associated circuitry 1675 of device 1603 may provide a means by which a user can interact with hearing assist device 1601, thereby extending the user interface of that device.
- device 1603 may comprise a phone or tablet computer having a touch screen display that can be used to interact with and manage the features of hearing assist device 1601 .
- an application may be downloaded to or otherwise installed on device 1603 that enables a user thereof to interact with and manage the features of hearing assist device 1601 by interacting with a touch screen display or other user interface element of device 1603 .
- Such user interface may be made accessible to a user only when a communication pathway is established between device 1603 and hearing assist device 1601 so that changes to the configuration of hearing assist device 1601 can be applied to that device in real time.
- Such user interface may be made accessible to a user even when there is no communication pathway established between device 1603 and hearing assist device 1601.
- any changes made to the configuration of hearing assist device 1601 via the user interface provided by device 1603 may be stored on device 1603 and then later transmitted to hearing assist device 1601 when a suitable communication pathway becomes available.
- the quality of audio content received by a hearing assist device may be improved by utilizing an external device or service to process such audio content when such external device or service is communicatively connected to the hearing assist device.
- enhanced audio processing assist module 1659 of device 1603 may process audio content received from hearing assist device 1601 to achieve a desired frequency response and/or spatial signaling and then return the processed audio content to hearing assist device 1601 for playback thereby.
- any other audio processing technique that may have the effect of improving audio quality may be applied by such external device or service, including but not limited to any of a variety of noise suppression or speech intelligibility enhancement techniques, whether presently known or hereinafter developed. Whether or not such connected external device or service is utilized to perform such enhanced processing may depend on a variety of factors, including a current state of a battery of the hearing assist device, a current selected mode of operation of the hearing assist device, or the like.
- an external device may forward audio content to another device to which it is communicatively connected (for example, any device used to implement hearing assist device support service(s) 1511 or support personnel system(s) 1515 ) so that such audio content may be processed by such other device.
- the audio that is remotely processed and returned to the hearing assist device is audio that is captured by one or more microphones of an external device rather than by the microphone(s) of the hearing assist device itself.
- This allows the hearing assist device to avoid having to capture, package and transmit audio, thereby conserving battery power and other resources.
- where device 1603 comprises a portable electronic device carried by or otherwise locally accessible to a wearer of hearing assist device 1601, one or more microphones of device 1603 may be used to capture audio content from an environment in which the wearer is located.
- any enhanced audio processing may be performed by device 1603 or by a device or service accessible thereto.
- the processed audio content may then be delivered by device 1603 to hearing assist device 1601 for playback thereby. Additionally or alternatively, such processed audio content may be played back via one or more speakers of device 1603 itself.
- the foregoing approach to audio processing may be deemed desirable, for example, if hearing assist device 1601 is in a very low power or even non-functioning state.
- the foregoing approach to audio enhancement may actually produce higher quality audio than would be produced using only the microphone(s) of hearing assist device 1601 .
- in the foregoing examples, audio content is processed for the purpose of enhancing the quality thereof.
- audio content may also be processed for speech recognition purposes.
- the audio content may comprise one or more voice commands that are intended to initiate or provide input to a process executing outside of hearing assist device 1601 .
- the audio content may be captured by microphone(s) of device 1603 and processed by device 1603 or by a device or service accessible thereto.
- what is returned to the wearer may comprise something other than a processed version of the original audio content captured by device 1603 .
- if the voice commands were intended to initiate an Internet search, then what is returned to the wearer may comprise the results of such a search.
- the search results may be presented on a display of device 1603, for example.
- where hearing assist device 1601 comprises an integrated part of a pair of glasses, visor or helmet having a display, the search results may be presented on such display.
- such search results could be played back via one or more speakers of device 1603 or hearing assist device 1601 using text-to-speech conversion.
- a wearer of hearing assist device 1601 may initiate operation in a mode in which audio content is captured by one or more microphone(s) of device 1603 and processed by device 1603 (or by a device or service accessible to device 1603 ) to achieve desired audio effects, such as custom equalization, emphasized surround sound effects, or the like.
- sensors included in hearing assist device 1601 and/or device 1603 may be used to determine a position of the wearer's head relative to one or more audio sources and then to modify audio content to achieve an appropriate surround sound effect given the position of the wearer's head and the location of the audio source(s).
- each hearing assist device may include multiple speakers (such as piezoelectric speakers) to deliver a surround sound effect.
- the desired audio effects described above may be defined by a user and stored as part of a profile associated with the user and/or with a particular operational mode of a hearing assist device, wherein the operational mode may be further associated with certain contexts or conditions in which the mode should be utilized.
- a profile may be formatted in a standardized manner such that it can be used by a variety of hearing assist devices and audio reproduction systems.
- a wearer of hearing assist device 1601 may define and initiate any of the foregoing operational modes by interacting with a user interface of hearing assist device 1601 or a user interface of device 1603 depending upon the implementation.
- the improvement of audio quality as described herein may include suppressing audio components generated by certain audio sources and/or boosting audio components generated by certain other audio sources. Such suppression or boosting may be performed by device 1603 (and/or a device or service accessible thereto), with processed audio being returned to hearing assist device 1601 for playback thereby. Additionally or alternatively, processed audio may be played back by device 1603 in scenarios in which device 1603 is local with respect to the wearer of hearing assist device 1601 . In accordance with the foregoing scenarios, the original audio may be captured by one or more microphones of hearing assist device 1601 , a second hearing assist device, and/or device 1603 when device 1603 is local with respect to the wearer of hearing assist device 1601 .
- the noise suppression function may utilize not only audio signal(s) captured by the microphones of the hearing assist device(s) worn by a user but also the audio signal(s) captured by the microphone(s) of a portable electronic device carried by or otherwise accessible to the user.
- in this way, the ability of a noise suppression algorithm to identify and suppress noise can be improved.
- FIG. 19 is a block diagram of a noise suppression system 1900 that may be utilized by a hearing assist device or a device/service communicatively connected thereto in accordance with an embodiment.
- Noise suppression system 1900 is configured to process an audio signal produced by a microphone of a left ear hearing assist device (denoted MIC L), an audio signal produced by a microphone of a right ear hearing assist device (denoted MIC R), and an audio signal produced by a microphone of an external device (denoted MIC EXT) to produce a noise-suppressed audio signal for playback to the left ear of a user (denoted LEFT).
- noise suppression system 1900 includes an amplifier 1902 that amplifies the MIC L signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19 ). The output of amplifier 1902 is passed to a noise suppressor 1908 .
- Noise suppression system 1900 further includes an amplifier 1904 that amplifies the MIC R signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19 ). The output of amplifier 1904 is passed to noise suppressor 1908 .
- Noise suppression system 1900 still further includes an amplifier 1906 that amplifies the MIC EXT signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19). The output of amplifier 1906 is passed to noise suppressor 1908.
- Noise suppressor 1908 applies a noise suppression algorithm that utilizes all three amplified microphone signals to generate a noise-suppressed version of the MIC L signal.
- the noise-suppressed audio signal generated by noise suppressor 1908 is passed to an amplifier 1910 that amplifies it to produce the LEFT audio signal.
- Such signal may also be converted from digital to analog form by a D/A converter (not shown in FIG. 19 ) prior to playback.
- noise suppression system 1900 must have access to the MIC L signal obtained by the left ear hearing assist device, the MIC R signal obtained by the right ear hearing assist device, and the MIC EXT signal obtained by the external device. This can be achieved by establishing suitable communication pathways between such devices.
- the MIC L and MIC R signals may be obtained through skin-based communication and/or BLE communication between the portable electronic device and one or both of the two hearing assist devices, while the MIC EXT signal can be obtained directly from a microphone of the portable electronic device. Still other microphone signals other than those shown in FIG. 19 may be used to improve the performance of a noise suppressor.
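- The following toy spectral-subtraction sketch illustrates how three microphone signals might feed a suppressor such as noise suppressor 1908, under the simplifying assumption that the external microphone mostly captures ambient noise; real multi-microphone suppressors are far more sophisticated, and the weighting scheme here is purely illustrative.

```python
import numpy as np

def suppress_left(mic_l, mic_r, mic_ext, alpha=0.8):
    """Generate a noise-suppressed version of the MIC L signal (equal-length arrays)."""
    L, R, E = (np.fft.rfft(np.asarray(x, dtype=float)) for x in (mic_l, mic_r, mic_ext))
    # Bins where left and right ears agree likely carry the desired source,
    # so subtract less of the external noise estimate there.
    agreement = np.minimum(np.abs(L), np.abs(R)) / (np.abs(L) + 1e-12)
    noise = alpha * np.abs(E) * (1.0 - 0.5 * agreement)
    mag = np.maximum(np.abs(L) - noise, 0.0)              # spectral subtraction
    return np.fft.irfft(mag * np.exp(1j * np.angle(L)), n=len(mic_l))
```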
- a selection may be made between using audio input provided by the microphone(s) of the hearing assist device(s) and using audio input provided by the microphone(s) of the portable electronic device. Such selection may be made manually by the wearer of the hearing assist device(s) or may be made automatically by the hearing assist device(s) and/or the portable electronic device based on a variety of factors, including but not limited to the state of the battery of the hearing assist device(s), the quality of the audio signals being captured by each device, the environment in which the wearer is located, or the like.
- improving audio quality may also comprise selectively applying a boosting or amplification function to certain types of audio signals (for example, music or speech), to components of an audio signal emanating from a certain source, and/or to components of an audio signal emanating from a particular direction, while not amplifying or actively suppressing other audio signal types or components.
- Such processing may occur responsive to the user initiating a particular mode of operation or may occur automatically in response to detecting the existence of certain predefined conditions.
- a user may activate a “forward only” mode in which audio signals emanating from in front of the user are boosted and signals emanating from other directions are not boosted or are actively attenuated.
- Such mode of operation may be desired when the user is engaging in conversation with a person that is directly in front of him/her. Additionally, such mode of operation may automatically be activated if it can be determined from sensor data obtained by the hearing assist device(s) worn by the user and/or by a portable electronic device carried by the user that the user is engaging in conversation with a person that is directly in front of him/her.
- a user may activate a “television” mode in which audio signals emanating from a television are boosted and signals emanating from other sources are not boosted or are actively attenuated. Additionally, such mode of operation may automatically be activated if it can be determined from sensor data obtained by the hearing assist device(s) worn by the user and/or by a portable electronic device carried by the user that the user is watching television.
- the audio processing functionality may be designed, programmed or otherwise configured such that certain sounds or noises should never be suppressed.
- the audio processing functionality may be configured to always pass certain sounds such as extremely elevated sounds, a telephone or doorbell ringing, the honking of a car horn, an alarm or siren sounding, repeated sounds, or the like, to ensure that the wearer is made aware of important events.
- the audio processing functionality may utilize speech recognition to ensure that certain uttered words are passed to the wearer, such as the wearer's name, the word “help” or other words.
- the types of audio that are boosted, passed or suppressed may be determined based on detecting prior and/or current activities of the user, inactivity of the user, time of day, or the like. For example, if it is determined from sensor data and from information derived therefrom that a user is sleeping, then all audio input may be suppressed with certain predefined exceptions. Likewise, certain sounds or verbal instructions may be injected at certain times, such as an alarm or morning wakeup music in the morning.
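- A minimal sketch of such a rule layer follows; the label strings are illustrative classifier outputs, not categories defined by this disclosure.

```python
ALWAYS_PASS = {"alarm", "siren", "doorbell", "phone_ring", "car_horn",
               "wearer_name", "help"}

def should_pass(label, mode_suppresses):
    """Never suppress safety-critical sounds, regardless of the active mode."""
    return label in ALWAYS_PASS or not mode_suppresses

# e.g. should_pass("siren", mode_suppresses=True) -> True
```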
- the required audio processing may be performed either by a hearing assist device, such as hearing assist device 1601 , or by an external device, such as device 1603 , with which hearing assist device 1601 is communicatively connected.
- in the latter case, power, processing and storage resources of the hearing assist device may advantageously be conserved.
- in an embodiment in which a user wears two hearing assist devices, one hearing assist device may be selected to perform any of the audio processing tasks described herein on behalf of the other. Such selection may be by design in that one hearing assist device is equipped with more audio processing capabilities than the other. Alternatively, such selection may be performed dynamically based on a variety of factors including the comparative battery levels of each hearing assist device, a processing load currently assigned to each hearing assist device, or the like.
- Any audio that is processed by a first hearing assist device on behalf of a second hearing assist device may originate from one or more microphones of the first hearing assist device, from one or more microphones of the second hearing assist device, or from one or more microphones of a portable electronic device that is carried by or otherwise locally accessible to a wearer of the first and second hearing assist devices.
- FIG. 20 depicts a flowchart 2000 of a method for providing external operational support to a hearing assist device worn by a user, such as hearing assist device 1601 .
- the method of flowchart 2000 begins at step 2002 in which a communication pathway is established to the hearing assist device.
- at step 2004, an audio signal obtained by the hearing assist device is received via the communication pathway.
- at step 2006, the audio signal is processed to obtain processing results.
- at step 2008, the processing results are transmitted to the hearing assist device via the communication pathway.
- each of the establishing, receiving, processing and transmitting steps may be performed by one of a second hearing assist device worn by the user, a portable electronic device carried by or otherwise accessible to the user, or a device or service that is capable of communicating with the hearing assist device via a portable electronic device carried by or otherwise accessible to the user.
- device 1603 may represent either a portable electronic device carried by or otherwise accessible to the user or a device or service that is capable of communicating with the hearing assist device via such a portable electronic device.
- step 2002 of flowchart 2000 may comprise establishing a communication link with the hearing assist device using one of NFC, BTLE technology, WPT technology, telecoil, or skin-based communication technology.
- step 2006 of flowchart 2000 comprises processing the audio signal to generate an enhanced audio signal having a desired frequency response associated with the user and step 2008 comprises transmitting the enhanced audio signal to the hearing assist device via the communication pathway for playback thereby.
- step 2006 of flowchart 2000 comprises processing the audio signal to generate an enhanced audio signal having a desired spatial signaling characteristic associated with the user and step 2008 comprises transmitting the enhanced audio signal to the hearing assist device via the communication pathway for playback thereby.
- step 2006 of flowchart 2000 comprises applying noise suppression to the audio signal to generate a noise-suppressed audio signal and step 2008 comprises transmitting the noise-suppressed audio signal to the hearing assist device via the communication pathway for playback thereby.
- applying noise suppression to the audio signal may comprise processing the audio signal and at least one additional audio signal obtained by a portable electronic device carried by or otherwise accessible to the user.
- step 2006 of flowchart 2000 comprises applying speech recognition to the audio signal to identify one or more recognized words.
- FIG. 21 depicts a flowchart 2100 that illustrates steps that may be performed in addition to those shown in flowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearing assist device 1601 .
- the first additional step is step 2102 , which comprises receiving a second audio signal obtained by a portable electronic device that is carried by or otherwise accessible to the user.
- at step 2104, the second audio signal is processed to obtain processing results.
- at step 2106, the processing results are transmitted to the portable electronic device.
- FIG. 22 depicts a flowchart 2200 that illustrates steps that may be performed in addition to those shown in flowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearing assist device 1601 .
- the first additional step is step 2202 , which comprises receiving a second audio signal obtained by a portable electronic device that is carried by or otherwise accessible to the user.
- at step 2204, the second audio signal is processed to obtain processing results.
- at step 2206, the processing results are transmitted to the hearing assist device.
- FIG. 23 depicts a flowchart 2300 that illustrates steps that may be performed in addition to those shown in flowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearing assist device 1601 .
- the first additional step is step 2302 , which comprises receiving a second audio signal obtained by the hearing assist device.
- at step 2304, the second audio signal is processed to obtain processing results.
- at step 2306, the processing results are transmitted to a portable electronic device that is carried by or otherwise accessible to the user.
- an audio signal received by one or more microphones of a hearing assist device may be suppressed or blocked while a substitute audio input signal may be delivered to the wearer.
- a language translation feature may be implemented in which an audio signal received by one or more microphones of a hearing assist device is transmitted to an external device or service.
- the external device or service applies a combination of speech recognition and translation thereto to synthesize a substitute audio signal.
- the substitute audio signal comprises a translated version of the speech included in the original audio signal.
- the substitute audio signal is then transmitted back to the hearing assist device for playback thereby. While this is occurring, the hearing assist device utilizes active filtering to suppress the original audio signal or blocks it entirely, so that the wearer can clearly hear the substitute audio signal being played back through a speaker of the hearing assist device.
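- The translation pipeline just described might look like the following sketch; the recognize, translate and synthesize stubs stand in for whatever speech recognition, machine translation and text-to-speech engines the external device or service actually provides, and are purely hypothetical placeholders.

```python
def recognize(audio, language):
    return "hello"        # placeholder speech-to-text result

def translate(text, source, target):
    return "bonjour"      # placeholder machine translation result

def synthesize(text, language):
    return b"\x00\x00"    # placeholder synthesized audio bytes

def substitute_translated_audio(original_audio, src_lang, dst_lang):
    text = recognize(original_audio, language=src_lang)
    translated = translate(text, source=src_lang, target=dst_lang)
    return synthesize(translated, language=dst_lang)   # substitute audio signal
```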
- an audio signal generated by a television, a DVD player, a compact disc (CD) player, a set top box, a portable media player, a handheld gaming device, or other entertainment device may be routed to a hearing assist device worn by a user for playback thereby.
- Such entertainment devices may also include smart phones, tablet computers, and other computing devices capable of running entertainment applications. While the user is listening to the audio being generated by the entertainment device, the hearing assist device may operate to suppress ambient background noise using an active filtering function, thereby providing the user with an improved listening experience.
- the delivery of the audio signal from the entertainment device to the hearing assist device and suppression of ambient background noise may occur in response to the establishment of a communication link between the hearing assist device and the entertainment device, or in response to other detectable factors, such as the hearing assist device being within a certain range of the entertainment device or the like.
- the delivery of the audio signal from the entertainment device to the hearing assist device and suppression of ambient background noise may be discontinued in response to the breaking of a communication link between the hearing assist device and the entertainment device, or in response to other detectable factors, such as the hearing assist device passing outside of a certain range of the entertainment device or the like.
- the functionality described above for suppressing ambient audio in favor of a substitute audio stream could be configured to always pass certain sounds such as extremely elevated sounds, a telephone or doorbell ringing, the honking of a car horn, an alarm or siren sounding, repeated sounds, or the like, to ensure that the wearer is made aware of important events.
- such functionality may utilize speech recognition to ensure that certain uttered words are always passed to the wearer, such as the wearer's name, the word “help” or other words.
- the functionality that monitors for such sounds and words may be present in the hearing assist device or in a portable electronic device that is communicatively connected thereto.
- the substitute audio stream may be paused or discontinued (for example, a song the wearer was listening to may be paused or discontinued or a movie the wearer was viewing may be paused or discontinued).
- the suppression of ambient noise may also be discontinued.
- a hearing assist device in accordance with an embodiment can receive any number of audio signals and selectively pass one or a mixture of some or all of the audio signals for playback to a wearer thereof. Additionally, a hearing assist device in accordance with such an embodiment can selectively amplify or suppress any one of the aforementioned audio signals. This is illustrated by the block diagram of FIG. 24 , which shows an audio processing module 2400 that may be implemented in a hearing assist device in accordance with an embodiment.
- audio processing module 2400 is capable of receiving at least four different audio signals. These include an audio signal captured by a microphone of the hearing assist device (denoted MIC), an audio signal received via an NFC interface of the hearing assist device (denoted NFC), an audio signal received via a BLE interface of the hearing assist device (denoted BLE), and an audio signal received via a skin-based communication interface of the hearing assist device (denoted SKIN). Audio processing module 2400 is configured to process these audio signals to generate an output audio signal for playback via a speaker 2412 .
- each of the MIC, NFC, BLE and SKIN signals is amplified by a corresponding amplifier 2402 , 2412 , 2422 and 2432 .
- Each of these signals may also be converted from analog to digital form by a corresponding A/D converter (not shown in FIG. 24 ).
- the amplified signals are then passed to a corresponding multiplier 2404 , 2414 , 2424 and 2434 , each of which applies a certain scaling function thereto, wherein such scaling function can be used to determine a relative degree to which each signal will contribute to a final output signal.
- switches 2406, 2416, 2426 and 2436 can be used to selectively remove the output of any of multipliers 2404, 2414, 2424 and 2434 from the final output signal. Any signals passed through switches 2406, 2416, 2426 and 2436 are received by a mixer 2408 which combines such signals to produce a combined audio signal. The combined audio signal is then passed to an amplifier 2410 which amplifies it to produce the output audio signal that will be played back by speaker 2412. The output audio signal may also be converted from digital to analog form by a D/A converter (not shown in FIG. 24) prior to playback.
- audio processing module 2400 may include additional logic that can apply active filtering, noise suppression, speech intelligibility enhancement, or any of a variety of audio signal processing functions to any of the audio signals received by the hearing assist device. Such functionality can be used to emphasize certain sounds, for example. Additionally, audio processing module 2400 may also include an output path by which the MIC signal can be passed to an external device for remote processing thereof. Such remotely-processed signal may then be returned via any of the NFC, BLE or skin-based communication interfaces discussed above.
- FIG. 24 thus illustrates that different audio streams may be picked up by the same hearing assist device. Whether one audio stream is exposed or not may depend on the circumstances, which can change from time to time. Consequently, each audio stream is delivered or filtered in varying dB intensities with prescribed equalization as managed by the hearing assist device or any one or more of the devices or services to which the hearing assist device may be communicatively connected.
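- For illustration only, the mix-and-select structure of FIG. 24 reduces to something like the following, where the per-source gains play the role of the multipliers, the enabled flags play the role of the switches, and the source names are illustrative.

```python
import numpy as np

def mix_sources(sources, gains, enabled, out_gain=1.0):
    """sources: dict of equal-length sample arrays keyed e.g. 'MIC', 'NFC', 'BLE', 'SKIN'."""
    total = None
    for name, samples in sources.items():
        if not enabled.get(name, False):           # open switch: source excluded
            continue
        contribution = gains.get(name, 1.0) * np.asarray(samples, dtype=float)
        total = contribution if total is None else total + contribution
    return None if total is None else out_gain * total   # mixer plus output amplifier
```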
- a portable electronic device (such as portable electronic device 1505) carried by or otherwise locally accessible to a wearer of a hearing assist device (such as hearing assist device 1501 or 1503) is configured to detect when the hearing assist device is missing from the wearer's ear or discharged.
- the portable electronic device responds by entering a hearing assist mode in which it captures ambient audio and processes it in accordance with a prescription associated with the wearer.
- a prescription may specify, for example, a desired frequency response or other desired characteristics of audio to be played back to the wearer.
- Such hearing assist mode may also be manually triggered by the wearer through interaction with a user interface of the portable electronic device.
- the foregoing hearing assist mode may also be used to equalize and amplify incoming telephone audio as well.
- the functionality of the hearing assist mode may be included in an application that can be downloaded or otherwise installed on the portable electronic device.
- the activation and use of the hearing assist mode of the portable electronic device may be carried out in a way that is not immediately discernible to others who may be observing the user.
- where the portable electronic device comprises a telephone, the telephone may be programmed to enter the hearing assist mode when the user raises the telephone to his/her ear and utters a particular activation word or words.
- Such a feature enables a user to make it look as if he or she is simply using his/her phone.
- the portable electronic device may be configured to use one or more sensors (for example, a camera and/or microphone) to determine who the current user of the portable electronic device is and to automatically select the appropriate prescription for that user when entering hearing assist mode.
- the user may interact with a user interface of the portable electronic device to select an appropriate volume level and prescription.
- the hearing assist device may be capable of issuing a warning message to the wearer thereof when it appears that the battery level of the hearing assist device is low.
- the wearer may utilize the portable electronic device to perform a recharging operation by bringing the portable electronic device within a range of the hearing assist device that is suitable for wirelessly transferring power thereto as was previously described.
- the wearer may activate a mode of operation in which certain operations normally performed by the hearing assist device are performed instead by the portable electronic device or by a device or service that is communicatively connected to the portable electronic device.
- a personal electronic device may be used to perform a hearing test on a wearer of hearing assist device (such as hearing assist device 1501 or 1503 ).
- the hearing test may involve causing the hearing assist device to play back sounds having certain frequencies at certain volumes and soliciting feedback from the wearer regarding whether such sounds were heard or not.
- Still other types of hearing tests may be performed.
- a hearing test designed to determine a head transfer function useful in achieving desired spatial signaling for a particular user may also be administered.
- the test results may be analyzed to generate a personalized prescription for the wearer.
- Sensors within the hearing assist device may be used to measure distance to the ear drum or other factors that may influence test results so that such factors can be accounted for in the analysis.
- the personalized prescription may then be downloaded or otherwise transmitted to the hearing assist device for implementation thereby.
- Such personalized prescription may be formatted in a standardized manner such that it may be used by a variety of hearing assist devices or audio reproduction systems.
- in one embodiment, test results are processed locally by the portable electronic device to generate a prescription.
- in another embodiment, test results are transmitted from the portable electronic device to a remote system for automated analysis and/or analysis by a clinician or other qualified party, and a prescription is generated via such remote analysis.
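- A simplified tone-audiometry loop of the kind contemplated above is sketched below; play_tone and wearer_heard are hypothetical device hooks (shown here as stubs), and the frequency and level grids are illustrative.

```python
def play_tone(freq_hz, level_db):
    """Stub for the device call that plays a calibration tone."""

def wearer_heard():
    """Stub for soliciting wearer feedback (button press, voice response, etc.)."""
    return True

def run_hearing_test(frequencies=(250, 500, 1000, 2000, 4000, 8000),
                     levels_db=range(0, 90, 10)):
    thresholds = {}
    for freq in frequencies:
        for level in levels_db:
            play_tone(freq, level)
            if wearer_heard():
                thresholds[freq] = level   # softest level heard at this frequency
                break
        else:
            thresholds[freq] = None        # not heard at any tested level
    return thresholds
```

The resulting per-frequency thresholds are the kind of raw data from which a personalized prescription could then be derived, whether locally or remotely.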
- the hearing assist devices described herein may comprise devices such as those shown in FIGS. 2-6 and 15 . However, it is noted that the hearing assist devices described herein may comprise a part of any structure or article that may cover an ear of a user or that may be proximally located to an ear of a user. For example, the hearing assist devices described herein may comprise a part of a headset, a pair of glasses, a visor, or a helmet worn by a user or may be designed to be connected or tethered to such headset, pair of glasses, visor, or helmet.
Description
- This application claims the benefit of U.S. Provisional Application No. 61/662,217, filed on Jun. 20, 2012, which is incorporated by reference herein in its entirety.
- 1. Technical Field
- The subject matter described herein relates to hearing assist devices and devices and services that are capable of providing external operational support to such hearing assist devices.
- 2. Description of Related Art
- Persons may become hearing impaired for a variety of reasons, including aging and being exposed to excessive noise, which can both damage hair cells in the inner ear. A hearing aid is an electro-acoustic device that typically fits in or behind the ear of a wearer, and amplifies and modulates sound for the wearer. Hearing aids are frequently worn by persons who are hearing impaired to improve their ability to hear sounds. A hearing aid may be worn in one or both ears of a user, depending on whether one or both of the user's ears need hearing assistance.
- Less expensive hearing aids amplify all frequencies equally, while mid-range analog and digital hearing aids can be programmed to amplify in a manner tuned to a hearing impaired wearer's actual frequency response. The most expensive models adapt via operating modes. In some modes, a directional microphone is used, while an omnidirectional microphone is used in others.
- Since most hearing aids rely on battery power to operate, it is critical that hearing aids are designed so as not to consume battery power too quickly. This places a constraint on the types of features and processes that can be built into a hearing aid. Furthermore, it is desirable that hearing aids be lightweight and small so that they are comfortable to wear and not readily discernible to others. This also operates as a constraint on both the size of the batteries that can be used to power the hearing aid and the types of functionality that can be integrated into it.
- If the hearing aid batteries are dead or a hearing aid is left at home, a wearer needing hearing aid support is at a loss. This often results in someone raising their speaking volume to help the wearer hear what they are saying. Unfortunately, because hearing problems often have a frequency profile, merely raising one's volume may not work. Similarly, raising the volume on a cell phone may not adequately provide understandable audio to someone with hearing impairment.
- The accompanying drawings, which are incorporated herein and form part of the specification, illustrate the subject matter of the present application and, together with the description, further serve to explain the principles of the embodiments described herein and to enable a person skilled in the relevant art(s) to make and use such embodiments.
- FIG. 1 shows a communication system that includes a multi-sensor hearing assist device that communicates with a near field communication (NFC)-enabled communications device, according to an exemplary embodiment.
- FIGS. 2-4 show various configurations for associating a multi-sensor hearing assist device with an ear of a user, according to exemplary embodiments.
- FIG. 5 shows a multi-sensor hearing assist device that mounts over an ear of a user, according to an exemplary embodiment.
- FIG. 6 shows a multi-sensor hearing assist device that extends at least partially into the ear canal of a user, according to an exemplary embodiment.
- FIG. 7 shows a circuit block diagram of a multi-sensor hearing assist device that is configured to communicate with external devices according to multiple communication schemes, according to an exemplary embodiment.
- FIG. 8 shows a flowchart of a process for a hearing assist device that processes and transmits sensor data and receives a command from a second device, according to an exemplary embodiment.
- FIG. 9 shows a communication system that includes a multi-sensor hearing assist device that communicates with one or more communications devices and network-connected devices, according to an exemplary embodiment.
- FIG. 10 shows a flowchart of a process for wirelessly charging a battery of a hearing assist device, according to an exemplary embodiment.
- FIG. 11 shows a flowchart of a process for broadcasting sound that is generated based on sensor data, according to an exemplary embodiment.
- FIG. 12 shows a flowchart of a process for generating and broadcasting filtered sound from a hearing assist device, according to an exemplary embodiment.
- FIG. 13 shows a flowchart of a process for generating an information signal in a hearing assist device based on a voice of a user, and transmitting the information signal to a second device, according to an exemplary embodiment.
- FIG. 14 shows a flowchart of a process for generating voice based at least on sensor data to be broadcast by a speaker of a hearing assist device to a user, according to an exemplary embodiment.
- FIG. 15 is a block diagram of an example system that enables external operational support to be provided to a hearing assist device in accordance with an embodiment.
- FIG. 16 is a block diagram of a system comprising a hearing assist device and a cloud/service/phone/portable device that may provide external operational support thereto.
- FIG. 17 is a block diagram of an enhanced audio processing module that may be implemented by a hearing assist device to provide enhanced spatial signaling in accordance with an embodiment.
- FIG. 18 depicts a flowchart of a method for providing audio playback support to a hearing assist device in accordance with an embodiment.
- FIG. 19 is a block diagram of a noise suppression system that may be utilized by a hearing assist device or a device/service communicatively connected thereto in accordance with an embodiment.
- FIGS. 20-23 depict flowcharts of methods for providing external operational support to a hearing assist device worn by a user in accordance with various embodiments.
- FIG. 24 is a block diagram of an audio processing module that may be implemented in a hearing assist device in accordance with an embodiment.
- The features and advantages of the subject matter of the present application will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
- The following detailed description discloses numerous example embodiments. The scope of the present patent application is not limited to the disclosed embodiments, but also encompasses combinations of the disclosed embodiments, as well as modifications to the disclosed embodiments.
- References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- Furthermore, it should be understood that spatial descriptions (e.g., “above,” “below,” “up,” “left,” “right,” “down,” “top,” “bottom,” “vertical,” “horizontal,” etc.) used herein are for purposes of illustration only, and that practical implementations of the structures described herein can be spatially arranged in any orientation or manner.
- Opportunities exist for integrating further functionality into hearing assist devices that are worn in/on a human ear. Hearing assist devices, such as hearing aids, headsets, and headphones, are typically worn in contact with the user's ear, and in some cases extend into the user's ear canal. As such, a hearing assist device is typically positioned in close proximity to various organs and physical features of a wearer, such as the inner ear structure (for example, the ear canal, ear drum, ossicles, Eustachian tube, cochlea, auditory nerve, or the like), skin, brain, veins and arteries, and further physical features of the wearer. Because of this advantageous positioning, a hearing assist device may be configured to detect various characteristics of a user's health. Furthermore, the detected characteristics may be used to treat health-related issues of the wearer, and to perform further health-related functions. As such, hearing assist devices may be used even by users who do not have hearing problems, to detect other health problems.
- For instance, in embodiments, health monitoring technology may be incorporated into a hearing assist device to monitor the health of a wearer. Examples of health monitoring technology that may be incorporated in a hearing assist device include health sensors that determine (for example, sense/detect/measure/collect, or the like) various physical characteristics of the user, such as blood pressure, heart rate, temperature, humidity, blood oxygen level, skin galvanometric levels, brain wave information, arrhythmia onset detection, skin chemistry changes, falling down impacts, long periods of activity, or the like.
- Sensor information resulting from the monitoring may be analyzed within the hearing assist device, or may be transmitted from the hearing assist device and analyzed at a remote location. For instance, the sensor information may be analyzed at a local computer, in a smart phone or other mobile device, or at a remote location, such as at a cloud-based server. In response to the analysis of the sensor information, instructions and/or other information may be communicated back to the wearer. Such information may be provided to the wearer by a display screen (for example, a desktop computer display, a smart phone display, a tablet computer display, a medical equipment display, or the like), by the hearing assist device itself (for example, by voice, beeps, or the like), or may be provided to the wearer in another manner. Medical personnel and/or emergency response personnel (for example, reachable at the 911 phone number) may be alerted when particular problems with the wearer are detected by the hearing assist device. The medical personnel may evaluate information received from the hearing assist device, and provide information back to the hearing assist device/wearer. The hearing assist device may provide the wearer with reminders, alarms, instructions, etc.
- The hearing assist device may be configured with speech/voice recognition capability. For instance, the wearer may provide commands, such as by voice, to the hearing assist device. The hearing assist device may be configured to perform various audio processing functions to suppress background noise and/or other sounds, as well as to amplify other sounds, and may be configured to modify audio according to a particular frequency response of the hearing of the wearer. The hearing assist device may be configured to detect vibrations (for example, jaw movement of the wearer during talking), and may use the detected vibrations to aid in improving speech/voice recognition.
- Hearing assist devices may be configured in various ways, according to embodiments. For instance,
FIG. 1 shows a communication system 100 that includes a multi-sensor hearing assist device 102 that communicates with a near field communication (NFC)-enabled communications device 104, according to an exemplary embodiment. Hearing assist device 102 may be worn in association with the ear of a user, and may be configured to communicate with other devices, such as communications device 104. As shown in FIG. 1, hearing assist device 102 includes a plurality of sensors 106a-106n, processing logic 108, an NFC transceiver 110, storage 112, and a rechargeable battery 114. These features of hearing assist device 102 are described as follows.
- Sensors 106a and 106n are shown in hearing assist device 102 in FIG. 1 for ease of illustration; any number of sensors may be included in hearing assist device 102, including three sensors, four sensors, five sensors, etc. (e.g., tens of sensors, hundreds of sensors, etc.). Examples of sensors suitable for sensors 106a-106n are described elsewhere herein.
- Processing logic 108 may be implemented in hardware (e.g., one or more processors, electrical circuits, etc.), or in any combination of hardware with software and/or firmware. Processing logic 108 may receive sensor information from sensors 106a-106n. Processing logic 108 may execute one or more programs that define various operational characteristics, such as: (i) a sequence or order of retrieving sensor information from sensors of hearing assist device 102; (ii) sensor configurations and reconfigurations (via a preliminary setup or via adaptations over the course of time); (iii) routines by which particular sensor data is at least pre-processed; and (iv) one or more functions/actions to be performed based on particular sensor data values (a minimal scheduling sketch follows).
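- As a minimal sketch only, the following Python loop illustrates how such a program might sequence sensor retrieval and per-sensor routines; the sensor names, sampling intervals, and the read_sensor()/preprocess()/act_on() helpers are hypothetical placeholders rather than elements of this disclosure.

```python
# Minimal sketch of a sensor-polling program of the kind processing logic 108
# might execute; names, intervals, and helpers are hypothetical placeholders.
import time

POLL_ORDER = ["heart_rate", "temperature", "ph"]  # (i) retrieval sequence
SAMPLE_INTERVAL_S = {"heart_rate": 1.0, "temperature": 30.0, "ph": 60.0}  # (ii)

def read_sensor(name):            # placeholder for a hardware read
    return 0.0

def preprocess(name, raw):        # (iii) per-sensor pre-processing routine
    return raw

def act_on(name, value):          # (iv) action taken on particular values
    pass

def poll_loop(passes=100):
    last_read = {name: 0.0 for name in POLL_ORDER}
    for _ in range(passes):
        now = time.monotonic()
        for name in POLL_ORDER:
            if now - last_read[name] >= SAMPLE_INTERVAL_S[name]:
                value = preprocess(name, read_sensor(name))
                act_on(name, value)
                last_read[name] = now
        time.sleep(0.1)  # yield between polling passes to save power

poll_loop(passes=1)
```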
- For instance, processing logic 108 may store and/or access sensor data in storage 112, processed or unprocessed. Furthermore, processing logic 108 may access one or more programs stored in storage 112 for execution. Storage 112 may include one or more types of storage, including memory (e.g., random access memory (RAM), read only memory (ROM), etc.) that is volatile or non-volatile.
communications device 104 according to NFC techniques. NFC uses magnetic induction between two loop antennas (e.g., coils, microstrip antennas, or the like) located within each other's near field, effectively forming an air-core transformer. As such, NFC communications occur over relatively short ranges (e.g., within a few centimeters), and are conducted at radio frequencies. For instance, in one example, NFC communications may be performed by NFC transceiver 110 at a 13.56 MHz frequency, with data transfers of up to 424 kilobits per second. In other embodiments, NFC transceiver 110 may be configured to perform NFC communications at other frequencies and data transfer rates. Examples of standards according to which NFC transceiver 110 may be configured to conduct NFC communications include ISO/IEC 18092 and those defined by the NFC Forum, which was founded in 2004 by Nokia, Philips and Sony. - NFC-enabled
- NFC-enabled communications device 104 may be configured with an NFC transceiver to perform NFC communications. NFC-enabled communications device 104 may be any type of device that may be enabled with NFC capability, such as a docking station, a desktop computer (e.g., a personal computer, etc.), a mobile computing device (e.g., a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer (e.g., an Apple iPad™), a netbook, etc.), a mobile phone (e.g., a cell phone, a smart phone, etc.), a medical appliance, etc. Furthermore, NFC-enabled communications device 104 may be network-connected to enable hearing assist device 102 to communicate with entities over the network (e.g., cloud computers or servers, web services, etc.).
- NFC transceiver 110 enables sensor data (processed or unprocessed) to be transmitted by processing logic 108 from hearing assist device 102 to NFC-enabled communications device 104. In this manner, the sensor data may be reported, processed, and/or analyzed externally to hearing assist device 102. Furthermore, NFC transceiver 110 enables processing logic 108 at hearing assist device 102 to receive data and/or instructions/commands from NFC-enabled communications device 104 in response to the transmitted sensor data. NFC transceiver 110 also enables processing logic 108 at hearing assist device 102 to receive programs (e.g., program code), including new programs, program updates, applications, "apps", and/or other programs from NFC-enabled communications device 104 that can be executed by processing logic 108 to change/update the functionality of hearing assist device 102.
- Rechargeable battery 114 includes one or more electrochemical cells that store charge that may be used to power components of hearing assist device 102, including one or more of sensors 106a-106n, processing logic 108, NFC transceiver 110, and storage 112. Rechargeable battery 114 may be any suitable rechargeable battery type, including lead-acid, nickel cadmium (NiCd), nickel metal hydride (NiMH), lithium ion (Li-ion), and lithium ion polymer (Li-ion polymer). Charging of the battery may be performed through a typical tethered recharger or via NFC power delivery.
- Although NFC communications are shown, alternative communication approaches can be employed. Such alternatives may include wireless power transfer schemes as well.
- Hearing assist device 102 may be configured in any manner to be associated with the ear of a user. For instance, FIGS. 2-4 show various configurations for associating a hearing assist device with an ear of a user, according to exemplary embodiments. In FIG. 2, hearing assist device 102 may be a hearing aid type that fits and is inserted partially or fully in an ear 202 of a user. As shown in FIG. 2, hearing assist device 102 includes sensors 106a-106n that contact the user. Example forms of hearing assist device 102 of FIG. 2 include ear buds, "receiver in the canal" hearing aids, "in the ear" (ITE) hearing aids, "invisible in canal" (IIC) hearing aids, "completely in canal" (CIC) hearing aids, etc. Although not illustrated, cochlear implant configurations may also be used.
- In FIG. 3, hearing assist device 102 may be a hearing aid type that mounts on top of, or behind, ear 202 of the user. As shown in FIG. 3, hearing assist device 102 includes sensors 106a-106n that contact the user. Example forms of hearing assist device 102 of FIG. 3 include "behind the ear" (BTE) hearing aids, "open fit" or "over the ear" (OTE) hearing aids, eyeglasses hearing aids (e.g., that contain hearing aid functionality in or on the glasses arms), etc.
- In FIG. 4, hearing assist device 102 may be a headset or headphones that mounts on the head of the user and includes speakers that are held close to the user's ears. As shown in FIG. 4, hearing assist device 102 includes sensors 106a-106n that contact the user. In the embodiment of FIG. 4, sensors 106a-106n may be spaced further apart, including being dispersed in the ear pad(s) and/or along the headband that connects together the ear pads (when a headband is present).
- It is noted that hearing assist device 102 may be configured in further forms, including combinations of the forms shown in FIGS. 2-4, and is not intended to be limited to the embodiments illustrated in FIGS. 2-4. For instance, hearing assist device 102 may be a cochlear implant-type hearing aid, or other type of hearing assist device. The following section describes some example forms of hearing assist device 102 with associated sensor configurations.
- As described above, hearing assist device 102 may be configured in various forms, and may include any number and type of sensors. For instance, FIG. 5 shows a hearing assist device 500 that is an example of hearing assist device 102 according to an exemplary embodiment. Hearing assist device 500 is configured to mount over an ear of a user, and has a portion that is at least partially inserted into the ear. A user may wear a single hearing assist device 500 on one ear, or may simultaneously wear first and second hearing assist devices 500 on the user's right and left ears, respectively.
- As shown in FIG. 5, hearing assist device 500 includes a case or housing 502 that includes a first portion 504, a second portion 506, and a third portion 508. First portion 504 is shaped to be positioned behind/over the ear of a user. For instance, as shown in FIG. 5, first portion 504 has a crescent shape, and may optionally be molded in the shape of a user's outer ear (e.g., by taking an impression of the outer ear, etc.). Second portion 506 extends perpendicularly from a side of an end of first portion 504. Second portion 506 is shaped to be inserted at least partially into the ear canal of the user. Third portion 508 extends from second portion 506, and may be referred to as an earmold shaped to conform to the user's ear shape, to better adhere hearing assist device 500 to the user's ear.
- As shown in FIG. 5, hearing assist device 500 further includes a speaker 512, a forward IR/UV (ultraviolet) communication transceiver 520, a BTLE (BLUETOOTH low energy) antenna 522, at least one microphone 524, a telecoil 526, a tethered sensor port 528, a skin communication conductor 534, a volume controller 540, and a communication and power delivery coil 542. Furthermore, hearing assist device 500 includes a plurality of medical sensors, including at least one pH sensor 510, an IR (infrared) or sonic distance sensor 514, an inner ear temperature sensor 516, a position/motion sensor 518, a WPT (wireless power transfer)/NFC coil 530, a switch 532, a glucose spectroscopy sensor 536, a heart rate sensor 538, and a subcutaneous sensor 544. In embodiments, hearing assist device 500 may include one or more of these further features and/or alternative features. The features of hearing assist device 500 are described as follows.
- As shown in FIG. 5, speaker 512, IR or sonic distance sensor 514, and inner ear temperature sensor 516 are located on a circular surface of second portion 506 of hearing assist device 500 that faces into the ear of the user. Position/motion sensor 518 and pH sensor 510 are located on a perimeter surface of second portion 506, around the circular surface, that contacts the ear canal of the user. In alternative embodiments, one or more of these features may be located in/on different locations of hearing assist device 500.
- pH sensor 510 is a sensor that may be present to measure a pH of skin of the user's inner ear. The measured pH value may be used to determine a medical problem of the user, such as an onset of stroke. pH sensor 510 may include one or more metallic plates. Upon receiving power (e.g., from rechargeable battery 114 of FIG. 1), pH sensor 510 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured pH value.
- Speaker 512 (also referred to as a "loudspeaker") is a speaker of hearing assist device 500 that broadcasts environmental sound received by microphone(s) 524, and subsequently amplified and/or filtered by processing logic of hearing assist device 500, into the ear of the user to assist the user in hearing the environmental sound. Furthermore, speaker 512 may broadcast additional sounds into the ear of the user for the user to hear, including alerts (e.g., tones, beeping sounds), voice, and/or further sounds that may be generated by or received by processing logic of hearing assist device 500, and/or may be stored in hearing assist device 500.
- IR or sonic distance sensor 514 is a sensor that may be present to sense a displacement distance. Upon receiving power, IR or sonic distance sensor 514 may generate an IR light pulse, a sonic (e.g., ultrasonic) pulse, or other light or sound pulse, that may be reflected in the ear of the user, and the reflection may be received by IR or sonic distance sensor 514. A time of reflection may be compared for a series of pulses to determine a displacement distance within the ear of the user. IR or sonic distance sensor 514 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured displacement distance.
- A distance and eardrum deflection that is determined using IR or sonic distance sensor 514 (e.g., by using high rate sampling or continuous sampling) may be used to calculate an estimate of the "actual" or "true" decibel level of an audio signal being input to the ear of the user. By incorporating such functionality, hearing assist device 500 can perform the following when a user inserts and turns on hearing assist device 500: (i) automatically adjust the volume to fall within a target range; and (ii) prevent excess volume associated with unexpected loud sound events. It is noted that the amount of volume adjustment that may be applied can vary by frequency. It is also noted that the excess volume associated with unexpected loud sound events may be further prevented by using a hearing assist device that has a relatively tight fit, thereby allowing the hearing assist device to act as an ear plug. (A minimal sketch of the echo-timing and volume-limiting computations follows.)
assist device 500. This can be accomplished solely by hearingassist device 500 or with assistance from a smartphone or other external device or service. For example, a user may respond to an audio (or textual) prompt “Can you hear this?” with a “yes” or “no” response. The response is received by microphone(s) 524 (or via touch input for example) and processed internally or on an assisting external device to identify the response. Depending on the user's response, the amplitude of the audio output can be adjusted to determine a given user's hearing threshold for each frequency (or frequency range). From this hearing efficiency and performance data, input frequency equalization can be performed by hearingassist device 500 so as to deliver to the user audio signals that will be perceived in much the same way as someone with no hearing impairment. In addition, such data can be delivered to the assisting external device (e.g., to a smartphone) for use by such device in producing audio output for the user. For example, the assisting device can deliver an adjusted audio output tailored for the user if (i) the user is not wearing hearing assistdevice 500, (ii) the battery power of hearingassist device 500 is depleted, (iii) hearingassist device 500 is powered down, or (iv) hearingassist device 500 is operating in a lower power mode. In such situations, the supporting device can deliver the audio signal: (a) in an audible form via a speaker which will be generated with intent of directly reaching the eardrum; (b) in an audible form intended for receipt and amplification control by hearingassist device 500 without further need for user specific audio equalization; and (c) in a non-audible form (e.g.) electromagnetic transmission for receipt and conversion to an audible form by hearingassist device 500 and again without further equalization. - After testing and setup, a wearer may further tweak their recommended equalization via slide bars and such in a manner similar to adjusting equalization for other conventional audio equipment. Such tweaking can be carried out via the supporting device user interface. In addition, a plurality of equalization settings can be supported with each being associated with a particular mode of operation of hearing
assist device 500. That is conversation in a quiet room with one other might receive one equalization profile while a concert hall might receive another. Modes can be selected in many automatic or commanded ways via either or both hearing assistdevice 500 and the external supporting device. Automatic selection can be performed via analysis and classification of captured audio. Certain classifications may trigger selection of a particular mode. Commands may delivered via any user input interface such as voice input (voice recognized commands), tactile input commands, etc. - Audio modes also comprise alternate or additional audio processing techniques as well. For example, in one mode, to enhance audio perspective and directionality, delays might be selectively introduced (or increased in a stereoscopic manner) to enhance a wearer's ability to discern the location of an audio source. Sensor data may support automatic mode selection in such situations. Detecting walking impacts and outdoor GPS (Global Positioning System) location might automatically trigger such enhanced perspective mode. A medical condition might trigger another mode which attenuates environmental audio while delivering synthesized voice commands to the wearer. In another exemplary mode, both echoes and delays might be introduced to simulate a theater environment. For example, when audio is being sourced by a television channel broadcast of a movie, the theater environment mode might be selected. Such selection may be in response to a set top box, television or media player's commands or by identifying one of the same as the audio source.
- Other similar and all of such functionality can be carried out by one or both of hearing
assist device 500 and an external supporting device. When assisting the hearing aid device, the external supporting device may receive the audio for processing: (i) directly via built in microphones; (ii) from storage; or (iii) via yet another external device. Alternatively, the source audio may be captured by hearingassist device 500 itself and delivered via a wired or wireless pathway to the external supporting device for processing before delivery of either the processed audio signals or substitute audio back to hearing assistdevice 500 for delivery to the wearer. - Similarly, sensor data may be captured in one or both of hearing
assist device 500 and an external supporting device. Sensor data captured by hearingassist device 500 may likewise be delivered via such or other wired or wireless pathways to the external supporting device for (further) processing. The external supporting device may then respond to the sensor data received and processed by delivering audio content and/or hearing aid commands back to hearing assistdevice 500. Such commands may be to reconfigure some aspect of hearingassist device 500 or manage communication or power delivery. Such audio content may be instructional, comprise queries, or consist of commands to be delivered the wearer via the ear drums. Sensor data may be stored and displayed in some form locally on the external supporting device along with similar audio, graphical or textual content, commands or queries. In addition, such sensor data can be further delivered to yet other external supporting devices for further processing, analysis and storage. Sensors within one or both hearing assistdevice 500 and an external supporting device may be medical sensors or environmental sensors (e.g., latitude/longitude, velocity, temperature, wearer's physical orientation, acceleration, elevation, tilt, humidity, etc.). - Although not shown, hearing assist
device 500 may also be configured with an imager that may be located neartransceiver 520. The imager can then be used to capture images or video that may be relayed to one or more external supporting device for real time display, storage or processing. For example, detecting a medical situation and no response to audible content queries delivered via hearingassist device 500, the imager can be commanded (internal or external command origin) to capture an image or a video sequence. Such imager output can be delivered to medical staff via a user's supporting smartphone so that a determination can be made as to the user's condition or the position/location of hearingassist device 500. - Inner
ear temperature sensor 516 is a sensor that may be present to measure a temperature of the user. For instance, in an embodiment, upon receiving power, innerear temperature sensor 516 may include a lens used to measure inner ear temperature. IR light may be reflected from the user skin by an IR light emitter, such as the ear canal or ear drum, and received by a single temperature sensor element, a one-dimensional array of temperature sensor elements, a two-dimensional array of temperature sensor elements, or other configuration of temperature sensor elements. Innerear temperature sensor 516 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured inner ear temperature. - Such a configuration may also be used to determine a distance to the user's ear drum. The IR light emitter and sensor may be used to determine a distance to the user's ear drum from hearing
assist device 500, which may be used by processing logic to automatically control a volume of sound emitted from hearingassist device 500, as well as for other purposes. Furthermore, the IR light emitter/sensor may also be used as an imager that captures an image of the inside of the user's ear. This could be used to identify characteristics of vein structures inside the user's ear, for example. The IR light emitter/sensor could also be used to detect the user's heartbeat, as well as to perform further functions. - Position/
motion sensor 518 includes one or more sensors that may be present to measure time of day, location, acceleration, orientation, vibrations, and/or other movement related characteristics of the user. For instance, position/motion sensor 518 may include one or more of a GPS (global positioning system) receiver (to measure user position), an accelerometer (to measure acceleration of the user), a gyroscope (to measure orientation of the head of the user), a magneto (to determine a direction the user is facing), a vibration sensor (for example, a micro-electromechanical system (MEMS) vibration sensor), or the like. Position/motion sensor 518 may be used for various benefits, including determining whether a user has fallen (e.g., based on measured position, acceleration, orientation, etc.), for local VoD, and many more benefits. Position/motion sensor 518 may generate a sensor output signal (e.g., an electrical signal) that indicates one or more of the measured time of day, location, acceleration, orientation, vibration, etc. - The sensor information indicated by position/
motion sensor 518 and/or other sensors may be used for various purposes. For instance, position/motion information may be used to determine that the user has fallen down/collapsed. In response, voice and/or video assist (e.g., by a handheld device in communication with hearing assist device 500) may be used to gather feedback from the user (e.g., to find out if they are ok, and/or to further supplement the sensor data collection (which triggered the feedback request)). Such sensor data and feedback information, if warranted, can be automatically forwarded to medical staff, ambulance services, and/or family members, for example, as described elsewhere herein. The analysis of the data that triggered the forwarding process may be performed in whole or in part on one (or both) of hearingassist device 500, and/or on the assisting local device (e.g., a smart phone, tablet computer, set top box, TV, etc., in communication with a hearing assist device 500) and/or remote computing systems (e.g., at medical staff offices or as might be available through a cloud or portal service). - As shown in
- As shown in FIG. 5, forward IR/UV (ultraviolet) communication transceiver 520, BTLE antenna 522, microphone(s) 524, telecoil 526, tethered sensor port 528, WPT/NFC coil 530, switch 532, skin communication conductor 534, glucose spectroscopy sensor 536, heart rate sensor 538, volume controller 540, and communication and power delivery coil 542 are located at different locations in/on first portion 504 of hearing assist device 500. In alternative embodiments, one or more of these features may be located in/on different locations of hearing assist device 500.
UV communication transceiver 520 is a communication mechanism that may be present to enable communications with another device, such as a smart phone, computer, etc. Forward IR/UV communication transceiver 520 may receive information/data from processing logic of hearingassist device 500 to be transmitted to the other device in the form of modulated light (e.g., IR light, UV light, etc.), and may receive information/data in the form of modulated light from the other device to be provided to the processing logic of hearingassist device 500. Forward IR/UV communication transceiver 520 may enable low power communications for hearingassist device 500, to reduce a load on a battery of hearingassist device 500. In an embodiment, an emitter/receiver of forward IR/UV communication transceiver 520 may be positioned onhousing 502 to be facing forward in a direction a wearer of hearingassist device 500 faces. In this manner, the forward IR/UV communication transceiver 520 may communicate with a device held by the wearer, such as a smart phone, a tablet computer, etc., to provide text to be displayed to the wearer, etc. -
- BTLE antenna 522 is a communication mechanism coupled to a Bluetooth™ transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc. BTLE antenna 522 may transmit information/data received from processing logic of hearing assist device 500 to the other device according to the Bluetooth™ specification, and may receive information/data transmitted according to the Bluetooth™ specification from the other device to be provided to the processing logic of hearing assist device 500.
- Microphone(s) 524 is a sensor that may be present to receive environmental sounds, including voice of the user, voice of other persons, and other sounds in the environment (e.g., traffic noise, music, etc.). Microphone(s) 524 may include any number of microphones, and may be configured in any manner, including being omni-directional (non-directional), directional, etc. Microphone(s) 524 generates an audio signal based on the received environmental sound that may be processed and/or filtered by processing logic of hearing assist device 500, may be stored in digital form in hearing assist device 500, may be transmitted from hearing assist device 500, and may be used in other ways.
- Telecoil 526 is a communication mechanism that may be present to enable communications with another device. Telecoil 526 is an audio induction loop that enables audio sources to be directly coupled to hearing assist device 500 in a manner known to persons skilled in the relevant art(s). Telecoil 526 may be used with a telephone, a radio system, and induction loop systems that transmit sound to hearing aids.
- Tethered sensor port 528 is a port with which a remote sensor (separate from hearing assist device 500) may be coupled to interface with hearing assist device 500. For instance, port 528 may be an industry standard or proprietary connector type. A remote sensor may have a tether (one or more wires) with a connector at an end that may be plugged into port 528. Any number of tethered sensor ports 528 may be present. Examples of sensor types that may interface with tethered sensor port 528 include brainwave sensors (e.g., electroencephalography (EEG) sensors that record electrical activity along the scalp according to EEG techniques) attached to the user's scalp, heart rate/arrhythmia sensors attached to a chest of the user, etc.
- WPT/NFC coil 530 is a communication mechanism coupled to an NFC transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc., as described above with respect to NFC transceiver 110 (FIG. 1).
- Switch 532 is a switching mechanism that may be present on housing 502 to perform various functions, such as switching power on or off, switching between different power and/or operational modes, etc. A user may interact with switch 532 to switch power on or off, to switch between modes, etc. Switch 532 may be any type of switch, including a toggle switch, a push button switch, a rocker switch, a three- (or greater) position switch, a dial switch, etc.
- Skin communication conductor 534 is a communication mechanism coupled to a transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc., through skin of the user. For instance, skin communication conductor 534 may enable communications to flow between hearing assist device 500 and a smart phone held in the hand of the user, a second hearing assist device worn on an opposite ear of the user, a pacemaker or other device implanted in the user, or other communications device in communication with skin of the user. A transceiver of hearing assist device 500 may receive information/data from processing logic to be transmitted from skin communication conductor 534 through the user's skin to the other device, and the transceiver may receive information/data at skin communication conductor 534 that was transmitted from the other device through the user's skin, to be provided to the processing logic of hearing assist device 500.
- Glucose spectroscopy sensor 536 is a sensor that may be present to measure a glucose level of the user using spectroscopy techniques in a manner known to persons skilled in the relevant art(s). Such a measurement may be valuable in determining whether a user has diabetes. Such a measurement can also be valuable in helping a diabetic user determine whether insulin is needed, etc. (e.g., hypoglycemia or hyperglycemia). Glucose spectroscopy sensor 536 may be configured to monitor glucose in combination with subcutaneous sensor 544. As shown in FIG. 5, subcutaneous sensor 544 is shown separate from, and proximate to, hearing assist device 500. In an alternative embodiment, subcutaneous sensor 544 may be located in/on hearing assist device 500. Subcutaneous sensor 544 is a sensor that may be present to measure any attribute of a user's health, characteristics, or status. For example, subcutaneous sensor 544 may be a glucose sensor implanted under the skin behind the ear so as to provide a reasonably close mating location with communication and power delivery coil 542. When powered, glucose spectroscopy sensor 536 may measure the user's glucose level with respect to subcutaneous sensor 544, and may generate a sensor output signal (e.g., an electrical signal) that indicates a glucose level of the user.
- Heart rate sensor 538 is a sensor that may be present to measure a heart rate of the user. For instance, in an embodiment, upon receiving power, heart rate sensor 538 may measure pressure changes with respect to a blood vessel in the ear, or may measure heart rate in another manner, such as via changes in reflectivity or otherwise, as would be known to persons skilled in the relevant art(s). Missed beats, elevated heart rate, and further heart conditions may be detected in this manner. Heart rate sensor 538 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured heart rate. In addition, subcutaneous sensor 544 might comprise at least a portion of an internal heart monitoring device which communicates heart status information and data via communication and power delivery coil 542. Subcutaneous sensor 544 could also be associated with, or be part of, a pacemaker or defibrillating implant, insulin pump, etc.
- Volume controller 540 is a user interface mechanism that may be present on housing 502 to enable a user to modify a volume at which sound is broadcast from speaker 512. A user may interact with volume controller 540 to increase or decrease the volume. Volume controller 540 may be any suitable controller type (e.g., a potentiometer), including a rotary volume dial, a thumb wheel, etc.
- Instead of supporting both power delivery and communications, communication and power delivery coil 542 may be dedicated to one or the other. For example, such a coil may only support power delivery (if needed to charge or otherwise deliver power to subcutaneous sensor 544), and can be replaced with any other type of communication system that supports communication with subcutaneous sensor 544. It is noted that the coils/antennas of hearing assist device 500 may be separately included in hearing assist device 500, or, in embodiments, two or more of the coils/antennas may be combined into a single coil/antenna.
- The processing logic of hearing assist device 500 may be operable to set up/configure and adaptively reconfigure each of the sensors of hearing assist device 500 based on an analysis of the data obtained by that sensor as well as on an analysis of data obtained by other sensors. For example, a first sensor of hearing assist device 500 may be configured to operate at one sampling rate (or sensing rate), the output of which is analyzed periodically or continuously. Furthermore, a second sensor of hearing assist device 500 can be in a sleep or power down mode to conserve battery power. When a threshold is exceeded or another triggering event occurs, the first sensor can be reconfigured by the processing logic of hearing assist device 500 to sample at a higher rate or continuously, and the second sensor can be powered up and configured. Additionally, multiple types of sensor data can be used to construct or derive single conclusions. For example, heart rate can be gathered multiple ways (via multiple sensors) and combined to provide a more robust and trustworthy conclusion. Likewise, a combination of data obtained from different sensors (e.g., pH plus temperature plus horizontal posture plus impact detected plus weak heart rate) may result in an ambulance being called or indicate a possible heart attack. Or, if glucose is too high, hyperglycemia may be indicated, while if glucose is too low, hypoglycemia may be indicated. Or, if glucose and heart data are acceptable, then a stroke may be indicated. This processing can be done in whole or in part within hearing assist device 500, with audio content being played to the wearer thereof to gather further voiced information from the wearer to assist in conclusions or to warn the wearer. (A minimal sketch of such threshold-triggered reconfiguration and sensor fusion follows.)
- FIG. 6 shows a hearing assist device 600 that is an example of hearing assist device 102 according to an exemplary embodiment. Hearing assist device 600 is configured to be at least partially inserted into the ear canal of a user (for example, an ear bud). A user may wear a single hearing assist device 600 on one ear, or may simultaneously wear first and second hearing assist devices 600 on the user's right and left ears, respectively.
- As shown in FIG. 6, hearing assist device 600 includes a case or housing 602 that has a generally cylindrical shape, and includes a first portion 604, a second portion 606, and a third portion 608. First portion 604 is shaped to be inserted at least partially into the ear canal of the user. Second portion 606 extends coaxially from first portion 604. Third portion 608 is a handle that extends from second portion 606. A user grasps third portion 608 to extract hearing assist device 600 from the ear of the user.
- As shown in FIG. 6, hearing assist device 600 further includes pH sensor 510, speaker 512, IR (infrared) or sonic distance sensor 514, inner ear temperature sensor 516, and an antenna 610. pH sensor 510, speaker 512, IR or sonic distance sensor 514, and inner ear temperature sensor 516 may function and be configured similarly as described above. Antenna 610 may include one or more coils or other types of antennas to function as any one or more of the coils/antennas described above with respect to FIG. 5 and/or elsewhere herein (e.g., an NFC antenna, a Bluetooth™ antenna, etc.).
assist device 500 ofFIG. 5 are not shown included in hearing assistdevice 600, one or more further of these features of hearingassist device 500 may additionally and/or alternatively be included in hearing assistdevice 600. Furthermore, sensors that are present in a hearing assist device may all operate simultaneously, or one or more sensors may be run periodically, and may be off at other times (e.g., based on an algorithm in program code, etc.). By running fewer sensors at any one time, battery power may be conserved. Note that in addition to one or more of sensor data compression, analysis, encryption, and processing, sensor management (duty cycling, continuous operations, threshold triggers, sampling rates, etc.) can be performed in whole or in part in any one or both hear assist devices, the assisting local device (e.g., smart phone, tablet computer, set top box, TV, etc.), and/or remote computing systems (at medical staff offices or as might be available through a cloud or portal service). - Hearing assist
devices - According to embodiments, hearing assist devices may be configured in various ways to perform their functions. For instance,
FIG. 7 shows a circuit block diagram of ahearing assist device 700 that is configured to communicate with external devices according to multiple communication schemes, according to an exemplary embodiment. Hearing assistdevices device 700, according to embodiments. - As shown in
- As shown in FIG. 7, hearing assist device 700 includes a plurality of sensors 702a-702c, processing logic 704, a microphone 706, an amplifier 708, a filter 710, an analog-to-digital (A/D) converter 712, a speaker 714, an NFC coil 716, an NFC transceiver 718, an antenna 720, a Bluetooth™ transceiver 722, a charge circuit 724, a battery 726, a plurality of sensor interfaces 728a-728c, and a digital-to-analog (D/A) converter 764. Processing logic 704 includes a digital signal processor (DSP) 730, a central processing unit (CPU) 732, and a memory 734. Sensors 702a-702c, processing logic 704, amplifier 708, filter 710, A/D converter 712, NFC transceiver 718, Bluetooth™ transceiver 722, charge circuit 724, sensor interfaces 728a-728c, D/A converter 764, DSP 730, and CPU 732 may each be implemented in the form of hardware (e.g., electrical circuits, digital logic, etc.) or a combination of hardware and software/firmware. The features of hearing assist device 700 shown in FIG. 7 are described as follows.
assist device 700 is first described. InFIG. 7 ,microphone 706,amplifier 708,filter 710, A/D converter 712,processing logic 704, D/Aconverter 764, andspeaker 714 provide at least some of the hearing aid functionality of hearingassist device 700.Microphone 706 is a sensor that receives environmental sounds, including voice of the user of hearingassist device 700, voice of other persons, and other sounds in the environment (e.g., traffic noise, music, etc.).Microphone 706 may be configured in any manner, including being omni-directional (non-directional), directional, etc., and may include one or more microphones.Microphone 706 may be a miniature microphone conventionally used in hearing aids, as would be known to persons skilled in the relevant art(s), or may be another suitable type of microphone. Microphone(s) 524 (FIG. 5 ) is an example ofmicrophone 706.Microphone 706 generates a receivedaudio signal 740 based on the received environmental sound. -
- Amplifier 708 receives and amplifies received audio signal 740 to generate an amplified audio signal 742. Amplifier 708 may be any type of amplifier, including a low-noise amplifier for amplifying low level signals. Filter 710 receives and processes amplified audio signal 742 to generate a filtered audio signal 744. Filter 710 may be any type of filter, including a filter configured to filter out noise, other high frequencies, and/or other frequencies as desired. A/D converter 712 receives filtered audio signal 744, which may be an analog signal, and converts filtered audio signal 744 to digital form, to generate a digital audio signal 746. A/D converter 712 may be configured in any manner, including as a conventional A/D converter.
- Processing logic 704 receives digital audio signal 746, and may process digital audio signal 746 in any manner to generate processed digital audio signal 762. For instance, as shown in FIG. 7, DSP 730 may receive digital audio signal 746, and may perform digital signal processing on digital audio signal 746 to generate processed digital audio signal 762. DSP 730 may be configured in any manner, including as a conventional DSP known to persons skilled in the relevant art(s), or in another manner. DSP 730 may perform any suitable type of digital signal processing to process/filter digital audio signal 746, including processing digital audio signal 746 in the frequency domain to manipulate the frequency spectrum of digital audio signal 746 (e.g., according to Fourier transform/analysis techniques, etc.). DSP 730 may amplify particular frequencies, may attenuate particular frequencies, and may otherwise modify digital audio signal 746 in the discrete domain. DSP 730 may perform the signal processing for various reasons, including noise cancellation or hearing loss compensation. For instance, DSP 730 may process digital audio signal 746 to compensate for a personal hearing frequency response of the user, such as compensating for poor hearing of high frequencies, middle range frequencies, or other personal frequency response characteristics of the user.
- In one embodiment, DSP 730 may be pre-configured to process digital audio signal 746. In another embodiment, DSP 730 may receive instructions from CPU 732 regarding how to process digital audio signal 746. For instance, CPU 732 may access one or more DSP configurations stored in memory 734 (e.g., in other data 768) that may be provided to DSP 730 to configure DSP 730 for digital signal processing of digital audio signal 746. For instance, CPU 732 may select a DSP configuration based on a hearing assist mode selected by a user of hearing assist device 700 (e.g., by interacting with switch 532, etc.). (A minimal sketch of such frequency-domain compensation follows.)
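- A minimal sketch, assuming numpy is available and using a hypothetical gain table, of the kind of frequency-domain compensation DSP 730 might perform (boosting bands where the wearer's hearing is weak):

```python
# Illustrative frequency-domain hearing loss compensation via FFT.
import numpy as np

SAMPLE_RATE = 16000
GAIN_DB = {(0, 1000): 0.0, (1000, 4000): 6.0, (4000, 8000): 12.0}  # placeholder

def compensate(frame: np.ndarray) -> np.ndarray:
    """Apply per-band gain to one audio frame in the frequency domain."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    for (lo, hi), db in GAIN_DB.items():
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10.0 ** (db / 20.0)  # dB -> linear amplitude
    return np.fft.irfft(spectrum, n=len(frame))

frame = np.sin(2 * np.pi * 3000 * np.arange(256) / SAMPLE_RATE)
out = compensate(frame)  # 3 kHz content is boosted by ~6 dB
```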
- As shown in FIG. 7, D/A converter 764 receives processed digital audio signal 762, and converts processed digital audio signal 762 to analog form, generating processed audio signal 766. D/A converter 764 may be configured in any manner, including as a conventional D/A converter. Speaker 714 receives processed audio signal 766, and broadcasts sound generated based on processed audio signal 766 into the ear of the user. The user is enabled to hear the broadcast sound, which may be amplified, filtered, and/or otherwise frequency manipulated with respect to the sound received by microphone 706. Speaker 714 may be a miniature speaker conventionally used in hearing aids, as would be known to persons skilled in the relevant art(s), or may be another suitable type of speaker. Speaker 512 (FIG. 5) is an example of speaker 714. Speaker 714 may include one or more speakers.
- Hearing assist device 700 of FIG. 7 is further described as follows with respect to FIGS. 8-14. FIG. 8 shows a flowchart 800 of a process for a hearing assist device that processes and transmits sensor data and receives a command from a second device, according to an exemplary embodiment. In an embodiment, hearing assist device 700 (as well as any of the other hearing assist devices described herein) may perform flowchart 800. Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the following description of flowchart 800 and hearing assist device 700.
- Flowchart 800 begins with step 802. In step 802, a sensor output signal is received from a medical sensor of the hearing assist device that senses a characteristic of the user. For example, as shown in FIG. 7, sensors 702a-702c may each sense/measure information about a health characteristic of the user of hearing assist device 700. Sensors 702a-702c may each be one of the sensors shown in FIGS. 5 and 6, and/or mentioned elsewhere herein. Although three sensors are shown in FIG. 7 for purposes of illustration, other numbers of sensors may be present in hearing assist device 700, including one sensor, two sensors, or greater numbers of sensors. Sensors 702a-702c may each generate a corresponding sensor output signal 758a-758c (e.g., an electrical signal) that indicates the measured information about the corresponding health characteristic. For instance, sensor output signals 758a-758c may be analog or digital signals having levels or values corresponding to the measured information.
CPU 732. For instance, each of sensor interfaces 728 a-728 c may include an amplifier, filter, and/or A/D converter (e.g., similar toamplifier 708,filter 710, and A/D converter 712) that respectively amplify (e.g., increase or decrease), reduces particular frequencies, and/or convert to digital form the corresponding sensor output signal. Sensor interfaces 728 a-728 c (when present) respectively output modified sensor output signals 760 a-760 c. - In
step 804, the sensor output signal is processed to generate processed sensor data. For instance, as shown inFIG. 7 ,processing logic 704 receives modified sensor output signals 760 a-760 c.Processing logic 704 may process modified sensor output signals 760 a-760 c in any manner to generate processed sensor data. For instance, as shown inFIG. 7 ,CPU 732 may receive modified sensor output signals 760 a-760 c.CPU 732 may process the sensor information in one or more of modified sensor output signals 760 a-760 c to generate processed sensor data. For instance,CPU 732 may manipulate the sensor information (e.g., according to an algorithm of code 738) to convert the sensor information into a presentable form (e.g., scaling the sensor information, adding or subtracting a constant to/from the sensor information, etc.). Furthermore,CPU 732 may transmit the sensor information of modified sensor output signals 760 a-760 c toDSP 730 to be digital signal processed byDSP 730 to generate processed sensor data, and may receive the processed sensor data fromDSP 730. The processed and/or raw (unprocessed) sensor data may optionally be stored in memory 734 (e.g., as sensor data 736). - In
- In step 806, the processed sensor data is wirelessly transmitted from the hearing assist device to a second device. For instance, as shown in FIG. 7, CPU 732 may provide the sensor data (processed or raw) (e.g., from CPU registers, from DSP 730, from memory 734, etc.) to a transceiver to be transmitted from hearing assist device 700. In the embodiment of FIG. 7, hearing assist device 700 includes an NFC transceiver 718 and a BT transceiver 722, which may each be used to transmit sensor data from hearing assist device 700. In alternative embodiments, hearing assist device 700 may include one or more additional and/or alternative transceivers that may transmit sensor data from hearing assist device 700, including a Wi-Fi transceiver, a forward IR/UV communication transceiver (e.g., transceiver 520 of FIG. 5), a telecoil transceiver (which may transmit via telecoil 526), a skin communication transceiver (which may transmit via skin communication conductor 534), etc. The operation of such alternative transceivers will become apparent to persons skilled in the relevant art(s) based on the teachings provided herein.
- As shown in FIG. 7, NFC transceiver 718 may receive an information signal 740 from CPU 732 that includes sensor data for transmitting. In an embodiment, NFC transceiver 718 may modulate the sensor data onto NFC antenna signal 748 to be transmitted from hearing assist device 700 by NFC coil 716 when NFC coil 716 is energized by an RF field generated by a second device.
- Similarly, BT transceiver 722 may receive an information signal 754 from CPU 732 that includes sensor data for transmitting. In an embodiment, BT transceiver 722 may modulate the sensor data onto BT antenna signal 752 to be transmitted from hearing assist device 700 by antenna 720 (e.g., BTLE antenna 522 of FIG. 5), according to a Bluetooth™ communication protocol or standard.
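As an illustration of how processed sensor data might be prepared for transmission in step 806, the following Python sketch packs a reading into a compact byte payload that a transceiver could modulate onto an antenna signal. The field layout (sensor identifier, timestamp, value) is a hypothetical example, not a protocol defined by FIG. 7.

```python
import struct
import time

# Hypothetical payload layout for step 806: 1-byte sensor id,
# 4-byte timestamp, 4-byte float value (little-endian, 9 bytes total).

def pack_sensor_record(sensor_id: int, value: float) -> bytes:
    timestamp = int(time.time()) & 0xFFFFFFFF
    return struct.pack("<BIf", sensor_id, timestamp, value)

payload = pack_sensor_record(sensor_id=1, value=72.0)  # e.g., a heart rate
print(len(payload), payload.hex())  # payload ready for NFC/BT modulation
```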
- In embodiments, a hearing assist device may communicate with one or more other devices to provide sensor data and/or other information, and to receive information. For instance, FIG. 9 shows a communication system 900 that includes a hearing assist device communicating with other communication devices, according to an exemplary embodiment. As shown in FIG. 9, communication system 900 includes hearing assist device 700, a mobile computing device 902, a stationary computing device 904, and a server 906. System 900 is described as follows.
- Mobile computing device 902 (for example, a local supporting device) is a device capable of communicating with hearing assist device 700 according to one or more communication techniques. For instance, as shown in FIG. 9, mobile computing device 902 includes a telecoil 910, one or more microphones 912, an IR/UV communication transceiver 914, a WPT/NFC coil 916, and a Bluetooth™ antenna 918. In embodiments, mobile computing device 902 may include one or more of these features and/or alternative or additional features (e.g., communication mechanisms, etc.). Mobile computing device 902 may be any type of mobile electronic device, including a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer (e.g., an Apple iPad™), a netbook, a mobile phone (e.g., a cell phone, a smart phone, etc.), a special purpose medical device, etc. The features of mobile computing device 902 shown in FIG. 9 are described as follows.
- Telecoil 910 is a communication mechanism that may be present to enable mobile computing device 902 to communicate with hearing assist device 700 via a telecoil (e.g., telecoil 526 of FIG. 5). For instance, telecoil 910 and an associated transceiver may enable mobile computing device 902 to couple audio sources and/or other communications to hearing assist device 700 in a manner known to persons skilled in the relevant art(s).
- Microphone(s) 912 may be present to receive voice of a user of mobile computing device 902. For instance, the user may provide instructions for mobile computing device 902 and/or for hearing assist device 700 by speaking into microphone(s) 912. The received voice may be transmitted to hearing assist device 700 (in digital or analog form) according to any communication mechanism, or may be converted into data and/or commands to be provided to hearing assist device 700 to cause functions/actions in hearing assist device 700. Microphone(s) 912 may include any number of microphones, and may be configured in any manner, including being omni-directional (non-directional), directional, etc.
- IR/UV communication transceiver 914 is a communication mechanism that may be present to enable communications with hearing assist device 700 via an IR/UV communication transceiver of hearing assist device 700 (e.g., forward IR/UV communication transceiver 520 of FIG. 5). IR/UV communication transceiver 914 may receive information/data from and/or transmit information/data to hearing assist device 700 (e.g., in the form of modulated light, as described above).
- WPT/NFC coil 916 is an NFC antenna coupled to an NFC transceiver in mobile computing device 902 that may be present to enable NFC communications with an NFC communication mechanism of hearing assist device 700 (e.g., NFC transceiver 110 of FIG. 1, NFC coil 530 of FIG. 5). WPT/NFC coil 916 may be used to receive information/data from and/or transmit information/data to hearing assist device 700.
- Bluetooth™ antenna 918 is a communication mechanism coupled to a Bluetooth™ transceiver in mobile computing device 902 that may be present to enable communications with hearing assist device 700 (e.g., BT transceiver 722 and antenna 720 of FIG. 7). Bluetooth™ antenna 918 may be used to receive information/data from and/or transmit information/data to hearing assist device 700.
- As shown in FIG. 9, mobile computing device 902 and hearing assist device 700 may exchange communication signals 920 according to any communication mechanism/protocol/standard mentioned herein or otherwise known. According to step 806, hearing assist device 700 may wirelessly transmit sensor data to mobile computing device 902.
- Stationary computing device 904 (for example, a local supporting device) is also a device capable of communicating with hearing assist device 700 according to one or more communication techniques. For instance, stationary computing device 904 may be capable of communicating with hearing assist device 700 according to any of the communication mechanisms shown for mobile computing device 902 in FIG. 9, and/or according to other communication mechanisms/protocols/standards described elsewhere herein or otherwise known. Stationary computing device 904 may be any type of stationary electronic device, including a desktop computer (e.g., a personal computer, etc.), a docking station, a set top box, a gateway device, an access point, special purpose medical equipment, etc.
- As shown in FIG. 9, stationary computing device 904 and hearing assist device 700 may exchange communication signals 922 according to any communication mechanism/protocol/standard mentioned herein or otherwise known. According to step 806, hearing assist device 700 may wirelessly transmit sensor data to stationary computing device 904.
- It is noted that mobile computing device 902 (and/or stationary computing device 904) may communicate with server 906 (for example, a remote supporting device, a third device). For instance, as shown in FIG. 9, mobile computing device 902 (and/or stationary computing device 904) may be communicatively coupled with server 906 by network 908. Network 908 may be any type of communication network, including a local area network (LAN), a wide area network (WAN), a personal area network (PAN), a phone network (e.g., a cellular network, a land-based network), or a combination of communication networks, such as the Internet. Network 908 may include wired and/or wireless communication pathway(s) implemented using any of a wide variety of communication media and associated protocols. For example, such communication pathway(s) may comprise wireless communication pathways implemented via radio frequency (RF) signaling, infrared (IR) signaling, or the like. Such signaling may be carried out using long-range wireless protocols such as WIMAX® (IEEE 802.16) or GSM (Global System for Mobile Communications), medium-range wireless protocols such as WI-FI® (IEEE 802.11), and/or short-range wireless protocols such as BLUETOOTH® or any of a variety of IR-based protocols. Such communication pathway(s) may also comprise wired communication pathways established over twisted pair, Ethernet cable, coaxial cable, optical fiber, or the like, using suitable communication protocols therefor. It is noted that security protocols (e.g., private key exchange, etc.) may be used to protect sensitive health information that is communicated by hearing assist device 700 to and from remote devices.
- Server 906 may be any computer system, including a stationary computing device, a server computer, a mobile computing device, etc. Server 906 may include a web service, an API (application programming interface), or other service or interface for communications.
- Sensor data and/or other information may be transmitted (for example, relayed) to server 906 over network 908 to be processed. After such processing, in response, server 906 may transmit processed data, instructions, and/or other information through network 908 to mobile computing device 902 (and/or stationary computing device 904) to be transmitted to hearing assist device 700 to be stored, to cause a function/action at hearing assist device 700, and/or for other reasons.
- Referring back to FIG. 8, in step 808, at least one command is received from the second device at the hearing assist device. For instance, referring to FIG. 7, hearing assist device 700 may receive a command wirelessly transmitted in a communication signal from a second device at NFC coil 716, antenna 720, or other antenna or communication mechanism at hearing assist device 700. In the example of NFC coil 716, the command may be transmitted from NFC coil 716 on NFC antenna signal 748 to NFC transceiver 718. NFC transceiver 718 may demodulate command data from the received communication signal, and provide the command to CPU 732. In the example of antenna 720, the command may be transmitted from antenna 720 on BT antenna signal 752 to BT transceiver 722. BT transceiver 722 may demodulate command data from the received communication signal, and provide the command to CPU 732.
- CPU 732 may execute the received command. The received command may cause hearing assist device 700 to perform one or more functions/actions. For instance, in embodiments, the command may cause hearing assist device 700 to turn on or off, to change modes, to activate or deactivate one or more sensors, to wirelessly transmit further information, to execute particular program code (e.g., stored as code 738 in memory 734), to play a sound (e.g., an alert, a tone, a beeping noise, pre-recorded or synthesized voice, etc.) from speaker 714 to the user to inform the user of information and/or cause the user to perform a function/action, and/or to cause one or more additional and/or alternative functions/actions to be performed by hearing assist device 700. Further examples of such commands and functions/actions are described elsewhere herein.
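One simple way to realize the command handling described above is a dispatch table mapping demodulated command codes to handler routines. The Python sketch below is a minimal illustration; the command codes and handler names are assumptions, not values defined by the specification.

```python
# Minimal command dispatch sketch (codes and handlers are hypothetical).

def power_off() -> None:
    print("powering down")

def activate_sensors() -> None:
    print("sensors activated")

def play_alert() -> None:
    print("playing alert tone")

COMMAND_TABLE = {0x01: power_off, 0x02: activate_sensors, 0x03: play_alert}

def execute_command(command_code: int) -> None:
    handler = COMMAND_TABLE.get(command_code)
    if handler is not None:  # unknown codes are ignored
        handler()

execute_command(0x03)  # prints "playing alert tone"
```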
- In embodiments, a hearing assist device may be configured to convert received RF energy into charge for storage in a battery of the hearing assist device. For instance, as shown in FIG. 7, hearing assist device 700 includes charge circuit 724 for charging battery 726, which is a rechargeable battery (e.g., rechargeable battery 114). In an embodiment, charge circuit 724 may operate according to FIG. 10. FIG. 10 shows a flowchart 1000 of a process for wirelessly charging a battery of a hearing assist device, according to an exemplary embodiment. Flowchart 1000 is described as follows.
- In step 1002 of flowchart 1000, a radio frequency signal is received. For example, as shown in FIG. 7, NFC coil 716, antenna 720, and/or other antenna or coil of hearing assist device 700 may receive a radio frequency (RF) signal. The RF signal may be a communication signal that includes data (e.g., modulated on the RF signal), or may be an un-modulated RF signal. Charge circuit 724 may be coupled to one or more of NFC coil 716, antenna 720, or other antenna to receive the RF signal.
- In step 1004, a charge current is generated that charges a rechargeable battery of the hearing assist device based on the received radio frequency signal. In an embodiment, charge circuit 724 is configured to generate a charge current 756 that is used to charge battery 726. Charge circuit 724 may be configured in various ways to convert a received RF signal to a charge current. For instance, charge circuit 724 may include an induction coil to take power from an electromagnetic field and convert it to electrical current. Alternatively, charge circuit 724 may include a diode rectifier circuit that rectifies the received RF signal to a DC (direct current) signal, and may include one or more charge pump circuits coupled to the diode rectifier circuit to create a higher voltage value from the DC signal. Alternatively, charge circuit 724 may be configured in other ways to generate charge current 756 from a received RF signal.
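As a rough, back-of-the-envelope illustration of what such RF charging implies, the sketch below estimates how long a small rechargeable cell would take to charge from a harvested charge current. All three values are assumptions for illustration; the specification does not give capacities, currents, or efficiencies.

```python
# Illustrative charge-time estimate (all values assumed, not from the text).

BATTERY_CAPACITY_MAH = 30.0  # assumed small hearing-aid rechargeable cell
CHARGE_CURRENT_MA = 5.0      # assumed output of a charge circuit like 724
CHARGE_EFFICIENCY = 0.8      # assumed rectifier/charge-pump losses

hours = BATTERY_CAPACITY_MAH / (CHARGE_CURRENT_MA * CHARGE_EFFICIENCY)
print(f"Full charge in roughly {hours:.1f} hours")  # ~7.5 hours
```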
- In this manner, hearing assist device 700 may maintain power for operation, with battery 726 being charged periodically by RF fields generated by other devices, rather than needing to physically replace batteries.
- In another embodiment, hearing assist device 700 may be configured to generate sound based on received sensor data. For instance, hearing assist device 700 may operate according to FIG. 11. FIG. 11 shows a flowchart 1100 of a process for generating and broadcasting sound based on sensor data, according to an exemplary embodiment. For purposes of illustration, flowchart 1100 is described as follows with reference to FIG. 7.
- Flowchart 1100 begins with step 1102. In step 1102, an audio signal is generated based at least on the processed sensor data. For instance, as described above with respect to steps 802 and 804 (FIG. 8), a sensor output signal may be processed to generate processed sensor data. The processed sensor data may be stored in memory 734 as sensor data 736, may be held in registers in CPU 732, or may be present in another location. Audio data for one or more sounds (e.g., tones, beeping sounds, voice segments, etc.) may be stored in memory 734 (e.g., as other data 768) and may be selected for play to the user based on particular sensor data (e.g., particular values of sensor data, etc.). CPU 732 or DSP 730 may select the audio data corresponding to particular sensor data from memory 734. Alternatively, CPU 732 may transmit a request for the audio data to another device using a communication mechanism (e.g., NFC transceiver 718, BT transceiver 722, etc.). DSP 730 may receive the audio data from CPU 732, from memory 734, or from another device, and may generate processed digital audio signal 762 based thereon.
- In step 1104, sound is generated based on the audio signal, the sound broadcast from a speaker of the hearing assist device into the ear of the user. For instance, as shown in FIG. 7, D/A converter 764 may be present, and may receive processed digital audio signal 762. D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766. Speaker 714 receives processed audio signal 766, and broadcasts sound generated based on processed audio signal 766 into the ear of the user.
- In this manner, sounds may be provided to the user by hearing assist device 700 based at least on sensor data, and optionally further based on additional information. The sounds may provide information to the user, and may remind or instruct the user to perform a function/action. The sounds may include one or more of a tone, a beeping sound, or a voice that includes at least one of a verbal instruction to the user, a verbal warning to the user, or a verbal question to the user. For instance, a tone or a beeping sound may be provided to the user as an alert based on particular values of sensor data (e.g., indicating a high glucose/blood sugar value), and/or a voice instruction may be provided to the user as the alert based on the particular values of sensor data (e.g., a voice segment stating “Blood sugar is high, insulin is required” or “Your heart rate is 80 beats per minute, your heart is fine, and your pacemaker has 6 hours of battery left.”).
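The mapping from sensor values to stored audio segments can be illustrated with a short sketch. The thresholds and messages below are illustrative assumptions only (clinical thresholds would come from the device's configuration, e.g., code 738), and the selected string stands in for audio data that would be synthesized or fetched from memory 734 and played via speaker 714.

```python
# Illustrative alert selection (thresholds and messages are assumptions).
from typing import Optional

def select_glucose_alert(glucose_mg_dl: float) -> Optional[str]:
    if glucose_mg_dl < 70.0:
        return "Blood sugar is low."
    if glucose_mg_dl > 180.0:
        return "Blood sugar is high, insulin may be required."
    return None  # in range: no alert sound is selected

alert = select_glucose_alert(65.0)
if alert is not None:
    print(alert)  # stands in for playback through speaker 714
```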
- In another embodiment, hearing assist device 700 may be configured to generate filtered environmental sound. For instance, hearing assist device 700 may operate according to FIG. 12. FIG. 12 shows a flowchart 1200 of a process for generating and broadcasting filtered sound from a hearing assist device, according to an exemplary embodiment. For purposes of illustration, flowchart 1200 is described as follows with reference to FIG. 7.
- Flowchart 1200 begins with step 1202. In step 1202, an audio signal is generated based on environmental sound received by at least one microphone of the hearing assist device. For instance, as shown in FIG. 7, microphone 706 may generate a received audio signal 740 based on received environmental sound. Received audio signal 740 may optionally be amplified, filtered, and converted to digital form to generate digital audio signal 746, as shown in FIG. 7.
- In step 1204, one or more frequencies of the audio signal are selectively favored to generate a modified audio signal. As shown in FIG. 7, DSP 730 may receive digital audio signal 746, and may perform digital signal processing on digital audio signal 746 to generate processed digital audio signal 762. DSP 730 may favor one or more frequencies by amplifying particular frequencies, attenuating particular frequencies, and/or otherwise filtering digital audio signal 746 in the discrete domain. DSP 730 may perform the signal processing for various reasons, including noise cancellation or hearing loss compensation. For instance, DSP 730 may process digital audio signal 746 to compensate for a personal hearing frequency response of the user, such as compensating for poor hearing of high frequencies, middle range frequencies, or other personal frequency response characteristics of the user.
- In step 1206, sound is generated based on the modified audio signal, the sound broadcast from a speaker of the hearing assist device into the ear of the user. For instance, as shown in FIG. 7, D/A converter 764 may be present, and may receive processed digital audio signal 762. D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766. Speaker 714 receives processed audio signal 766, and broadcasts sound generated based on processed audio signal 766 into the ear of the user.
- In this manner, environmental noise, voice, and other sounds may be tailored to a particular user's personal hearing frequency response characteristics. Furthermore, particular noises in the environment (e.g., road noise, engine noise, etc.) may be attenuated or filtered from the received environmental sounds so that the user may better hear important or desired sounds. Furthermore, sounds that are desired to be heard (e.g., music, a conversation, a verbal warning, verbal instructions, sirens, sounds of a nearby car accident, etc.) may be amplified so that the user may better hear them.
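The frequency favoring of step 1204 can be sketched as a frequency-domain gain applied to a block of samples. The cutoff, gain, and block size below are assumptions for illustration; a real DSP such as DSP 730 would more likely use filter banks or biquad filters tuned to the user's hearing prescription.

```python
import numpy as np

# Illustrative sketch of step 1204 (parameters assumed): boost high
# frequencies of one audio block to compensate for high-frequency loss.

def boost_highs(block: np.ndarray, sample_rate: int,
                cutoff_hz: float = 2000.0, gain_db: float = 12.0) -> np.ndarray:
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(block.size, d=1.0 / sample_rate)
    gain = np.ones_like(freqs)
    gain[freqs >= cutoff_hz] = 10.0 ** (gain_db / 20.0)  # +12 dB above 2 kHz
    return np.fft.irfft(spectrum * gain, n=block.size)

fs = 16000
t = np.arange(160) / fs  # one 10 ms block
block = np.sin(2 * np.pi * 440 * t) + 0.1 * np.sin(2 * np.pi * 4000 * t)
shaped = boost_highs(block, fs)  # the 4 kHz component is now ~4x stronger
```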
- In another embodiment, hearing assist
device 700 may be configured to transmit recorded voice of a user to another device. For instance, hearing assist device 700 may operate according to FIG. 13. FIG. 13 shows a flowchart 1300 of a process for generating an information signal in a hearing assist device based on a voice of a user, and for transmitting the information signal to a second device, according to an exemplary embodiment. For purposes of illustration, flowchart 1300 is described as follows with reference to FIG. 7.
- Flowchart 1300 begins with step 1302. In step 1302, an audio signal is generated based on a voice of the user received at a microphone of the hearing assist device. For instance, as shown in FIG. 7, microphone 706 may generate a received audio signal 740 based on received voice of the user. Received audio signal 740 may optionally be amplified, filtered, and converted to digital form to generate digital audio signal 746, as shown in FIG. 7.
- The voice of the user may be any statement made by the user, including a question, a statement of fact, a command, or any other verbal sequence. For instance, the user may ask “what is my heart rate?”. All such statements made by the user may be intended for capture by one or more hearing assist devices and supporting local and remote systems. Such statements may also include unintentional sounds such as semi-lucid ramblings, moaning, choking, coughing, and/or other sounds. Any one or more of the hearing assist devices and the supporting local device can receive (via microphones) such audio and forward the audio from the hearing assist device(s) as needed for further processing. This processing may include voice and/or sound recognition, comparison with command words or sequences, (video, audio) prompting for (gesture, tactile, or audible) confirmation, carrying out commands, storage for later analysis or playback, and/or forwarding to an appropriate recipient system for further processing, storage, and/or presentation to others.
- In
step 1304, an information signal is generated based on the audio signal. As shown in FIG. 7, DSP 730 may receive digital audio signal 746. In an embodiment, DSP 730 and/or CPU 732 may generate an information signal from digital audio signal 746 to be transmitted to a second device from hearing assist device 700. DSP 730 and/or CPU 732 may optionally perform voice/speech recognition on digital audio signal 746 to recognize spoken words included therein, and may include the spoken words in the generated information signal.
- For instance, in an embodiment, code 738 stored in memory 734 may include a voice recognition program that may be executed by CPU 732 and/or DSP 730. The voice recognition program may use conventional or proprietary voice recognition techniques. Furthermore, such voice recognition techniques may be augmented by sensor data. For instance, as described above, position/motion sensor 518 may include a vibration sensor. The vibration sensor may detect vibrations of the user associated with speaking (e.g., jaw movement of the wearer during talking), and may generate corresponding vibration information/data. The vibration information output by the vibration sensor may be received by CPU 732 and/or DSP 730, and may be used to aid in improving speech/voice recognition performed by the voice recognition program. For instance, the vibration information may be used by the voice recognition program to detect breaks between words, to identify the location of spoken syllables, to identify the syllables themselves, and/or to better perform other aspects of voice recognition. Alternatively, the vibration information may be transmitted from hearing assist device 700, along with the information signal, to a second device to perform the voice recognition process at the second device (or other device).
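A minimal sketch of the vibration-assisted idea: treat a frame as the wearer's own speech only when both the microphone energy and the jaw-vibration energy are elevated, so that loud external voices (high microphone energy, low vibration) are rejected. The thresholds and frame format are assumptions for illustration.

```python
from typing import Sequence

# Hypothetical vibration-gated speech detection (thresholds assumed).

def frame_energy(frame: Sequence[float]) -> float:
    return sum(x * x for x in frame) / len(frame)

def is_wearer_speaking(mic_frame: Sequence[float],
                       vib_frame: Sequence[float],
                       mic_threshold: float = 0.01,
                       vib_threshold: float = 0.005) -> bool:
    # Both conditions must hold: sound was heard AND the jaw was moving.
    return (frame_energy(mic_frame) > mic_threshold and
            frame_energy(vib_frame) > vib_threshold)
```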
- In step 1306, the generated information signal is transmitted to the second device. For instance, as shown in FIG. 7, CPU 732 may provide the information signal (e.g., from CPU registers, from DSP 730, from memory 734, etc.) to a transceiver to be transmitted from hearing assist device 700 (e.g., NFC transceiver 718, BT transceiver 722, or other transceiver).
- Another device, such as mobile computing device 902, stationary computing device 904, or server 906, may receive the transmitted voice information, and may analyze the voice (spoken words, moans, slurred words, etc.) therein to determine one or more functions/actions to be performed. As a result, one or more functions/actions may be determined to be performed by hearing assist device 700 or another device.
- In another embodiment, hearing assist device 700 may be configured to enable voice to be received and/or generated to be played to the user. For instance, hearing assist device 700 may operate according to FIG. 14. FIG. 14 shows a flowchart 1400 of a process for generating voice to be broadcast to a user, according to an exemplary embodiment. For purposes of illustration, flowchart 1400 is described as follows with reference to FIG. 7.
- Flowchart 1400 begins with step 1402. In step 1402, a sensor output signal is received from a medical sensor of the hearing assist device that senses a characteristic of the user. Similarly to step 802 of FIG. 8, sensors 702a-702c each sense/measure information about a health characteristic of the user of hearing assist device 700. For instance, sensor 702a may sense a characteristic of the user (e.g., a heart rate, a blood pressure, a glucose level, a temperature, etc.). Sensor 702a generates sensor output signal 758a, which indicates the measured information about the corresponding health characteristic. Sensor interface 728a, when present, may convert sensor output signal 758a to modified sensor output signal 760a, to be received by processing logic.
- In step 1404, processed sensor data is generated based on the sensor output signal. Similarly to step 804 of FIG. 8, processing logic 704 receives modified sensor output signal 760a, and may process modified sensor output signal 760a in any manner. For instance, as shown in FIG. 7, CPU 732 may receive modified sensor output signal 760a, and may process the sensor information contained therein to generate processed sensor data. For instance, CPU 732 may manipulate the sensor information (e.g., according to an algorithm of code 738) to convert the sensor information into a presentable form (e.g., scaling the sensor information, adding or subtracting a constant to/from the sensor information, etc.), or may otherwise process the sensor information. Furthermore, CPU 732 may transmit the sensor information of modified sensor output signal 760a to DSP 730 to be digital signal processed.
- In step 1406, a voice audio signal generated based at least on the processed sensor data is received. In an embodiment, the processed sensor data generated in step 1404 may be transmitted from hearing assist device 700 to another device (e.g., as shown in FIG. 9), and a voice audio signal may be generated at the other device based on the processed sensor data. In another embodiment, the voice audio signal may be generated by processing logic 704 based on the processed sensor data. The voice audio signal contains voice information (e.g., spoken words) that relates to the processed sensor data. For instance, the voice information may include a verbal alert, verbal instructions, and/or other verbal information to be provided to the user based on the processed sensor data (e.g., based on a value of measured sensor data, etc.). The voice information may be generated by being synthesized, by being retrieved from memory 734 (e.g., from a library of recorded spoken segments in other data 768), or by a combination thereof. It is noted that the voice audio signal may be generated based on processed sensor data from one or more sensors. DSP 730 may output the voice audio signal as processed digital audio signal 762.
- In step 1408, voice is broadcast from the speaker into the ear of the user based on the received voice audio signal. For instance, as shown in FIG. 7, D/A converter 764 may be present, and may receive processed digital audio signal 762. D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766. Speaker 714 receives processed audio signal 766, and broadcasts voice generated based on processed audio signal 766 into the ear of the user.
- In this manner, voice may be provided to the user by hearing assist device 700 based at least on sensor data, and optionally further based on additional information. The voice may provide information to the user, and may remind or instruct the user to perform a function/action. For instance, the voice may include at least one of a verbal instruction to the user (“take an iron supplement”), a verbal warning to the user (“your heart rate is high”), a verbal question to the user (“have you fallen down, and do you need assistance?”), or a verbal answer to the user (“your heart rate is 98 beats per minute”).
- In accordance with various embodiments, the performance of one or more functions by a hearing assist device is assisted or improved in some manner by utilizing resources of an external device and/or service to which the hearing assist device may be communicatively connected. Such performance assistance or improvement may be achieved, for example and without limitation, by utilizing power resources, processing resources, storage resources, sensor resources, and/or user interface resources of an external device or service to which the hearing assist device may be communicatively connected.
-
FIG. 15 is a block diagram of an example system 1500 that enables external operational support to be provided to a hearing assist device in accordance with an embodiment. As shown in FIG. 15, system 1500 includes a first hearing assist device 1501, a second hearing assist device 1503, and a portable electronic device 1505. First and second hearing assist devices 1501 and 1503 may each be implemented in the manner of any of the hearing assist devices described elsewhere herein, although devices 1501 and 1503 are not limited to those implementations. Furthermore, although FIG. 15 shows two hearing assist devices that can be worn by a user, it is to be understood that the external operational support techniques described herein can also be applied to a single hearing assist device worn by a user.
- Portable electronic device 1505 is intended to represent an electronic device that may be carried by or is otherwise locally accessible to a wearer of first and second hearing assist devices 1501 and 1503. For example, portable electronic device 1505 may comprise a smart phone, a tablet computer, a netbook, a laptop computer, a remote control device, a personal media player, a handheld gaming device, or the like. It is noted that certain external operational support features described herein are premised on the ability of a wearer of a hearing assist device to hold portable electronic device 1505 and/or lift portable electronic device 1505 toward his/her ear. For these embodiments, it is to be understood that portable electronic device 1505 has a form factor that permits such actions to be taken. However, for embodiments that comprise other external operational support features that do not require such actions to be taken, it is to be understood that portable electronic device 1505 may have a larger form factor. For example, in accordance with certain embodiments, portable electronic device 1505 may comprise a desktop computer or television.
- As further shown in FIG. 15, first hearing assist device 1501 and second hearing assist device 1503 are capable of communicating with each other via a communication link 1521. Communication link 1521 may be established using, for example and without limitation, a wired communication link, a wireless communication link (wherein such wireless communication link may be established using NFC, BLUETOOTH® low energy (BTLE) technology, wireless power transfer (WPT) technology, telecoil, or the like), or skin-based signal transmission. Furthermore, first hearing assist device 1501 is capable of communicating with portable electronic device 1505 via a communication link 1523 and second hearing assist device 1503 is capable of communicating with portable electronic device 1505 via a communication link 1525. Each of communication links 1523 and 1525 may be established using any of the example wired or wireless techniques noted above for communication link 1521.
- As also shown in FIG. 15, portable electronic device 1505 is capable of communicating with various other entities via one or more wired and/or wireless communication pathways 1513. For example, portable electronic device 1505 may access one or more hearing assist device support services 1511 via communication pathway(s) 1513. Such hearing assist device support service(s) 1511 may be executed or otherwise provided by a device such as but not limited to a set top box, a television, a wired or wireless access point, or a server that is accessed via communication pathway(s) 1513. Such device may also comprise a gateway via which such hearing assist device support service(s) 1511 may be accessed. As will be appreciated by persons skilled in the art, such hearing assist device support service(s) 1511 may also comprise cloud-based services accessed via a network. Since portable electronic device 1505 can access such hearing assist device support service(s) 1511 and can also communicate with first and second hearing assist devices 1501 and 1503, portable electronic device 1505 is capable of making hearing assist device support service(s) 1511 available to first and second hearing assist devices 1501 and 1503.
- Portable electronic device 1505 can also access one or more support personnel system(s) 1515 via communication pathway(s) 1513. Support personnel system(s) 1515 are intended to generally represent systems that are owned and/or operated by persons having an interest (personal, professional, fiduciary or otherwise) in the health, well-being, or some other state of a wearer of first and second hearing assist devices 1501 and 1503. Since portable electronic device 1505 can access such support personnel system(s) 1515 and can also communicate with first and second hearing assist devices 1501 and 1503, portable electronic device 1505 is capable of carrying out communication between first and second hearing assist devices 1501 and 1503 and support personnel system(s) 1515.
- Wired and/or wireless communication pathway(s) 1513 may be implemented using any of a wide variety of communication media and associated protocols. For example, communication pathway(s) 1513 may comprise wireless communication pathways implemented via radio frequency (RF) signaling, infrared (IR) signaling, or the like. Such signaling may be carried out using long-range wireless protocols such as WIMAX® (IEEE 802.16) or GSM (Global System for Mobile Communications), medium-range wireless protocols such as WI-FI® (IEEE 802.11), and/or short-range wireless protocols such as BLUETOOTH® or any of a variety of IR-based protocols. Communication pathway(s) 1513 may also comprise wired communication pathways established over twisted pair, Ethernet cable, coaxial cable, optical fiber, or the like, using suitable communication protocols therefor.
-
Communication links 1523 and 1525 between first and second hearing assist devices 1501 and 1503 and portable electronic device 1505 enable first and second hearing assist devices 1501 and 1503 to utilize the resources of portable electronic device 1505 to assist in performing certain operations and/or improve the performance of such operations. Furthermore, since portable electronic device 1505 can access hearing assist device support service(s) 1511 and support personnel system(s) 1515, portable electronic device 1505 can also make such system(s) and service(s) available to first and second hearing assist devices 1501 and 1503.
- These concepts will now be further explained with respect to FIG. 16, which depicts a system 1600 comprising a hearing assist device 1601 and a cloud/service/phone/portable device 1603 that may be communicatively connected thereto. Hearing assist device 1601 may comprise, for example and without limitation, either of hearing assist devices 1501 and 1503 of FIG. 15 or any of the hearing assist devices described above in Sections II-IV. Although only a single hearing assist device 1601 is shown in FIG. 16, it is to be understood that system 1600 may include two hearing assist devices. Device 1603 may comprise, for example and without limitation, portable electronic device 1505 or a device used to implement any of hearing assist device support service(s) 1511 or support personnel system(s) 1515 that are accessible to portable electronic device 1505 as described above in reference to FIG. 15. Thus, device 1603 may be local with respect to the wearer of hearing assist device 1601 or remote with respect to the wearer of hearing assist device 1601.
- Hearing assist device 1601 includes a number of processing modules that may be implemented as software or firmware running on one or more general purpose processors and/or digital signal processors (DSPs), as dedicated circuitry, or as a combination thereof. Such processors and/or dedicated circuitry are collectively referred to in FIG. 16 as general purpose (DSP) and dedicated processing circuitry 1613. As shown in FIG. 16, the processing modules include a speech generation module 1623, a speech/noise recognition module 1625, an enhanced audio processing module 1627, a clock/scheduler module 1629, a mode select and reconfiguration module 1631, and a battery management module 1633.
- As also shown in FIG. 16, hearing assist device 1601 further includes local storage 1635. Local storage 1635 comprises one or more volatile and/or non-volatile memory devices or structures that are internal to hearing assist device 1601. Such memory devices or structures may be used to store recorded audio information in an audio playback queue 1637, as well as to store information and settings 1639 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services (cloud-based or otherwise) accessed by or on behalf of hearing assist device 1601.
- Hearing assist device 1601 further includes sensor components and associated circuitry 1641. Such sensor components and associated circuitry may include but are not limited to one or more microphones, bone conduction sensors, temperature sensors, blood pressure sensors, blood glucose sensors, pulse oximetry sensors, pH sensors, vibration sensors, accelerometers, gyros, magnetos, or the like. Further sensor types that may be included in hearing assist device 1601, and information regarding the structure, function, and operation of such sensors, are provided above in Sections II-IV.
- Hearing assist device 1601 still further includes user interface (UI) components and associated circuitry 1643. Such UI components may include buttons, switches, dials, or other mechanical components by which a user may control and configure the operation of hearing assist device 1601. Such UI components may also comprise capacitive sensing components to allow for touch-based or tap-based interaction with hearing assist device 1601. Such UI components may further include a voice-based UI. Such voice-based UI may utilize speech/noise recognition module 1625 to recognize commands uttered by a user of hearing assist device 1601 and/or speech generation module 1623 to provide output in the form of pre-defined or synthesized speech. In an embodiment in which hearing assist device 1601 comprises an integrated part of a pair of glasses, visor, or helmet, user interface components and associated circuitry 1643 may also comprise a display integrated with or projected upon a portion of the glasses, visor, or helmet for presenting information to a user.
- Hearing assist device 1601 also includes communication interfaces and associated circuitry 1645 for carrying out communication over one or more wired, wireless, or skin-based communication pathways. Communication interfaces and associated circuitry 1645 enable hearing assist device 1601 to communicate with device 1603. Communication interfaces and associated circuitry 1645 may also enable hearing assist device 1601 to communicate with a second hearing assist device worn by the same user as well as with other devices.
- Generally speaking, cloud/service/phone/portable device 1603 comprises power resources, processing resources, and storage resources that can be used by hearing assist device 1601 to assist in performing certain operations and/or to improve the performance of such operations when a communication pathway has been established between the two devices.
- In particular, device 1603 includes a number of assist processing modules that may be implemented as software or firmware running on one or more general purpose processors and/or DSPs, as dedicated circuitry, or as a combination thereof. Such processors and/or dedicated circuitry are collectively referred to in FIG. 16 as general/dedicated processing circuitry (with hearing assist device support) 1653. As shown in FIG. 16, the processing modules include a speech generation assist module 1655, a speech/noise recognition assist module 1657, an enhanced audio processing assist module 1659, a clock/scheduler assist module 1661, a mode select and reconfiguration assist module 1663, and a battery management assist module 1665.
- As also shown in FIG. 16, device 1603 further includes storage 1667. Storage 1667 comprises one or more volatile and/or non-volatile memory devices/structures and/or storage systems that are internal to or otherwise accessible to device 1603. Such memory devices/structures and/or storage systems may be used to store recorded audio information in an audio playback queue 1669, as well as to store information and settings 1671 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services (cloud-based or otherwise) accessed by or on behalf of hearing assist device 1601.
- Device 1603 also includes communication interfaces and associated circuitry 1677 for carrying out communication over one or more wired, wireless, or skin-based communication pathways. Communication interfaces and associated circuitry 1677 enable device 1603 to communicate with hearing assist device 1601. Such communication may be direct (point-to-point between device 1603 and hearing assist device 1601) or indirect (through one or more intervening devices or nodes). Communication interfaces and associated circuitry 1677 may also enable device 1603 to communicate with other devices or access various remote services, including cloud-based services.
- In an embodiment in which device 1603 comprises a device that is carried by or is otherwise locally accessible to a wearer of hearing assist device 1601, device 1603 may also comprise supplemental sensor components and associated circuitry 1673 and supplemental user interface components and associated circuitry 1675 that can be used by hearing assist device 1601 to assist in performing certain operations and/or to improve the performance of such operations.
- Further explanation and examples of how external operational support may be provided to a hearing assist device will now be provided with continued reference to system 1600 of FIG. 16.
- A prerequisite for providing external operational support to hearing assist device 1601 by device 1603 may be the establishment of a communication pathway between device 1603 and hearing assist device 1601. In one embodiment, the establishment of such a communication pathway is achieved by implementing a communication service on hearing assist device 1601 that monitors for the presence of device 1603 and selectively establishes communication therewith in accordance with a predefined protocol. Alternatively, a communication service may be implemented on device 1603 that monitors for the presence of hearing assist device 1601 and selectively establishes communication therewith in accordance with a predefined protocol. Still other methods of establishing a communication pathway between hearing assist device 1601 and device 1603 may be used.
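A communication service of the kind described above can be sketched as a simple monitor-and-connect loop. The discovery and connection calls below are hypothetical placeholders (no real Bluetooth or NFC API is assumed); the point is the structure: poll for the supporting device, then establish a session according to a predefined protocol.

```python
import time

# Hypothetical monitor/connect loop (discover() and connect() are
# placeholders, not real APIs) for establishing a pathway to device 1603.

def discover() -> bool:
    """Placeholder: report whether the supporting device is in range."""
    return False

def connect() -> None:
    """Placeholder: perform the predefined pairing/session protocol."""
    print("communication pathway established")

def communication_service(poll_interval_s: float = 5.0, max_polls: int = 3):
    for _ in range(max_polls):  # bounded here so the sketch terminates
        if discover():
            connect()
            return
        time.sleep(poll_interval_s)

communication_service(poll_interval_s=0.1)
```
- Battery Management.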
-
Hearing assist device 1601 includes battery management module 1633 that monitors a state of a battery internal to hearing assist device 1601. Battery management module 1633 may also be configured to alert a wearer of hearing assist device 1601 when such battery is in a low-power state so that the wearer can recharge the battery. As discussed above, the wearer of hearing assist device 1601 can cause such recharging to occur by bringing a portable electronic device within a certain distance of hearing assist device 1601 such that power may be transferred via an NFC link, WPT link, or other suitable link for transferring power between such devices. In an embodiment in which device 1603 comprises such a portable electronic device, hearing assist device 1601 may be said to be utilizing the power resources of device 1603 to assist in the performance of its operations.
- As also noted above, when a communication pathway has been established between hearing assist device 1601 and device 1603, hearing assist device 1601 can also utilize other resources of device 1603 to assist in performing certain operations and/or to improve the performance of such operations. Whether and when hearing assist device 1601 so utilizes the resources of device 1603 may vary depending upon the designs of such devices and/or any user configuration of such devices.
- For example, hearing assist device 1601 may be programmed to only utilize certain resources of device 1603 when the battery power available to hearing assist device 1601 has dropped below a certain level. As another example, hearing assist device 1601 may be programmed to only utilize certain resources of device 1603 when it is determined that an estimated amount of power that will be consumed in maintaining a particular communication pathway between hearing assist device 1601 and device 1603 will be less than an estimated amount of power that will be saved by offloading functionality to and/or utilizing the resources of device 1603. In accordance with such an embodiment, an assistance feature of device 1603 may be provided when a very low power communication pathway can be established or exists between hearing assist device 1601 and device 1603, but that same assistance feature of device 1603 may be disabled if the only communication pathway that can be established or exists between hearing assist device 1601 and device 1603 is one that consumes a relatively greater amount of power.
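The power trade-off described above reduces to a comparison of two estimates, as in the following sketch. The energy figures are assumptions for illustration; real values would depend on the link technology and the task being offloaded.

```python
# Illustrative offload decision (all energy figures assumed): offload only
# when communicating costs less energy than computing locally would save.

def should_offload(local_compute_mj: float, tx_bytes: int, rx_bytes: int,
                   link_cost_mj_per_byte: float) -> bool:
    comm_energy_mj = (tx_bytes + rx_bytes) * link_cost_mj_per_byte
    return comm_energy_mj < local_compute_mj

# A BTLE-class link is cheap per byte, so offloading a heavy task wins:
print(should_offload(local_compute_mj=50.0, tx_bytes=2000, rx_bytes=500,
                     link_cost_mj_per_byte=0.005))  # True: 12.5 mJ < 50 mJ
```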
- Still other decision algorithms can be used to determine whether and when hearing assist device 1601 will utilize resources of device 1603. Such algorithms may be applied by battery management module 1633 of hearing assist device 1601 and/or by battery management assist module 1665 of device 1603 prior to activating assistance features of device 1603. Furthermore, a user interface provided by hearing assist device 1601 and/or device 1603 may enable a user to select which features of hearing assist device 1601 should be able to utilize external operational support and/or under what conditions such external operational support should be provided. The settings established by the user may be stored as part of information and settings 1639 in local storage 1635 of hearing assist device 1601 and/or as part of information and settings 1671 in storage 1667 of device 1603.
- In accordance with certain embodiments, hearing assist device 1601 can also utilize resources of a second hearing assist device to perform certain operations. For example, hearing assist device 1601 may communicate with a second hearing assist device worn by the same user to coordinate distribution or shared execution of particular operations. Such communication may be carried out, for example, via a point-to-point link between the two hearing assist devices or via links between the two hearing assist devices and an intermediate device, such as a portable electronic device being carried by a user. The determination of whether a particular operation should be performed by hearing assist device 1601 versus the second hearing assist device may be made by battery management module 1633, a battery management module of the second hearing assist device, or via coordination between both battery management modules.
- For example, if hearing assist device 1601 has more battery power available than the second hearing assist device, hearing assist device 1601 may be selected to perform a particular operation, such as taking a blood pressure reading or the like. Such battery imbalance may result from, for example, one hearing assist device being used at a higher volume than the other over an extended period of time. Via coordination between the two hearing assist devices, a more balanced discharging of the batteries of both devices can be achieved. Furthermore, in accordance with certain embodiments, certain sensors may be present on hearing assist device 1601 that are not present on the second hearing assist device, and certain sensors may be present on the second hearing assist device that are not present on hearing assist device 1601, such that a distribution of functionality between the two hearing assist devices is achieved by design.
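The balancing policy just described can be reduced to a small coordination rule, sketched below. The battery-percentage interface is hypothetical; in practice each battery management module would report its own state over the inter-device link.

```python
# Hypothetical battery-balancing rule: the device with more remaining
# charge takes the next discretionary task (e.g., a blood pressure reading).

def assign_task(left_battery_pct: float, right_battery_pct: float) -> str:
    return "left" if left_battery_pct >= right_battery_pct else "right"

# The left device has been run at higher volume and is lower on charge,
# so the right device is selected for the next sensor reading.
print(assign_task(left_battery_pct=42.0, right_battery_pct=61.0))  # right
```
- Speech Generation.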
-
Hearing assist device 1601 comprises a speech generation module 1623 that enables hearing assist device 1601 to generate and output verbal audio information (spoken words or the like) to a wearer thereof via a speaker of hearing assist device 1601. Such verbal audio information may be used to implement a voice UI, to provide speech-based alerts, messages, and reminders as part of a clock/scheduler feature implemented by clock/scheduler module 1629, or to provide emergency alerts or messages to a wearer of the hearing assist device based on a detected medical condition of the wearer, or the like. The speech generated by speech generation module 1623 may be pre-recorded and/or dynamically synthesized, depending upon the implementation.
- When a communication pathway has been established between hearing assist device 1601 and device 1603, speech generation assist module 1655 of device 1603 may operate to perform all or part of the speech generation function that would otherwise be performed by speech generation module 1623 of hearing assist device 1601. Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved. Any speech generated by speech generation assist module 1655 may be communicated back to hearing assist device 1601 for playback via at least one speaker of hearing assist device 1601. Any of a wide variety of well-known speech codecs may be used to carry out such transmission of speech information in an efficient manner. Additionally or alternatively, any speech generated by speech generation assist module 1655 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601.
- Furthermore, speech generation assist module 1655 may provide a more elaborate set of features than those provided by speech generation module 1623, as device 1603 may have access to greater power, processing, and storage resources than hearing assist device 1601 to support such additional features. For example, speech generation assist module 1655 may provide a more extensive vocabulary of pre-recorded words, terms, and sentences, or may provide a more powerful speech synthesis engine.
- Speech and Noise Recognition.
-
Hearing assist device 1601 includes a speech/noise recognition module 1625 that is operable to apply speech and/or noise recognition algorithms to audio input received via one or more microphones of hearing assist device 1601. Such algorithms can enable speech/noise recognition module 1625 to determine when a wearer of hearing assist device 1601 is speaking and further to recognize words that are spoken by such wearer, while rejecting non-speech utterances and noise. Such algorithms may be used, for example, to enable hearing assist device 1601 to provide a voice-based UI by which a wearer of hearing assist device 1601 can exercise voice-based control over the device.
- When a communication pathway has been established between hearing assist device 1601 and device 1603, speech/noise recognition assist module 1657 of device 1603 may operate to perform all or part of the speech/noise recognition functions that would otherwise be performed by speech/noise recognition module 1625 of hearing assist device 1601. Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved.
- Furthermore, speech/noise recognition assist module 1657 may provide a more elaborate set of features than those provided by speech/noise recognition module 1625, as device 1603 may have access to greater power, processing, and storage resources than hearing assist device 1601 to support such additional features. For example, speech/noise recognition assist module 1657 may include a training program that a wearer of hearing assist device 1601 can use to train the speech recognition logic to better recognize and interpret his/her own voice. As another example, speech/noise recognition assist module 1657 may include a process by which a wearer of hearing assist device 1601 can add new words to the dictionary of words that are recognized by the speech recognition logic. Such additional features may be included in an application that can be installed by the wearer on device 1603. Such additional features may also be supported by a user interface that forms part of supplemental user interface components and associated circuitry 1675. Of course, such features may be included in speech/noise recognition module 1625 in accordance with certain embodiments.
- Enhanced Audio Processing.
-
Hearing assist device 1601 includes an enhanced audio processing module 1627. Enhanced audio processing module 1627 may be configured to process an input audio signal received by hearing assist device 1601 to achieve a desired frequency response prior to playing back such input audio signal to a wearer of hearing assist device 1601. For example, enhanced audio processing module 1627 may selectively amplify certain frequency components of an input audio signal prior to playing back such input audio signal to the wearer. The frequency response to be achieved may be specified by or derived from a prescription for the wearer that is provided to hearing assist device 1601 by an external device or system. With reference to the example components of FIG. 15, such external device or system may include any of portable electronic device 1505, hearing assist device support service(s) 1511, or support personnel system(s) 1515. In certain embodiments, such prescription may be formatted in a standardized manner in order to facilitate use thereof by any of a variety of hearing assistance devices and audio reproduction systems.
- In accordance with a further embodiment in which hearing assist device 1601 is worn in conjunction with a second hearing assist device, enhanced audio processing module 1627 may modify a first input audio signal received by hearing assist device 1601 prior to playback of the first input audio signal to one ear of the wearer, while an enhanced audio processing module of the second hearing assist device modifies a second input audio signal received by the second hearing assist device prior to playback of the second input audio signal to the other ear of the wearer. Such modification of the first and second input audio signals can be used to achieve enhanced spatial signaling for the wearer. That is to say, the enhanced audio signals provided to both ears of the wearer will enable the wearer to better determine the spatial origin of sounds. Such enhancement is desirable for persons who have a poor ability to detect the spatial origin of sound, and therefore a poor ability to respond to spatial cues. To determine the appropriate modifications for the left and right ears of the wearer, an appropriate user-specific “head transfer function” can be determined through testing of a user. The results of such testing may then be used to calibrate the spatial audio enhancement function applied at each ear.
- FIG. 17 is a block diagram of an enhanced audio processing module 1700 that may be utilized by hearing assist device 1601 to provide such enhanced spatial signaling. Enhanced audio processing module 1700 is configured to process an audio signal produced by a microphone of a left ear hearing assist device (denoted MIC L) and an audio signal produced by a microphone of a right ear hearing assist device (denoted MIC R) to produce an audio signal for playback to the left ear of a user (denoted LEFT).
- In particular, enhanced audio processing module 1700 includes an amplifier 1702 that amplifies the MIC L signal. Such signal may also be converted from analog to digital form by an analog-to-digital (A/D) converter (not shown in FIG. 17). The output of amplifier 1702 is passed to a logic block 1704 that applies a head transfer function (HTF) thereto. The output of logic block 1704 is passed to a multiplier 1706 that applies a scaling function thereto. The output of multiplier 1706 is passed to a mixer 1720. Enhanced audio processing module 1700 also includes an amplifier 1712 that amplifies the MIC R signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 17). The output of amplifier 1712 is passed to a logic block 1714 that applies an HTF thereto. The output of logic block 1714 is passed to a multiplier 1716 that applies a scaling function thereto. The output of multiplier 1716 is passed to mixer 1720. Mixer 1720 combines the output of multiplier 1706 and the output of multiplier 1716. The audio signal output by mixer 1720 is passed to an amplifier 1722 that amplifies it to produce the LEFT audio signal. Such signal may also be converted from digital to analog form by a digital-to-analog (D/A) converter (not shown in FIG. 17) prior to playback.
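The FIG. 17 chain can be summarized in a few lines of code. In the sketch below the head transfer functions of logic blocks 1704 and 1714 are modeled as short FIR impulse responses and the gains are arbitrary; both are assumptions, since the patent does not specify how the HTF or scaling values are realized.

```python
import numpy as np

# Sketch of the FIG. 17 left-ear chain (HTFs modeled as FIR filters;
# all gain values are assumed for illustration).

def left_channel(mic_l: np.ndarray, mic_r: np.ndarray,
                 htf_l: np.ndarray, htf_r: np.ndarray,
                 pre_gain: float = 2.0, scale_l: float = 1.0,
                 scale_r: float = 0.6, out_gain: float = 1.5) -> np.ndarray:
    a = np.convolve(pre_gain * mic_l, htf_l, mode="same") * scale_l  # 1702->1704->1706
    b = np.convolve(pre_gain * mic_r, htf_r, mode="same") * scale_r  # 1712->1714->1716
    return out_gain * (a + b)                                        # 1720->1722

mic_l = np.random.randn(160)  # stand-ins for one block of MIC L / MIC R
mic_r = np.random.randn(160)
left = left_channel(mic_l, mic_r,
                    htf_l=np.array([0.9, 0.1, 0.0]),
                    htf_r=np.array([0.5, 0.3, 0.1]))  # LEFT output block
```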
- It is noted that to operate in such a manner, enhanced audio processing module 1700 must have access to both the MIC L signal obtained by the left ear hearing assist device (which is assumed to be hearing assist device 1601 in this example) and the MIC R signal obtained by the right ear hearing assist device. Thus, the left ear hearing assist device must be capable of communicating with the right ear hearing assist device in order to obtain the MIC R signal therefrom. Likewise, the right ear hearing assist device must be capable of communicating with the left ear hearing assist device in order to obtain the MIC L signal therefrom. Such communication may be carried out, for example, via a point-to-point link between the two hearing assist devices or via links between the two hearing assist devices and an intermediate device, such as a portable electronic device being carried by a user.
- Thus, in accordance with the foregoing, enhanced audio processing module 1627 may modify an input audio signal received by hearing assist device 1601 to achieve a desired frequency response and/or spatial signaling prior to playback of the input audio signal. Both the desired frequency response and spatial signaling may be specified by or derived from a prescription associated with a wearer of hearing assist device 1601.
- When a communication pathway has been established between hearing assist device 1601 and device 1603, enhanced audio processing assist module 1659 of device 1603 may operate to perform all or part of the enhanced audio processing functions that would otherwise be performed by enhanced audio processing module 1627 of hearing assist device 1601, provided that there is a sufficiently fast communication pathway between hearing assist device 1601 and device 1603. A sufficiently fast communication pathway is required so as not to introduce an inordinate amount of lag between the receipt and playback of audio signals by hearing assist device 1601. Such operation by device 1603 can advantageously conserve the battery power of hearing assist device 1601.
- Thus, for example, audio content collected by one or more microphones of hearing assist device 1601 may be transmitted to device 1603. Enhanced audio processing assist module 1659 of device 1603 may apply enhanced audio processing to such audio content, thereby producing enhanced audio content. The application of enhanced audio processing may comprise, but is not limited to, modifying the audio content to achieve a desired frequency response and/or spatial signaling as previously described. Device 1603 may then transmit the enhanced audio content back to hearing assist device 1601, where it may be played back to a wearer thereof. The foregoing transmission of audio content between the devices may utilize well-known audio and speech compression techniques to achieve improved transmission efficiency. Additionally or alternatively, any enhanced audio content generated by enhanced audio processing assist module 1659 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601.
- Clock/Scheduler.
- A clock/
scheduler module 1629 of hearing assist device 1601 is configured to provide a wearer thereof with alerts or messages concerning the date and/or time, upcoming appointments or events, or other types of information typically provided by, recorded in, or otherwise associated with a personal calendar and scheduling service or tool. Such alerts and messages may be conveyed on demand, such as in response to the wearer uttering the words "time" or "date" or performing some other action that is recognizable to a user interface associated with clock/scheduler module 1629. Such alerts and messages may also be conveyed automatically, such as in response to clock/scheduler module 1629 determining that an appointment or event is currently occurring or is scheduled to occur within a predetermined time frame. The alerts or messages may comprise certain sounds or words that are played back via one or more speakers of hearing assist device 1601. Where the alerts or messages comprise speech, such speech may be generated by speech generation module 1623 and/or speech generation assist module 1655.
- As shown in FIG. 16, device 1603 includes a clock/scheduler assist module 1661. In an embodiment, clock/scheduler assist module 1661 comprises a personal calendar and scheduling service or tool that a user may interact with via a personal electronic device, such as personal electronic device 1505 of FIG. 15. For example and without limitation, the personal calendar and scheduling service or tool may comprise MICROSOFT OUTLOOK®, GOOGLE CALENDAR™, or the like. When a communication pathway has been established between hearing assist device 1601 and device 1603, information concerning the current date/time, scheduled appointments and events, and the like, may be transferred from clock/scheduler assist module 1661 to clock/scheduler module 1629. Clock/scheduler module 1629 may then store such information locally where it can be used for alert and message generation as previously described.
- Clock/
scheduler module 1629 within hearing assist device 1601 may be configured to store only a subset (for example, one week's worth) of the scheduled appointments and events maintained by clock/scheduler assist module 1661 in order to conserve local storage space. Clock/scheduler module 1629 may further be configured to periodically synchronize its record of appointments and events with that maintained by clock/scheduler assist module 1661 of device 1603 when a communication pathway has been established between hearing assist device 1601 and device 1603.
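A minimal sketch of this subset-caching idea follows. The event record format and the one-week horizon are illustrative assumptions only; the description above does not prescribe either.

from datetime import datetime, timedelta

def events_to_cache(all_events, now, horizon_days=7):
    # Keep only events starting within the horizon, to conserve local storage.
    horizon = now + timedelta(days=horizon_days)
    return [e for e in all_events if now <= e["start"] < horizon]

calendar = [
    {"title": "Dentist", "start": datetime(2012, 9, 21, 9, 0)},
    {"title": "Reunion", "start": datetime(2012, 12, 1, 18, 0)},
]
# During a sync, the hearing assist device would cache only the first event.
local_subset = events_to_cache(calendar, now=datetime(2012, 9, 20))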
- When a communication pathway has been established between hearing assist device 1601 and device 1603, clock/scheduler assist module 1661 may also be utilized to perform all or a portion of the time/date reporting and alert/message generation functions that would normally be performed by clock/scheduler module 1629. Such operation by device 1603 can advantageously conserve the battery power of hearing assist device 1601. Any alerts or messages generated by clock/scheduler assist module 1661 may be communicated back to hearing assist device 1601 for playback via at least one speaker of hearing assist device 1601. Any of a wide variety of well-known speech or audio codecs may be used to carry out such transmission of alerts and messages in an efficient manner. Additionally or alternatively, any alerts or messages generated by clock/scheduler assist module 1661 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601.
- Mode Select and Reconfiguration.
- Mode select and reconfiguration module 1631 comprises a module that enables selection and reconfiguration of various operating modes of hearing
assist device 1601. As will be made evident by the discussion provided below, hearing assist device 1601 may operate in a wide variety of modes, wherein each mode may specify certain operating parameters such as: (1) from which microphones audio input is to be obtained (for example, audio input may be captured by one or more microphones of hearing assist device 1601 and/or by one or more microphones of device 1603); (2) where audio input is processed (for example, audio input may be processed by hearing assist device 1601 and/or by device 1603); (3) how audio input is processed (for example, certain audio processing features such as noise suppression, personalized frequency response processing, selective audio boosting, customized equalization, or the like may be utilized); and (4) where audio output is delivered (for example, audio output may be played back by one or more speakers of hearing assist device 1601 and/or by one or more speakers of device 1603).
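One way to picture such a mode is as a small configuration record bundling the four parameters just enumerated. The field names and values below are assumptions made for illustration only.

from dataclasses import dataclass

@dataclass
class OperatingMode:
    name: str
    input_mics: list           # (1) where audio input is obtained
    processing_location: str   # (2) where audio input is processed
    processing_features: list  # (3) how audio input is processed
    output_speakers: list      # (4) where audio output is delivered

television_mode = OperatingMode(
    name="television",
    input_mics=["hearing_assist_device"],
    processing_location="device_1603",
    processing_features=["selective_boost", "noise_suppression"],
    output_speakers=["hearing_assist_device"],
)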
- The selection and reconfiguration of a particular mode of operation may be made by a user via interaction with a user interface of hearing assist device 1601. Furthermore, device 1603 includes a mode select and reconfiguration assist module 1663 that enables a user to select and reconfigure a particular mode of operation through interaction with a user interface of device 1603. Any mode selection or reconfiguration information input to device 1603 may be passed to hearing assist device 1601 when a communication pathway between the two devices is established. As will be discussed below, in certain embodiments, device 1603 may be capable of providing a more elaborate, intuitive and user-friendly user interface by which a user can select and reconfigure operational modes of hearing assist device 1601.
- Mode select and reconfiguration module 1631 and/or mode select and reconfiguration assist module 1663 may each be further configured to enable a user to define contexts and circumstances in which a particular mode of operation of hearing
assist device 1601 should be activated or deactivated. - Local Storage: Audio Playback Queue.
-
Local storage 1635 of hearing assist device 1601 includes an audio playback queue 1637. Audio playback queue 1637 is configured to store a limited amount of audio content that has been received by hearing assist device 1601 so that it can be selectively played back to a wearer thereof. This feature enables the wearer to selectively play back certain audio content (such as words spoken by another person or the like). For example, the last 5 seconds of received audio may be played back. Such playback may be carried out at a higher volume depending upon the configuration. Such playback may be deemed desirable, for example, if the wearer did not fully comprehend something that was just said to him/her. -
Audio playback queue 1637 may comprise a first-in-first-out (FIFO) queue such that only the last few seconds or minutes of audio received by hearing assist device 1601 will be stored therein at any time. The audio signals stored in audio playback queue 1637 may comprise processed audio signals (such as audio signals that have already been processed by enhanced audio processing module 1627) or unprocessed audio signals. In the latter case, the audio signals stored in audio playback queue 1637 may be processed by enhanced audio processing module 1627 before being played back to a wearer of hearing assist device 1601. In an embodiment in which a user is wearing two hearing assist devices, a left ear queue and a right ear queue may be maintained.
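Such a FIFO queue maps naturally onto a fixed-capacity ring buffer, as in the following sketch. The five-second capacity and 16 kHz sample rate are assumptions chosen for illustration.

from collections import deque

class AudioPlaybackQueue:
    def __init__(self, seconds=5, sample_rate=16000):
        # Once full, the oldest samples are silently discarded (FIFO).
        self.buffer = deque(maxlen=seconds * sample_rate)

    def push(self, samples):
        self.buffer.extend(samples)   # store a copy of received audio

    def replay(self):
        return list(self.buffer)      # retrieve the stored copy for playback

queue = AudioPlaybackQueue()
queue.push([0.0] * 16000)             # one second of stand-in audio
last_audio = queue.replay()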
- When a communication pathway has been established between hearing assist device 1601 and device 1603, audio playback queue 1669 of device 1603 may also operate to perform all or part of the audio storage operation that would otherwise be performed by audio playback queue 1637 of hearing assist device 1601. Thus, audio playback queue 1669 may also support the aforementioned audio playback functionality by storing a limited amount of audio content received by hearing assist device 1601 and transmitted to device 1603. By so doing, power and storage resources of hearing assist device 1601 may be conserved. Furthermore, since device 1603 may have greater storage resources than hearing assist device 1601, audio playback queue 1669 provided by device 1603 may be capable of storing more and/or higher quality audio content than can be stored by audio playback queue 1637.
- In an alternate embodiment in which device 1603 is carried by or otherwise locally accessible to a wearer of hearing assist device 1601, device 1603 may independently record ambient audio via one or more microphones thereof and store such audio in audio playback queue 1669 for later playback to the wearer. Such playback may occur via one or more speakers of hearing assist device 1601 or, alternatively, via one or more speakers of device 1603. Playback by device 1603 may be opted for, for example, in a case where hearing assist device 1601 is in a low power state, missing, or fully discharged.
- Various user interface techniques may be used to initiate playback of recorded audio in accordance with different embodiments. For example, in an embodiment, pressing a button on or tapping
hearing assist device 1601 may initiate playback of a limited amount of audio. In an embodiment in which device 1603 is carried by or is otherwise locally accessible to the wearer of hearing assist device 1601, playback may be initiated by interacting with a user interface of device 1603, such as by pressing a button or tapping an icon on a touchscreen of device 1603. Furthermore, uttering certain words or sounds, such as "repeat" or "playback," may trigger playback. This feature can be implemented using the speech recognition functionality of hearing assist device 1601 or device 1603.
- In certain embodiments, recording of audio may be carried out over extended periods of time (for example, minutes, tens of minutes, or hours). In accordance with such embodiments,
audio playback queue 1669 may be relied upon to store the recorded audio content, as device 1603 may have access to greater storage resources than hearing assist device 1601. Audio compression may be used in any of the aforementioned implementations to reduce consumption of storage.
- It is noted that audio may be recorded for purposes other than playing back recently received audio. For example, recording may be used to capture the content of meetings, concerts, or other events that a wearer of hearing
assist device 1601 attends so that such audio can be replayed at a later time or shared with others. Recording may also be used for health reasons. For example, a wearer's breathing noises may be recorded while the wearer is sleeping and later analyzed to determine whether or not the wearer suffers from sleep apnea. However, these are examples only, and other uses may exist for such recording functionality. - To help further illustrate the audio playback functionality,
FIG. 18 depicts a flowchart 1800 of a method for providing audio playback support to a hearing assist device, such as hearing assist device 1601. As shown in FIG. 18, the method of flowchart 1800 begins at step 1802, in which an audio signal obtained via at least one microphone of the hearing assist device is received. At step 1804, a copy of the received audio signal is stored in an audio playback queue. At step 1806, the copy of the received audio signal is retrieved from the audio playback queue for playback to a wearer of the hearing assist device. In accordance with one embodiment, each of steps 1802, 1804 and 1806 may be performed by hearing assist device 1601. In accordance with an alternate embodiment, each of steps 1802, 1804 and 1806 may be performed by device 1603 or a service implemented by device 1603. The method of flowchart 1800 may further include playing back the copy of the received audio signal to the wearer of the hearing assist device via at least one speaker of the hearing assist device or via at least one speaker of a portable electronic device that is carried by or otherwise accessible to the wearer of the hearing assist device.
- Local Storage: Information and Settings.
-
Local storage 1635 also stores information and settings 1639 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services accessed by or on behalf of hearing assist device 1601. Such information and settings may include, for example, owner information (which may be used, for example, to recognize and/or authenticate an owner of hearing assist device 1601), security information (including but not limited to passwords, passcodes, encryption keys or the like) used to facilitate private and secure communication with external devices (such as device 1603), and account information useful for signing in to various services available on certain external computer systems. Such information and settings may also include personalized selections and controls relating to user-configurable aspects of the operation of hearing assist device 1601 and/or to user-configurable aspects of the operation of any device with which hearing assist device 1601 may be paired, or any services (cloud-based or otherwise) that may be accessed by or on behalf of hearing assist device 1601.
- As shown in
FIG. 16, storage 1667 of device 1603 also includes information and settings 1671 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services accessed by or on behalf of hearing assist device 1601. Information and settings 1671 may comprise a backup copy of information and settings 1639 stored on hearing assist device 1601. Such a backup copy may be updated periodically when hearing assist device 1601 and device 1603 are communicatively linked. Such a backup copy may be maintained on device 1603 in order to ensure that important data is not lost or otherwise rendered inaccessible if hearing assist device 1601 is lost or runs out of power. In a further embodiment, information and settings 1639 stored on hearing assist device 1601 may be temporarily or permanently moved to device 1603 to free up storage space on hearing assist device 1601, in which case information and settings 1671 may comprise the only copy of such data. In a still further embodiment, information and settings 1671 stored on device 1603 may comprise a superset of information and settings 1639 stored on hearing assist device 1601. In accordance with such an embodiment, hearing assist device 1601 may selectively retrieve necessary information and settings from device 1603 on an as-needed basis and cache only a subset of such data in local storage 1635.
- Sensor Components and Associated Circuitry.
- As noted above, sensor components and associated
circuitry 1641 of hearing assist device 1601 may include any number of sensors including but not limited to one or more microphones, bone conduction sensors, temperature sensors, blood pressure sensors, blood glucose sensors, pulse oximetry sensors, pH sensors, vibration sensors, accelerometers, gyros, magnetos, or the like. In an embodiment in which device 1603 comprises a portable electronic device that is carried by or otherwise locally accessible to a wearer of hearing assist device 1601 (such as portable electronic device 1505), the sensor components and associated circuitry of device 1603 may also include all or some subset of the foregoing sensors. For example, in an embodiment, device 1603 may comprise a smart phone that includes one or more microphones, accelerometers, gyros, or magnetos.
- In accordance with such an embodiment, when a communication pathway has been established between hearing assist
device 1601 and device 1603, one or more of the sensors included in device 1603 may be used to perform all or a portion of the functions performed by corresponding sensor(s) in hearing assist device 1601. By utilizing such sensor(s) of device 1603, battery power of hearing assist device 1601 may be conserved.
- Furthermore, data provided by the sensors included within
device 1603 may be used to augment or verify information provided by the sensors within hearing assist device 1601. For example, information provided by any accelerometers, gyros or magnetos included within device 1603 may be used to provide enhanced information regarding a current body position (for example, standing up, leaning over or lying down) and/or orientation of the wearer of hearing assist device 1601. Device 1603 may also include a GPS device that can be utilized to provide enhanced location information regarding the wearer of hearing assist device 1601. Furthermore, device 1603 may include its own set of health monitoring sensors that can produce data that can be combined with data produced by health monitoring sensors of hearing assist device 1601 to provide a more accurate or complete picture of the state of health of the wearer of hearing assist device 1601.
- User Interface Components and Associated Circuitry.
- Depending upon the implementation, hearing assist
device 1601 may have a very simple user interface or a user interface that is more elaborate. For example, in an embodiment in which hearing assist device 1601 comprises an ear bud, the user interface thereof may comprise very simple mechanical elements such as switches, buttons or dials. This may be due to the very limited surface area available for supporting such an interface. Even with a small form factor device, however, a voice-based user interface or a simple touch-based or tap-based user interface based on the use of capacitive sensing is possible. Also, head motion sensing, local or remote voice activity detection (VAD), or audio monitoring may be used to place a hearing assist device into a fully active state. In contrast, in an embodiment in which hearing assist device 1601 comprises an integrated part of a pair of glasses, a visor, or a helmet, a more elaborate user interface comprising one or more displays and other features may be possible.
- In an embodiment in which
device 1603 comprises a portable electronic device that is carried by or otherwise locally accessible to a wearer of hearing assist device 1601 (such as portable electronic device 1505), supplemental user interface components and associated circuitry 1675 of device 1603 may provide a means by which a user can interact with hearing assist device 1601, thereby extending the user interface of that device. For example, device 1603 may comprise a phone or tablet computer having a touch screen display that can be used to interact with and manage the features of hearing assist device 1601. In accordance with such an embodiment, an application may be downloaded to or otherwise installed on device 1603 that enables a user thereof to interact with and manage the features of hearing assist device 1601 by interacting with a touch screen display or other user interface element of device 1603. This can enable a more elaborate, intuitive and user-friendly user interface to be designed for hearing assist device 1601. Such user interface may be made accessible to a user only when a communication pathway is established between device 1603 and hearing assist device 1601, so that changes to the configuration of hearing assist device 1601 can be applied to that device in real time. Alternatively, such user interface may be made accessible to a user even when there is no communication pathway established between device 1603 and hearing assist device 1601. In this case, any changes made to the configuration of hearing assist device 1601 via the user interface provided by device 1603 may be stored on device 1603 and then later transmitted to hearing assist device 1601 when a suitable communication pathway becomes available.
- In accordance with the embodiments described above in reference to
FIGS. 15 and 16, the quality of audio content received by a hearing assist device may be improved by utilizing an external device or service to process such audio content when such external device or service is communicatively connected to the hearing assist device. For example, as discussed above in reference to FIG. 16, enhanced audio processing assist module 1659 of device 1603 may process audio content received from hearing assist device 1601 to achieve a desired frequency response and/or spatial signaling and then return the processed audio content to hearing assist device 1601 for playback thereby. Furthermore, any other audio processing technique that may have the effect of improving audio quality may be applied by such external device or service, including but not limited to any of a variety of noise suppression or speech intelligibility enhancement techniques, whether presently known or hereinafter developed. Whether or not such connected external device or service is utilized to perform such enhanced processing may depend on a variety of factors, including a current state of a battery of the hearing assist device, a current selected mode of operation of the hearing assist device, or the like.
- In addition to processing audio content received from a hearing assist device, an external device (such as portable electronic device 1505) may forward audio content to another device to which it is communicatively connected (for example, any device used to implement hearing assist device support service(s) 1511 or support personnel system(s) 1515) so that such audio content may be processed by such other device.
- In a further embodiment, the audio that is remotely processed and returned to the hearing assist device is audio that is captured by one or more microphones of an external device rather than by the microphone(s) of the hearing assist device itself. This enables the hearing assist device to avoid having to capture, package and transmit audio, thereby conserving battery power and other resources. For example, with continued reference to
system 1600 of FIG. 16, in an embodiment in which device 1603 comprises a portable electronic device carried by or otherwise locally accessible to a wearer of hearing assist device 1601, one or more microphones of device 1603 may be used to capture audio content from an environment in which the wearer is located. In this case, any enhanced audio processing may be performed by device 1603 or by a device or service accessible thereto. The processed audio content may then be delivered by device 1603 to hearing assist device 1601 for playback thereby. Additionally or alternatively, such processed audio content may be played back via one or more speakers of device 1603 itself. The foregoing approach to audio processing may be deemed desirable, for example, if hearing assist device 1601 is in a very low power or even non-functioning state. In certain embodiments in which device 1603 comprises a device having more, larger and/or more sensitive microphones than those available on hearing assist device 1601, the foregoing approach to audio enhancement may actually produce higher quality audio than would be produced using only the microphone(s) of hearing assist device 1601.
- The foregoing example assumes that audio content is processed for the purpose of enhancing the quality thereof. However, such audio content may also be processed for speech recognition purposes. In the case of speech recognition, the audio content may comprise one or more voice commands that are intended to initiate or provide input to a process executing outside of hearing
assist device 1601. In such a case, like principles apply in that the audio content may be captured by microphone(s) of device 1603 and processed by device 1603 or by a device or service accessible thereto. However, in this case, what is returned to the wearer may comprise something other than a processed version of the original audio content captured by device 1603. For example, if the voice commands were intended to initiate an Internet search, then what is returned to the wearer may comprise the results of such a search. The search results may be presented on a display of device 1603, for example. Alternatively, if hearing assist device 1601 comprises an integrated part of a pair of glasses, visor or helmet having a display, then the search results may be presented on such display. Still further, such search results could be played back via one or more speakers of device 1603 or hearing assist device 1601 using text-to-speech conversion.
- In a further embodiment, a wearer of hearing
assist device 1601 may initiate operation in a mode in which audio content is captured by one or more microphones of device 1603 and processed by device 1603 (or by a device or service accessible to device 1603) to achieve desired audio effects, such as custom equalization, emphasized surround sound effects, or the like. For example, in the case of surround sound, sensors included in hearing assist device 1601 and/or device 1603 may be used to determine a position of the wearer's head relative to one or more audio sources and then to modify audio content to achieve an appropriate surround sound effect given the position of the wearer's head and the location of the audio source(s). The processed audio may then be delivered to the wearer via one or more speakers of hearing assist device 1601, a second hearing assist device, and/or device 1603. To support surround sound implementations, each hearing assist device may include multiple speakers (such as piezoelectric speakers) to deliver a surround sound effect.
- In an embodiment, the desired audio effects described above may be defined by a user and stored as part of a profile associated with the user and/or with a particular operational mode of a hearing assist device, wherein the operational mode may be further associated with certain contexts or conditions in which the mode should be utilized. Such profile may be formatted in a standardized manner such that it can be used by a variety of hearing assist devices and audio reproduction systems.
- A wearer of hearing
assist device 1601 may define and initiate any of the foregoing operational modes by interacting with a user interface of hearing assist device 1601 or a user interface of device 1603, depending upon the implementation.
- The improvement of audio quality as described herein may include suppressing audio components generated by certain audio sources and/or boosting audio components generated by certain other audio sources. Such suppression or boosting may be performed by device 1603 (and/or a device or service accessible thereto), with processed audio being returned to hearing assist
device 1601 for playback thereby. Additionally or alternatively, processed audio may be played back by device 1603 in scenarios in which device 1603 is local with respect to the wearer of hearing assist device 1601. In accordance with the foregoing scenarios, the original audio may be captured by one or more microphones of hearing assist device 1601, a second hearing assist device, and/or device 1603 when device 1603 is local with respect to the wearer of hearing assist device 1601.
- With respect to noise suppression, the noise suppression function may utilize not only audio signal(s) captured by the microphones of the hearing assist device(s) worn by a user but also the audio signal(s) captured by the microphone(s) of a portable electronic device carried by or otherwise accessible to the user. As is known to persons skilled in the art of audio processing, by adding additional and diverse microphone reference signals, the ability of a noise suppression algorithm to identify and suppress noise can be improved.
- For example,
FIG. 19 is a block diagram of a noise suppression system 1900 that may be utilized by a hearing assist device or a device/service communicatively connected thereto in accordance with an embodiment. Noise suppression system 1900 is configured to process an audio signal produced by a microphone of a left ear hearing assist device (denoted MIC L), an audio signal produced by a microphone of a right ear hearing assist device (denoted MIC R), and an audio signal produced by a microphone of an external device (denoted MIC EXT) to produce a noise-suppressed audio signal for playback to the left ear of a user (denoted LEFT).
- In particular,
noise suppression system 1900 includes an amplifier 1902 that amplifies the MIC L signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19). The output of amplifier 1902 is passed to a noise suppressor 1908. Noise suppression system 1900 further includes an amplifier 1904 that amplifies the MIC R signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19). The output of amplifier 1904 is passed to noise suppressor 1908. Noise suppression system 1900 still further includes an amplifier 1906 that amplifies the MIC EXT signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19). The output of amplifier 1906 is passed to noise suppressor 1908. Noise suppressor 1908 applies a noise suppression algorithm that utilizes all three amplified microphone signals to generate a noise-suppressed version of the MIC L signal. The noise-suppressed audio signal generated by noise suppressor 1908 is passed to an amplifier 1910 that amplifies it to produce the LEFT audio signal. Such signal may also be converted from digital to analog form by a D/A converter (not shown in FIG. 19) prior to playback. - It is noted that to operate in such a manner,
noise suppression system 1900 must have access to the MIC L signal obtained by the left ear hearing assist device, the MIC R signal obtained by the right ear hearing device, and the MIC EXT signal obtained by the external device. This can be achieved by establishing suitable communication pathways between such devices. For example, in an embodiment in whichnoise suppression system 1900 is implemented in a portable electronic device carried by a user, the MIC L and MIC R signals may be obtained through skin-based communication and/or BLE communication between the portable electronic device and one or both of the two hearing assist devices, while the MIC EXT signal can be obtained directly from a microphone of the portable electronic device. Still other microphone signals other than those shown inFIG. 19 may be used to improve the performance of a noise suppressor. - In further embodiments, a selection may be made between using audio input provided by the microphone(s) of the hearing assist device(s) and using audio input provided by the microphone(s) of the portable electronic device. Such selection may be made manually by the wearer of the hearing assist device(s) or may be made automatically by the hearing assist device(s) and/or the portable electronic device based on a variety of factors, including but not limited to the state of the battery of the hearing assist device(s), the quality of the audio signals being captured by each device, the environment in which the wearer is located, or the like.
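The internals of noise suppressor 1908 are left open above. As one illustrative possibility (not a method prescribed by this description), the sketch below treats the external microphone as a noise reference and performs simple spectral subtraction on the MIC L signal; a production system would use far more sophisticated multi-microphone algorithms.

import numpy as np

def suppress(mic_l, mic_ext, over_subtract=1.0, floor=0.05):
    spec_l = np.fft.rfft(mic_l)
    noise_mag = np.abs(np.fft.rfft(mic_ext))      # noise estimate from MIC EXT
    clean_mag = np.abs(spec_l) - over_subtract * noise_mag
    clean_mag = np.maximum(clean_mag, floor * np.abs(spec_l))  # spectral floor
    phase = np.angle(spec_l)                      # reuse the MIC L phase
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(mic_l))

cleaned = suppress(np.random.randn(16000), np.random.randn(16000))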
- In accordance with further embodiments, improving audio quality may also comprise selectively applying a boosting or amplification function to certain types of audio signals (for example, music or speech), to components of an audio signal emanating from a certain source, and/or to components of an audio signal emanating from a particular direction, while not amplifying or actively suppressing other audio signal types or components. Such processing may occur responsive to the user initiating a particular mode of operation or may occur automatically in response to detecting the existence of certain predefined conditions.
- For example, in one embodiment, a user may activate a “forward only” mode in which audio signals emanating from in front of the user are boosted and signals emanating from other directions are not boosted or are actively attenuated. Such mode of operation may be desired when the user is engaging in conversation with a person that is directly in front of him/her. Additionally, such mode of operation may automatically be activated if it can be determined from sensor data obtained by the hearing assist device(s) worn by the user and/or by a portable electronic device carried by the user that the user is engaging in conversation with a person that is directly in front of him/her. In a like manner, a user may activate a “television” mode in which audio signals emanating from a television are boosted and signals emanating from other sources are not boosted or are actively attenuated. Additionally, such mode of operation may automatically be activated if it can be determined from sensor data obtained by the hearing assist device(s) worn by the user and/or by a portable electronic device carried by the user that the user is watching television.
- In accordance with further embodiments, the audio processing functionality may be designed, programmed or otherwise configured such that certain sounds or noises should never be suppressed. For example, the audio processing functionality may be configured to always pass certain sounds such as extremely elevated sounds, a telephone or doorbell ringing, the honking of a car horn, an alarm or siren sounding, repeated sounds, or the like, to ensure that the wearer is made aware of important events. Likewise, the audio processing functionality may utilize speech recognition to ensure that certain uttered words are passed to the wearer, such as the wearer's name, the word “help” or other words.
- In accordance with further embodiments, the types of audio that are boosted, passed or suppressed may be determined based on detecting prior and/or current activities of the user, inactivity of the user, time of day, or the like. For example, if it is determined from sensor data and from information derived therefrom that a user is sleeping, then all audio input may be suppressed with certain predefined exceptions. Likewise, certain sounds or verbal instructions may be injected at certain times, such as an alarm or morning wakeup music in the morning.
- For each of the modes of operation described above, the required audio processing may be performed either by a hearing assist device, such as hearing
assist device 1601, or by an external device, such asdevice 1603, with which hearingassist device 1601 is communicatively connected. By utilizing an external device, power, processing and storage resources of the hearing assist device may advantageously be conserved. - The foregoing describes the use of an external device or service to provide improved audio quality to a hearing assist device. It should be noted, however, that in a scenario in which a user is wearing two hearing assist devices that are capable of communicating with each other, one hearing assist device may be selected to perform any of the audio processing tasks described herein on behalf of the other. Such selection may be by design in that one hearing assist device is equipped with more audio processing capabilities than the other. Alternatively, such selection may be performed dynamically based on a variety of factors including the comparative battery levels of each hearing assist device, a processing load currently assigned to each hearing assist device, or the like. Any audio that is processed by a first hearing assist device on behalf of a second hearing assist device may originate from one or more microphones of the first hearing assist device, from one or more microphones of the second hearing assist device, or from one or more microphones of portable electronic device that is carried by or otherwise locally accessible to a wearer of the first and second hearing assist devices.
- To help further illustrate the foregoing concepts,
FIG. 20 depicts a flowchart 2000 of a method for providing external operational support to a hearing assist device worn by a user, such as hearing assist device 1601. As shown in FIG. 20, the method of flowchart 2000 begins at step 2002, in which a communication pathway is established to the hearing assist device. At step 2004, an audio signal obtained by the hearing assist device is received via the communication pathway. At step 2006, the audio signal is processed to obtain processing results. At step 2008, the processing results are transmitted to the hearing assist device via the communication pathway.
- Depending upon the implementation, each of the establishing, receiving, processing and transmitting steps may be performed by one of a second hearing assist device worn by the user, a portable electronic device carried by or otherwise accessible to the user, or a device or service that is capable of communicating with the hearing assist device via a portable electronic device carried by or otherwise accessible to the user. As noted above,
device 1603 may represent either a portable electronic device carried by or otherwise accessible to the user or a device or service that is capable of communicating with the hearing assist device via a portable electronic device carried by or otherwise accessible to the user.
- In accordance with certain embodiments,
step 2002 of flowchart 2000 may comprise establishing a communication link with the hearing assist device using one of NFC, BTLE technology, WPT technology, telecoil, or skin-based communication technology.
- In one embodiment,
step 2006 of flowchart 2000 comprises processing the audio signal to generate an enhanced audio signal having a desired frequency response associated with the user, and step 2008 comprises transmitting the enhanced audio signal to the hearing assist device via the communication pathway for playback thereby.
- In another embodiment,
step 2006 of flowchart 2000 comprises processing the audio signal to generate an enhanced audio signal having a desired spatial signaling characteristic associated with the user, and step 2008 comprises transmitting the enhanced audio signal to the hearing assist device via the communication pathway for playback thereby.
- In a further embodiment,
step 2006 of flowchart 2000 comprises applying noise suppression to the audio signal to generate a noise-suppressed audio signal, and step 2008 comprises transmitting the noise-suppressed audio signal to the hearing assist device via the communication pathway for playback thereby. In further accordance with such an embodiment, applying noise suppression to the audio signal may comprise processing the audio signal and at least one additional audio signal obtained by a portable electronic device carried by or otherwise accessible to the user.
- In a still further embodiment,
step 2006 of flowchart 2000 comprises applying speech recognition to the audio signal to identify one or more recognized words. -
FIG. 21 depicts a flowchart 2100 that illustrates steps that may be performed in addition to those shown in flowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearing assist device 1601. As shown in FIG. 21, the first additional step is step 2102, which comprises receiving a second audio signal obtained by a portable electronic device that is carried by or otherwise accessible to the user. At step 2104, the second audio signal is processed to obtain processing results. At step 2106, the processing results are transmitted to the portable electronic device. These additional steps encompass the scenario wherein at least the audio capturing and playback tasks are offloaded from the hearing assist device to the portable electronic device. -
FIG. 22 depicts a flowchart 2200 that illustrates steps that may be performed in addition to those shown in flowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearing assist device 1601. As shown in FIG. 22, the first additional step is step 2202, which comprises receiving a second audio signal obtained by a portable electronic device that is carried by or otherwise accessible to the user. At step 2204, the second audio signal is processed to obtain processing results. At step 2206, the processing results are transmitted to the hearing assist device. These additional steps encompass the scenario wherein audio capturing tasks are offloaded from the hearing assist device to the portable electronic device, but the audio playback task is retained by the hearing assist device. -
FIG. 23 depicts aflowchart 2300 that illustrates steps that may be performed in addition to those shown inflowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearingassist device 1601. As shown inFIG. 23 , the first additional step isstep 2302, which comprises receiving a second audio signal obtained by the hearing assist device. Atstep 2304, the second audio signal is processed to obtain processing results. Atstep 2306, the processing results are transmitted to a portable electronic device that is carried by or otherwise accessible to the user. These additional steps encompass the scenario wherein audio capturing tasks are retained by the hearing assist device while audio playback tasks are offloaded to the portable electronic device. - In accordance with further embodiments, an audio signal received by one or more microphones of a hearing assist device may be suppressed or blocked while a substitute audio input signal may be delivered to the wearer. For example, a language translation feature may be implemented in which an audio signal received by one or more microphones of a hearing assist device is transmitted to an external device or service. The external device or service applies a combination of speech recognition and translation thereto to synthesize a substitute audio signal. The substitute audio signal comprises a translated version of the speech included in the original audio signal. The substitute audio signal is then transmitted back to the hearing assist device for playback thereby. While this is occurring, the hearing assist device utilizes active filtering to suppress the original audio signal or blocks it entirely, so that the wearer can clearly hear the substitute audio signal being played back through a speaker of the hearing assist device.
- As another example, an audio signal generated by a television, a DVD player, a compact disc (CD) player, a set top box, a portable media player, a handheld gaming device, or other entertainment device may be routed to a hearing assist device worn by a user for playback thereby. Such entertainment devices may also include smart phones, tablet computers, and other computing devices capable of running entertainment applications. While the user is listening to the audio being generated by the entertainment device, the hearing assist device may operate to suppress ambient background noise using an active filtering function, thereby providing the user with an improved listening experience. The delivery of the audio signal from the entertainment device to the hearing assist device and suppression of ambient background noise may occur in response to the establishment of a communication link between the hearing assist device and the entertainment device, or in response to other detectable factors, such as the hearing assist device being within a certain range of the entertainment device or the like. Conversely, the delivery of the audio signal from the entertainment device to the hearing assist device and suppression of ambient background noise may be discontinued in response to the breaking of a communication link between the hearing assist device and the entertainment device, or in response to other detectable factors, such as the hearing assist device passing outside of a certain range of the entertainment device or the like.
- For safety reasons as well as certain practical reasons, there may be certain sounds or noises should never be suppressed. Accordingly, the functionality described above for suppressing ambient audio in favor of a substitute audio stream could be configured to always pass certain sounds such as extremely elevated sounds, a telephone or doorbell ringing, the honking of a car horn, an alarm or siren sounding, repeated sounds, or the like, to ensure that the wearer is made aware of important events. Likewise, such functionality may utilize speech recognition to ensure that certain uttered words are always passed to the wearer, such as the wearer's name, the word “help” or other words. The functionality that monitors for such sounds and words may be present in the hearing assist device or in a portable electronic device that is communicatively connected thereto. When such sounds and words are passed to the hearing assist device, the substitute audio stream may be paused or discontinued (for example, a song the wearer was listening to may be paused or discontinued or a movie the wearer was viewing may be paused or discontinued). Furthermore, when such sounds and words are passed to the hearing assist device, the suppression of ambient noise may also be discontinued.
- Generally speaking, a hearing assist device in accordance with an embodiment can receive any number of audio signals and selectively pass one or a mixture of some or all of the audio signals for playback to a wearer thereof. Additionally, a hearing assist device in accordance with such an embodiment can selectively amplify or suppress any one of the aforementioned audio signals. This is illustrated by the block diagram of
FIG. 24, which shows an audio processing module 2400 that may be implemented in a hearing assist device in accordance with an embodiment.
- As shown in
FIG. 24, audio processing module 2400 is capable of receiving at least four different audio signals. These include an audio signal captured by a microphone of the hearing assist device (denoted MIC), an audio signal received via an NFC interface of the hearing assist device (denoted NFC), an audio signal received via a BLE interface of the hearing assist device (denoted BLE), and an audio signal received via a skin-based communication interface of the hearing assist device (denoted SKIN). Audio processing module 2400 is configured to process these audio signals to generate an output audio signal for playback via a speaker 2412.
- As shown in
FIG. 24, each of the MIC, NFC, BLE and SKIN signals is amplified by a corresponding amplifier. Such signals may also be converted from analog to digital form by A/D converters (not shown in FIG. 24). Each amplified signal is then passed to a corresponding multiplier that applies a scaling function thereto and then to a corresponding switch that selectively passes or blocks it. The signals passed by the switches are combined, and the combined signal is provided to an amplifier 2410, which amplifies it to produce the output audio signal that will be played back by speaker 2412. The output audio signal may also be converted from digital to analog form by a D/A converter (not shown in FIG. 24) prior to playback.
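The per-source gain, scale, and switch structure just described can be captured in a few lines of code. In the sketch below, all gains, scale factors, and switch settings are placeholders, and the stages are named generically because the individual element numerals of FIG. 24 were not preserved in this text.

import numpy as np

def mix_sources(sources, gains, scales, switches, out_gain=1.0):
    # sources: dict mapping stream name (MIC, NFC, BLE, SKIN) to samples.
    total = np.zeros(max(len(s) for s in sources.values()))
    for name, samples in sources.items():
        if not switches.get(name, False):
            continue                                  # switch open: block stream
        total += gains[name] * scales[name] * np.asarray(samples)
    return out_gain * total                           # final amplifier -> speaker

streams = {k: np.zeros(160) for k in ("MIC", "NFC", "BLE", "SKIN")}
out = mix_sources(streams,
                  gains={k: 1.0 for k in streams},
                  scales={"MIC": 1.0, "NFC": 0.0, "BLE": 0.8, "SKIN": 0.5},
                  switches={"MIC": True, "BLE": True})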
- In further embodiments, audio processing module 2400 may include additional logic that can apply active filtering, noise suppression, speech intelligibility enhancement, or any of a variety of audio signal processing functions to any of the audio signals received by the hearing assist device. Such functionality can be used to emphasize certain sounds, for example. Additionally, audio processing module 2400 may also include an output path by which the MIC signal can be passed to an external device for remote processing thereof. Such remotely-processed signal may then be returned via any of the NFC, BLE or skin-based communication interfaces discussed above. -
FIG. 24 thus illustrates that different audio streams may be picked up by the same hearing assist device. Whether a given audio stream is exposed or not may depend on the circumstances, which can change from time to time. Consequently, each audio stream is delivered or filtered at varying dB intensities with prescribed equalization, as managed by the hearing assist device or any one or more of the devices or services to which the hearing assist device may be communicatively connected.
- In an embodiment, a portable electronic device (such as portable electronic device 1505) carried by or otherwise locally accessible to a wearer of a hearing assist device (such as hearing
assist device 1501 or 1503) is configured to detect when the hearing assist device is missing from the wearer's ear or discharged. In such a scenario, the portable electronic device responds by entering a hearing assist mode in which it captures ambient audio and processes it in accordance with a prescription associated with the wearer. As discussed above, such prescription may specify, for example, a desired frequency response or other desired characteristics of audio to be played back to the wearer. Such hearing assist mode may also be manually triggered by the wearer through interaction with a user interface of the portable electronic device. In an embodiment in which the portable electronic device comprises a telephone, the foregoing hearing assist mode may also be used to equalize and amplify incoming telephone audio. The functionality of the hearing assist mode may be included in an application that can be downloaded or otherwise installed on the portable electronic device.
- In an embodiment, the portable electronic device may be configured to use one or more sensors (for example, a camera and/or microphone) to determine who the current user of the portable electronic device and to automatically select the appropriate prescription for that user when entering hearing assist mode. Alternatively, the user may interact with a user interface of the portable electronic device to select an appropriate volume level and prescription.
- In accordance with further embodiments, the hearing assist device may be capable of issuing a warning message to the wearer thereof when it appears that the battery level of the hearing assist device is low. In response to receiving such warning message, the wearer may utilize the portable electronic device to perform a recharging operation by bringing the portable electronic device within a range of the hearing assist device that is suitable for wirelessly transferring power thereto as was previously described. Additionally or alternatively, the wearer may activate a mode of operation in which certain operations normally performed by the hearing assist device are performed instead by the portable electronic device or by a device or service that is communicatively connected to the portable electronic device.
- In an embodiment, a personal electronic device (such as personal electronic device 1505) may be used to perform a hearing test on a wearer of hearing assist device (such as hearing
assist device 1501 or 1503). The hearing test may involve causing the hearing assist device to play back sounds having certain frequencies at certain volumes and soliciting feedback from the wearer regarding whether such sounds were heard or not. Still other types of hearing tests may be performed. For example, a hearing test designed to determine a head transfer function useful in achieving desired spatial signaling for a particular user may also be administered. The test results may be analyzed to generate a personalized prescription for the wearer. Sensors within the hearing assist device may be used to measure distance to the ear drum or other factors that may influence test results, so that such factors can be accounted for in the analysis. The personalized prescription may then be downloaded or otherwise transmitted to the hearing assist device for implementation thereby. Such personalized prescription may be formatted in a standardized manner such that it may be used by a variety of hearing assist devices or audio reproduction systems.
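The pure-tone portion of such a test can be outlined as follows. Playback and the wearer's yes/no feedback are stubbed with placeholders, and the frequency and level grids are assumptions; an actual test would be clinically calibrated.

import numpy as np

def tone(freq_hz, seconds=0.5, rate=16000, amplitude=1.0):
    t = np.arange(int(seconds * rate)) / rate
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

def run_hearing_test(frequencies, levels_db, heard):
    # heard(freq, level) stands in for the wearer's yes/no feedback.
    thresholds = {}
    for f in frequencies:
        for level in sorted(levels_db):               # quietest level first
            amplitude = 10.0 ** (level / 20.0)        # dB relative to full scale
            _ = tone(f, amplitude=amplitude)          # would be played to the ear
            if heard(f, level):
                thresholds[f] = level                 # softest audible level
                break
    return thresholds

# Placeholder feedback: pretend everything at or above -40 dBFS is heard.
results = run_hearing_test([500, 1000, 2000, 4000],
                           [-60, -40, -20], lambda f, lvl: lvl >= -40)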
- The hearing assist devices described herein may comprise devices such as those shown in
FIGS. 2-6 and 15. However, it is noted that the hearing assist devices described herein may comprise a part of any structure or article that may cover an ear of a user or that may be proximally located to an ear of a user. For example, the hearing assist devices described herein may comprise a part of a headset, a pair of glasses, a visor, or a helmet worn by a user or may be designed to be connected or tethered to such headset, pair of glasses, visor, or helmet. - While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the relevant art(s) that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/623,435 US20130343584A1 (en) | 2012-06-20 | 2012-09-20 | Hearing assist device with external operational support |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261662217P | 2012-06-20 | 2012-06-20 | |
US13/623,435 US20130343584A1 (en) | 2012-06-20 | 2012-09-20 | Hearing assist device with external operational support |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130343584A1 (en) | 2013-12-26
Family ID=49774491
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/594,489 Expired - Fee Related US9185501B2 (en) | 2012-06-20 | 2012-08-24 | Container-located information transfer module |
US13/623,435 Abandoned US20130343584A1 (en) | 2012-06-20 | 2012-09-20 | Hearing assist device with external operational support |
US13/623,545 Abandoned US20130343585A1 (en) | 2012-06-20 | 2012-09-20 | Multisensor hearing assist device for health |
US14/879,765 Active US9730005B2 (en) | 2012-06-20 | 2015-10-09 | Container-located information transfer module |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/594,489 Expired - Fee Related US9185501B2 (en) | 2012-06-20 | 2012-08-24 | Container-located information transfer module |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/623,545 Abandoned US20130343585A1 (en) | 2012-06-20 | 2012-09-20 | Multisensor hearing assist device for health |
US14/879,765 Active US9730005B2 (en) | 2012-06-20 | 2015-10-09 | Container-located information transfer module |
Country Status (1)
Country | Link |
---|---|
US (4) | US9185501B2 (en) |
Cited By (161)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140273824A1 (en) * | 2013-03-15 | 2014-09-18 | Medtronic, Inc. | Systems, apparatus and methods facilitating secure pairing of an implantable device with a remote device using near field communication |
US20140270287A1 (en) * | 2013-03-15 | 2014-09-18 | Qualcomm Incorporated | Bluetooth hearing aids enabled during voice activity on a mobile phone |
US20140348345A1 (en) * | 2013-05-23 | 2014-11-27 | Knowles Electronics, Llc | Vad detection microphone and method of operating the same |
US20150058001A1 (en) * | 2013-05-23 | 2015-02-26 | Knowles Electronics, Llc | Microphone and Corresponding Digital Interface |
US20150079900A1 (en) * | 2013-09-18 | 2015-03-19 | Plantronics, Inc. | Audio Delivery System for Headsets |
US20150172832A1 (en) * | 2013-12-17 | 2015-06-18 | United Sciences, Llc | Iidentity confirmation using wearable computerized earpieces and related methods |
US20150172827A1 (en) * | 2013-12-17 | 2015-06-18 | United Sciences, Llc | Identity confirmation using wearable computerized earpieces and related methods |
US20150172828A1 (en) * | 2013-12-17 | 2015-06-18 | United Sciences, Llc | Iidentity confirmation using wearable computerized earpieces and related methods |
CN104754467A (en) * | 2013-12-30 | 2015-07-01 | Gn瑞声达A/S | Hearing device with position data and method of operating a hearing device |
EP2890156A1 (en) * | 2013-12-30 | 2015-07-01 | GN Resound A/S | Hearing device with position data and method of operating a hearing device |
JP2015144430A (en) * | 2013-12-30 | 2015-08-06 | Gn Resound A/S | Hearing device using position data, audio system and related method |
EP2908550A1 (en) * | 2014-02-13 | 2015-08-19 | Oticon A/s | A hearing aid device comprising a sensor member |
US9185501B2 (en) | 2012-06-20 | 2015-11-10 | Broadcom Corporation | Container-located information transfer module |
US9191755B2 (en) * | 2012-12-14 | 2015-11-17 | Starkey Laboratories, Inc. | Spatial enhancement mode for hearing aids |
US20150351143A1 (en) * | 2014-05-30 | 2015-12-03 | Apple Inc. | Seamless connectivity between hearing aid and multiple devices |
US20160038738A1 (en) * | 2014-08-07 | 2016-02-11 | Oticon A/S | Hearing assistance system with improved signal processing comprising an implanted part |
US9438300B1 (en) * | 2015-03-10 | 2016-09-06 | Invensense, Inc. | Sensor fusion for antenna tuning |
US9478234B1 (en) | 2015-07-13 | 2016-10-25 | Knowles Electronics, Llc | Microphone apparatus and method with catch-up buffer |
US9502028B2 (en) | 2013-10-18 | 2016-11-22 | Knowles Electronics, Llc | Acoustic activity detection apparatus and method |
US20170094401A1 (en) * | 2014-05-20 | 2017-03-30 | Bugatone Ltd. | Aural measurements from earphone output speakers |
US20170095202A1 (en) * | 2015-10-02 | 2017-04-06 | Earlens Corporation | Drug delivery customized ear canal apparatus |
EP3157270A1 (en) * | 2015-10-14 | 2017-04-19 | Sonion Nederland B.V. | Hearing device with vibration sensitive transducer |
WO2017068004A1 (en) * | 2015-10-20 | 2017-04-27 | Bragi GmbH | Wearable earpiece voice command control system and method |
EP3179741A1 (en) * | 2015-12-08 | 2017-06-14 | GN ReSound A/S | Hearing aid with power management |
US9711166B2 (en) | 2013-05-23 | 2017-07-18 | Knowles Electronics, Llc | Decimation synchronization in a microphone |
US20170215010A1 (en) * | 2016-01-25 | 2017-07-27 | Sean Lineaweaver | Device Monitoring for Program Switching |
WO2017157409A1 (en) * | 2016-03-14 | 2017-09-21 | Sonova Ag | Wireless body worn personal device with loss detection functionality |
US9779752B2 (en) | 2014-10-31 | 2017-10-03 | At&T Intellectual Property I, L.P. | Acoustic enhancement by leveraging metadata to mitigate the impact of noisy environments |
US9830080B2 (en) | 2015-01-21 | 2017-11-28 | Knowles Electronics, Llc | Low power voice trigger for acoustic apparatus and method |
US9830913B2 (en) | 2013-10-29 | 2017-11-28 | Knowles Electronics, Llc | VAD detection apparatus and method of operation the same |
US9924276B2 (en) | 2014-11-26 | 2018-03-20 | Earlens Corporation | Adjustable venting for hearing instruments |
US9930458B2 (en) | 2014-07-14 | 2018-03-27 | Earlens Corporation | Sliding bias and peak limiting for optical hearing devices |
US9937346B2 (en) | 2016-04-26 | 2018-04-10 | Cochlear Limited | Downshifting of output in a sense prosthesis |
US9942051B1 (en) | 2013-03-15 | 2018-04-10 | Poltorak Technologies Llc | System and method for secure relayed communications from an implantable medical device |
US9949035B2 (en) | 2008-09-22 | 2018-04-17 | Earlens Corporation | Transducer devices and methods for hearing |
US9949039B2 (en) | 2005-05-03 | 2018-04-17 | Earlens Corporation | Hearing system having improved high frequency response |
US20180113673A1 (en) * | 2016-10-20 | 2018-04-26 | Qualcomm Incorporated | Systems and methods for in-ear control of remote devices |
US9961454B2 (en) | 2008-06-17 | 2018-05-01 | Earlens Corporation | Optical electro-mechanical hearing devices with separate power and signal components |
EP3213529A4 (en) * | 2014-10-30 | 2018-06-06 | Smartear Inc. | Smart flexible interactive earplug |
US10034103B2 (en) | 2014-03-18 | 2018-07-24 | Earlens Corporation | High fidelity and reduced feedback contact hearing apparatus and methods |
EP3358812A1 (en) * | 2017-02-03 | 2018-08-08 | Widex A/S | Communication channels between a personal communication device and at least one head-worn device |
US10063979B2 (en) | 2015-12-08 | 2018-08-28 | Gn Hearing A/S | Hearing aid with power management |
US10121472B2 (en) | 2015-02-13 | 2018-11-06 | Knowles Electronics, Llc | Audio buffer catch-up apparatus and method with two microphones |
US10154352B2 (en) | 2007-10-12 | 2018-12-11 | Earlens Corporation | Multifunction system and method for integrated hearing and communication with noise cancellation and feedback management |
US10178483B2 (en) | 2015-12-30 | 2019-01-08 | Earlens Corporation | Light based hearing systems, apparatus, and methods |
EP3439327A1 (en) * | 2017-07-31 | 2019-02-06 | Starkey Laboratories, Inc. | Ear-worn electronic device for conducting and monitoring mental exercises |
US10284964B2 (en) | 2010-12-20 | 2019-05-07 | Earlens Corporation | Anatomically customized ear canal hearing apparatus |
EP3493556A1 (en) * | 2017-12-01 | 2019-06-05 | Semiconductor Components Industries, LLC | All-in-one method for wireless connectivity and contactless battery charging of small wearables |
EP3503589A1 (en) * | 2015-06-22 | 2019-06-26 | GN Hearing A/S | A hearing aid having combined antennas |
US10339960B2 (en) | 2016-10-13 | 2019-07-02 | International Business Machines Corporation | Personal device for hearing degradation monitoring |
CN109996164A (en) * | 2017-12-29 | 2019-07-09 | 大北欧听力公司 | Hearing instrument including parasitic battery antenna element |
EP3477968A3 (en) * | 2017-10-31 | 2019-07-31 | Starkey Laboratories, Inc. | Hearing device including a sensor and a method of forming same |
US10396743B2 (en) | 2015-05-01 | 2019-08-27 | Nxp B.V. | Frequency-domain dynamic range control of signals |
WO2019169142A1 (en) * | 2018-02-28 | 2019-09-06 | Starkey Laboratories, Inc. | Health monitoring with ear-wearable devices and accessory devices |
DE102018204260A1 (en) | 2018-03-20 | 2019-09-26 | Zf Friedrichshafen Ag | Evaluation device, apparatus, method and computer program product for a hearing-impaired person for the environmental perception of a sound event |
US10433077B2 (en) * | 2015-09-02 | 2019-10-01 | Sonion Nederland B.V. | Augmented hearing device |
US10492010B2 (en) | 2015-12-30 | 2019-11-26 | Earlens Corporations | Damping in contact hearing systems |
US20190387330A1 (en) * | 2018-06-18 | 2019-12-19 | Sivantos Pte. Ltd. | Method for controlling the transmission of data between at least one hearing aid and a peripheral device of a hearing aid system and also a hearing aid |
US10524068B2 (en) | 2016-01-07 | 2019-12-31 | Sonova Ag | Hearing assistance device transducers and hearing assistance devices with same |
WO2020040638A1 (en) | 2018-08-23 | 2020-02-27 | Audus B.V. | Method, system, and hearing device for enhancing an environmental audio signal of such a hearing device |
EP3684079A1 (en) * | 2019-03-29 | 2020-07-22 | Sonova AG | Hearing device for orientation estimation and method of its operation |
WO2020124022A3 (en) * | 2018-12-15 | 2020-07-23 | Starkey Laboratories, Inc. | Hearing assistance system with enhanced fall detection features |
WO2020160288A1 (en) * | 2019-02-01 | 2020-08-06 | Starkey Laboratories, Inc. | Efficient wellness measurement in ear-wearable devices |
EP3657816A4 (en) * | 2017-07-21 | 2020-08-19 | Sony Corporation | Sound output device |
US20200267481A1 (en) * | 2015-08-24 | 2020-08-20 | Ivana Popovac | Prosthesis functionality control and data presentation |
WO2020172580A1 (en) * | 2019-02-22 | 2020-08-27 | Starkey Laboratories, Inc. | Sharing of health-related data based on data exported by ear-wearable device |
US10832535B1 (en) * | 2019-09-26 | 2020-11-10 | Bose Corporation | Sleepbuds for parents |
WO2020232121A1 (en) * | 2019-05-13 | 2020-11-19 | Starkey Laboratories, Inc. | Ear-worn devices for communication with medical devices |
CN112218221A (en) * | 2020-10-21 | 2021-01-12 | 歌尔智能科技有限公司 | Hearing aid adapter and control method |
US10911878B2 (en) | 2018-12-21 | 2021-02-02 | Starkey Laboratories, Inc. | Modularization of components of an ear-wearable device |
US10939216B2 (en) | 2018-02-28 | 2021-03-02 | Starkey Laboratories, Inc. | Health monitoring with ear-wearable devices and accessory devices |
US20210105566A1 (en) * | 2018-04-11 | 2021-04-08 | Gn Resound A/S | Hearing aid housing with an integrated antenna |
US10978090B2 (en) | 2013-02-07 | 2021-04-13 | Apple Inc. | Voice trigger for a digital assistant |
EP3806493A1 (en) * | 2019-10-11 | 2021-04-14 | GN Hearing A/S | A hearing device having a magnetic induction coil |
WO2021069715A1 (en) * | 2019-10-09 | 2021-04-15 | Jacoti Bv | System of processing devices to perform an algorithm |
US10984798B2 (en) | 2018-06-01 | 2021-04-20 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10993027B2 (en) | 2015-11-23 | 2021-04-27 | Goodix Technology (Hk) Company Limited | Audio system controller based on operating condition of amplifier |
US11011182B2 (en) * | 2019-03-25 | 2021-05-18 | Nxp B.V. | Audio processing system for speech enhancement |
US11012793B2 (en) | 2017-08-25 | 2021-05-18 | Starkey Laboratories, Inc. | Cognitive benefit measure related to hearing-assistance device use |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
WO2021127228A1 (en) * | 2019-12-17 | 2021-06-24 | Starkey Laboratories, Inc. | Hearing assistance systems and methods for monitoring emotional state |
US11070949B2 (en) | 2015-05-27 | 2021-07-20 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display |
US11076243B2 (en) * | 2019-06-20 | 2021-07-27 | Samsung Electro-Mechanics Co., Ltd. | Terminal with hearing aid setting, and setting method for hearing aid |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US11102594B2 (en) | 2016-09-09 | 2021-08-24 | Earlens Corporation | Contact hearing systems, apparatus and methods |
US11115519B2 (en) * | 2014-11-11 | 2021-09-07 | K/S Himpp | Subscription-based wireless service for a hearing device |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US11126400B2 (en) | 2015-09-08 | 2021-09-21 | Apple Inc. | Zero latency digital assistant |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
EP3890344A1 (en) * | 2020-03-30 | 2021-10-06 | Sonova AG | Hearing devices and methods for implementing automatic sensor-based on/off control of a hearing device |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US11166114B2 (en) | 2016-11-15 | 2021-11-02 | Earlens Corporation | Impression procedure |
US11172315B2 (en) * | 2015-06-22 | 2021-11-09 | Gn Hearing A/S | Hearing aid having combined antennas |
US11169616B2 (en) | 2018-05-07 | 2021-11-09 | Apple Inc. | Raise to speak |
US11212626B2 (en) | 2018-04-09 | 2021-12-28 | Earlens Corporation | Dynamic filter |
US11213688B2 (en) | 2019-03-30 | 2022-01-04 | Advanced Bionics Ag | Utilization of a non-wearable coil to remotely power a cochlear implant from a distance |
US11217251B2 (en) | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11259130B2 (en) * | 2018-12-14 | 2022-02-22 | Widex A/S | Hearing assistive system with sensors |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US11265665B2 (en) | 2014-08-22 | 2022-03-01 | K/S Himpp | Wireless hearing device interactive with medical devices |
US11321116B2 (en) | 2012-05-15 | 2022-05-03 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US20220148599A1 (en) * | 2019-01-05 | 2022-05-12 | Starkey Laboratories, Inc. | Audio signal processing for automatic transcription using ear-wearable device |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11350226B2 (en) | 2015-12-30 | 2022-05-31 | Earlens Corporation | Charging protocol for rechargeable hearing systems |
US11380310B2 (en) | 2017-05-12 | 2022-07-05 | Apple Inc. | Low-latency intelligent automated assistant |
US11388291B2 (en) | 2013-03-14 | 2022-07-12 | Apple Inc. | System and method for processing voicemail |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US11431642B2 (en) | 2018-06-01 | 2022-08-30 | Apple Inc. | Variable latency device coordination |
US11467802B2 (en) | 2017-05-11 | 2022-10-11 | Apple Inc. | Maintaining privacy of personal information |
US11490208B2 (en) | 2016-12-09 | 2022-11-01 | The Research Foundation For The State University Of New York | Fiber microphone |
US11496827B2 (en) * | 2015-12-21 | 2022-11-08 | Bragi GmbH | Microphone natural speech capture voice dictation system and method |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US11516537B2 (en) | 2014-06-30 | 2022-11-29 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11516603B2 (en) | 2018-03-07 | 2022-11-29 | Earlens Corporation | Contact hearing device and retention structure materials |
US11527265B2 (en) * | 2018-11-02 | 2022-12-13 | BriefCam Ltd. | Method and system for automatic object-aware video or audio redaction |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11532306B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Detecting a trigger of a digital assistant |
US11570559B2 (en) | 2017-12-29 | 2023-01-31 | Gn Hearing A/S | Hearing instrument comprising a parasitic battery antenna element |
US11580990B2 (en) | 2017-05-12 | 2023-02-14 | Apple Inc. | User-specific acoustic models |
US11599331B2 (en) | 2017-05-11 | 2023-03-07 | Apple Inc. | Maintaining privacy of personal information |
US11638563B2 (en) | 2018-12-27 | 2023-05-02 | Starkey Laboratories, Inc. | Predictive fall event management system and method of using same |
US11653158B2 (en) | 2020-07-27 | 2023-05-16 | Gn Hearing A/S | Head-wearable hearing instrument with improved co-existence of multiple communication interfaces |
US11657813B2 (en) | 2019-05-31 | 2023-05-23 | Apple Inc. | Voice identification in digital assistant systems |
US11670289B2 (en) | 2014-05-30 | 2023-06-06 | Apple Inc. | Multi-command single utterance input method |
US11671920B2 (en) | 2007-04-03 | 2023-06-06 | Apple Inc. | Method and system for operating a multifunction portable electronic device using voice-activation |
US11675829B2 (en) | 2017-05-16 | 2023-06-13 | Apple Inc. | Intelligent automated assistant for media exploration |
US11675491B2 (en) | 2019-05-06 | 2023-06-13 | Apple Inc. | User configurable task triggers |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
US11710482B2 (en) | 2018-03-26 | 2023-07-25 | Apple Inc. | Natural assistant interaction |
US11716580B2 (en) | 2018-02-28 | 2023-08-01 | Starkey Laboratories, Inc. | Health monitoring with ear-wearable devices and accessory devices |
US20230247373A1 (en) * | 2019-10-14 | 2023-08-03 | Starkey Laboratories, Inc. | Hearing assistance system with automatic hearing loop memory |
US11727219B2 (en) | 2013-06-09 | 2023-08-15 | Apple Inc. | System and method for inferring user intent from speech inputs |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US11783815B2 (en) | 2019-03-18 | 2023-10-10 | Apple Inc. | Multimodality in digital assistant systems |
US11786694B2 (en) | 2019-05-24 | 2023-10-17 | NeuroLight, Inc. | Device, method, and app for facilitating sleep |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11798547B2 (en) | 2013-03-15 | 2023-10-24 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US11809783B2 (en) | 2016-06-11 | 2023-11-07 | Apple Inc. | Intelligent device arbitration and control |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11853647B2 (en) | 2015-12-23 | 2023-12-26 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US11854539B2 (en) | 2018-05-07 | 2023-12-26 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11869505B2 (en) | 2019-01-05 | 2024-01-09 | Starkey Laboratories, Inc. | Local artificial intelligence assistant system with ear-wearable device |
US11888791B2 (en) | 2019-05-21 | 2024-01-30 | Apple Inc. | Providing message response suggestions |
US11886805B2 (en) | 2015-11-09 | 2024-01-30 | Apple Inc. | Unconventional virtual assistant interactions |
US11893992B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Multi-modal inputs for voice commands |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US11947873B2 (en) | 2015-06-29 | 2024-04-02 | Apple Inc. | Virtual assistant for media playback |
US12001933B2 (en) | 2015-05-15 | 2024-06-04 | Apple Inc. | Virtual assistant in a communication session |
US12010262B2 (en) | 2013-08-06 | 2024-06-11 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US12014118B2 (en) | 2017-05-15 | 2024-06-18 | Apple Inc. | Multi-modal interfaces having selection disambiguation and text modification capability |
US12051413B2 (en) | 2015-09-30 | 2024-07-30 | Apple Inc. | Intelligent device identification |
US12067985B2 (en) | 2018-06-01 | 2024-08-20 | Apple Inc. | Virtual assistant operations in multi-device environments |
US12073147B2 (en) | 2013-06-09 | 2024-08-27 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US12087308B2 (en) | 2010-01-18 | 2024-09-10 | Apple Inc. | Intelligent automated assistant |
US12095940B2 (en) | 2019-07-19 | 2024-09-17 | Starkey Laboratories, Inc. | Hearing devices using proxy devices for emergency communication |
US12136419B2 (en) | 2023-08-31 | 2024-11-05 | Apple Inc. | Multimodality in digital assistant systems |
Families Citing this family (137)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
USD288512S (en) * | 1985-04-24 | 1987-03-03 | Thermo-Serv, Inc. | Wine glass |
US8621724B2 (en) | 2008-06-27 | 2014-01-07 | Snik Llc | Headset cord holder |
US10652661B2 (en) | 2008-06-27 | 2020-05-12 | Snik, LLC | Headset cord holder |
US11478190B2 (en) | 2008-10-29 | 2022-10-25 | Flashback Technologies, Inc. | Noninvasive hydration monitoring |
US11395594B2 (en) | 2008-10-29 | 2022-07-26 | Flashback Technologies, Inc. | Noninvasive monitoring for fluid resuscitation |
US11857293B2 (en) | 2008-10-29 | 2024-01-02 | Flashback Technologies, Inc. | Rapid detection of bleeding before, during, and after fluid resuscitation |
US11382571B2 (en) | 2008-10-29 | 2022-07-12 | Flashback Technologies, Inc. | Noninvasive predictive and/or estimative blood pressure monitoring |
US11406269B2 (en) | 2008-10-29 | 2022-08-09 | Flashback Technologies, Inc. | Rapid detection of bleeding following injury |
US11395634B2 (en) | 2008-10-29 | 2022-07-26 | Flashback Technologies, Inc. | Estimating physiological states based on changes in CRI |
US9473859B2 (en) * | 2008-12-31 | 2016-10-18 | Starkey Laboratories, Inc. | Systems and methods of telecommunication for bilateral hearing instruments |
CA2871608C (en) | 2011-07-22 | 2020-07-07 | Flashback Technologies, Inc. | Hemodynamic reserve monitor and hemodialysis control |
KR101909128B1 (en) * | 2012-01-13 | 2018-10-17 | Samsung Electronics Co., Ltd. | Multimedia playing apparatus for outputting modulated sound according to hearing characteristic of a user and method for performing thereof |
US10524038B2 (en) | 2012-02-22 | 2019-12-31 | Snik Llc | Magnetic earphones holder |
US9769556B2 (en) | 2012-02-22 | 2017-09-19 | Snik Llc | Magnetic earphones holder including receiving external ambient audio and transmitting to the earphones |
JP6195617B2 (en) | 2012-07-05 | 2017-09-13 | P.C.O.A. Devices Ltd. | Drug dispenser |
US10992185B2 (en) | 2012-07-06 | 2021-04-27 | Energous Corporation | Systems and methods of using electromagnetic waves to wirelessly deliver power to game controllers |
US10992187B2 (en) | 2012-07-06 | 2021-04-27 | Energous Corporation | System and methods of using electromagnetic waves to wirelessly deliver power to electronic devices |
US12057715B2 (en) | 2012-07-06 | 2024-08-06 | Energous Corporation | Systems and methods of wirelessly delivering power to a wireless-power receiver device in response to a change of orientation of the wireless-power receiver device |
US10965164B2 (en) | 2012-07-06 | 2021-03-30 | Energous Corporation | Systems and methods of wirelessly delivering power to a receiver device |
US11502551B2 (en) | 2012-07-06 | 2022-11-15 | Energous Corporation | Wirelessly charging multiple wireless-power receivers using different subsets of an antenna array to focus energy at different locations |
US9135915B1 (en) * | 2012-07-26 | 2015-09-15 | Google Inc. | Augmenting speech segmentation and recognition using head-mounted vibration and/or motion sensors |
ES2744276T3 (en) | 2012-07-30 | 2020-02-24 | DosentRX Ltd | Container for containing and dispensing solid medicinal pills |
EP2744224A1 (en) * | 2012-12-14 | 2014-06-18 | Oticon A/s | Configurable hearing instrument |
US9619626B2 (en) * | 2013-01-08 | 2017-04-11 | Samsung Electronics Co., Ltd | Method and apparatus for identifying exercise information of user |
US9788128B2 (en) * | 2013-06-14 | 2017-10-10 | Gn Hearing A/S | Hearing instrument with off-line speech messages |
EP3917167A3 (en) * | 2013-06-14 | 2022-03-09 | Oticon A/s | A hearing assistance device with brain computer interface |
US9906872B2 (en) * | 2013-06-21 | 2018-02-27 | The Trustees Of Dartmouth College | Hearing-aid noise reduction circuitry with neural feedback to improve speech comprehension |
CN104330444A (en) * | 2013-07-22 | 2015-02-04 | Multidimensional Smart IT Convergence System Research Foundation | Near-field-communication or RFID based electrochemical biosensor and method for ingredient measurement using the same |
US9713728B2 (en) | 2013-10-29 | 2017-07-25 | Physio-Control, Inc. | Variable sound system for medical devices |
EP2928211A1 (en) * | 2014-04-04 | 2015-10-07 | Oticon A/s | Self-calibration of multi-microphone noise reduction system for hearing assistance devices using an auxiliary device |
IL233295B (en) | 2014-06-22 | 2019-11-28 | Ilan Paz | A controlled pill-dispensing system |
CA2956795C (en) | 2014-08-03 | 2020-06-30 | PogoTec, Inc. | Wearable camera systems and apparatus and method for attaching camera systems or other electronic devices to wearable articles |
US20160033308A1 (en) * | 2014-08-04 | 2016-02-04 | Infineon Technologies Ag | Intelligent gauge devices and related systems and methods |
US10299050B2 (en) * | 2014-08-27 | 2019-05-21 | Auditory Labs, Llc | Mobile audio receiver |
US9780837B2 (en) * | 2014-08-29 | 2017-10-03 | Freelinc Technologies | Spatially enabled secure communications |
US9485591B2 (en) * | 2014-12-10 | 2016-11-01 | Starkey Laboratories, Inc. | Managing a hearing assistance device via low energy digital communications |
US9503437B2 (en) * | 2014-12-12 | 2016-11-22 | Gn Resound A/S | Apparatus for secure hearing device communication and related method |
SG11201705196QA (en) | 2014-12-23 | 2017-07-28 | Pogotec Inc | Wireless camera system and methods |
US10164685B2 (en) | 2014-12-31 | 2018-12-25 | Freelinc Technologies Inc. | Spatially aware wireless network |
US9883300B2 (en) * | 2015-02-23 | 2018-01-30 | Oticon A/S | Method and apparatus for controlling a hearing instrument to relieve tinitus, hyperacusis, and hearing loss |
US20160278647A1 (en) * | 2015-03-26 | 2016-09-29 | Intel Corporation | Misalignment detection of a wearable device |
US10292607B2 (en) | 2015-03-26 | 2019-05-21 | Intel Corporation | Sensor data transmissions |
IL238387B (en) | 2015-04-20 | 2019-01-31 | Paz Ilan | Medication dispenser depilling mechanism |
US11426592B2 (en) * | 2015-05-14 | 2022-08-30 | Cochlear Limited | Functionality migration |
JP6621848B2 (en) | 2015-06-05 | 2019-12-18 | Apple Inc. | Changing the behavior of a companion communication device based on the state of the wearable device |
MX2017015898A (en) | 2015-06-10 | 2018-05-07 | Pogotec Inc | Eyewear with magnetic track for electronic wearable device. |
CN105185371B (en) * | 2015-06-25 | 2017-07-11 | BOE Technology Group Co., Ltd. | Speech synthesis device, speech synthesis method, bone conduction helmet, and hearing aid |
US10536782B2 (en) * | 2015-07-02 | 2020-01-14 | Carl L. C. Kah, Jr. | External ear insert for hearing enhancement |
US9686392B2 (en) | 2015-07-03 | 2017-06-20 | teleCalm, Inc. | Telephone system for impaired individuals |
US9704497B2 (en) * | 2015-07-06 | 2017-07-11 | Apple Inc. | Method and system of audio power reduction and thermal mitigation using psychoacoustic techniques |
US9843853B2 (en) | 2015-08-29 | 2017-12-12 | Bragi GmbH | Power control for battery powered personal area network device system and method |
US10003881B2 (en) | 2015-09-30 | 2018-06-19 | Apple Inc. | Earbuds with capacitive touch sensor |
CN108289793A (en) | 2015-10-15 | 2018-07-17 | 东森塔克斯公司 | Image recognition based dosage form dispenser |
US10104458B2 (en) | 2015-10-20 | 2018-10-16 | Bragi GmbH | Enhanced biometric control systems for detection of emergency events system and method |
DE102015221187A1 (en) * | 2015-10-29 | 2017-05-04 | Sivantos Pte. Ltd. | Hearing aid system with sensor for collecting biological data |
TW201729610A (en) * | 2015-10-29 | 2017-08-16 | 帕戈技術股份有限公司 | Hearing aid adapted for wireless power reception |
WO2017077529A1 (en) | 2015-11-02 | 2017-05-11 | P.C.O.A. | Lockable advanceable oral dosage form dispenser containers |
US10321247B2 (en) * | 2015-11-27 | 2019-06-11 | Cochlear Limited | External component with inductance and mechanical vibratory functionality |
US9749766B2 (en) * | 2015-12-27 | 2017-08-29 | Philip Scott Lyren | Switching binaural sound |
CN108781338B (en) * | 2016-03-11 | 2021-09-17 | 索诺瓦公司 | Hearing assistance device and method with automatic safety control |
US10085082B2 (en) | 2016-03-11 | 2018-09-25 | Bragi GmbH | Earpiece with GPS receiver |
US11558538B2 (en) | 2016-03-18 | 2023-01-17 | Opkix, Inc. | Portable camera system |
US10052065B2 (en) * | 2016-03-23 | 2018-08-21 | Bragi GmbH | Earpiece life monitor with capability of automatic notification system and method |
EP3035710A3 (en) * | 2016-03-30 | 2016-11-02 | Oticon A/s | Monitoring system for a hearing device |
US10157037B2 (en) * | 2016-03-31 | 2018-12-18 | Bose Corporation | Performing an operation at a headphone system |
US9924255B2 (en) | 2016-03-31 | 2018-03-20 | Bose Corporation | On/off head detection using magnetic field sensing |
US10337783B2 (en) * | 2016-04-12 | 2019-07-02 | Abigail Weaver | Carry bag with insulated medicine compartment and related methods |
US10455306B2 (en) | 2016-04-19 | 2019-10-22 | Snik Llc | Magnetic earphones holder |
US10225640B2 (en) * | 2016-04-19 | 2019-03-05 | Snik Llc | Device and system for and method of transmitting audio to a user |
US11272281B2 (en) | 2016-04-19 | 2022-03-08 | Snik Llc | Magnetic earphones holder |
US10631074B2 (en) | 2016-04-19 | 2020-04-21 | Snik Llc | Magnetic earphones holder |
US10951968B2 (en) | 2016-04-19 | 2021-03-16 | Snik Llc | Magnetic earphones holder |
EP3291580A1 (en) * | 2016-08-29 | 2018-03-07 | Oticon A/s | Hearing aid device with speech control functionality |
US10349259B2 (en) * | 2016-09-23 | 2019-07-09 | Apple Inc. | Broadcasting a device state in a wireless communication network |
US10049184B2 (en) * | 2016-10-07 | 2018-08-14 | Bragi GmbH | Software application transmission via body interface using a wearable device in conjunction with removable body sensor arrays system and method |
US10062373B2 (en) | 2016-11-03 | 2018-08-28 | Bragi GmbH | Selective audio isolation from body generated sound system and method |
US10058282B2 (en) | 2016-11-04 | 2018-08-28 | Bragi GmbH | Manual operation assistance with earpiece with 3D sound cues |
US10063957B2 (en) | 2016-11-04 | 2018-08-28 | Bragi GmbH | Earpiece with source selection within ambient environment |
US10665243B1 (en) * | 2016-11-11 | 2020-05-26 | Facebook Technologies, Llc | Subvocalized speech recognition |
WO2018129281A1 (en) * | 2017-01-05 | 2018-07-12 | Ohio State Innovation Foundation | Systems and methods for wirelessly charging a hearing device |
WO2018147942A1 (en) | 2017-02-13 | 2018-08-16 | Starkey Laboratories, Inc. | Fall prediction system and method of using same |
US20180235540A1 (en) | 2017-02-21 | 2018-08-23 | Bose Corporation | Collecting biologically-relevant information using an earpiece |
EP3313092A1 (en) * | 2017-03-17 | 2018-04-25 | Oticon A/s | A hearing system for monitoring a health related parameter |
US10918325B2 (en) * | 2017-03-23 | 2021-02-16 | Fuji Xerox Co., Ltd. | Brain wave measuring device and brain wave measuring system |
WO2018183892A1 (en) | 2017-03-30 | 2018-10-04 | Energous Corporation | Flat antennas having two or more resonant frequencies for use in wireless power transmission systems |
US11462949B2 (en) | 2017-05-16 | 2022-10-04 | Wireless electrical Grid LAN, WiGL Inc | Wireless charging method and system |
US12074460B2 (en) | 2017-05-16 | 2024-08-27 | Wireless Electrical Grid Lan, Wigl Inc. | Rechargeable wireless power bank and method of using |
US12074452B2 (en) | 2017-05-16 | 2024-08-27 | Wireless Electrical Grid Lan, Wigl Inc. | Networked wireless charging system |
US10213157B2 (en) * | 2017-06-09 | 2019-02-26 | Bose Corporation | Active unipolar dry electrode open ear wireless headset and brain computer interface |
DE102017209816B3 (en) | 2017-06-09 | 2018-07-26 | Sivantos Pte. Ltd. | A method for characterizing a listener in a hearing aid, hearing aid and test device for a hearing aid |
US10848853B2 (en) * | 2017-06-23 | 2020-11-24 | Energous Corporation | Systems, methods, and devices for utilizing a wire of a sound-producing device as an antenna for receipt of wirelessly delivered power |
US10764668B2 (en) | 2017-09-07 | 2020-09-01 | Lightspeed Aviation, Inc. | Sensor mount and circumaural headset or headphones with adjustable sensor |
US10701470B2 (en) * | 2017-09-07 | 2020-06-30 | Light Speed Aviation, Inc. | Circumaural headset or headphones with adjustable biometric sensor |
US10344960B2 (en) | 2017-09-19 | 2019-07-09 | Bragi GmbH | Wireless earpiece controlled medical headlight |
US11272367B2 (en) | 2017-09-20 | 2022-03-08 | Bragi GmbH | Wireless earpieces for hub communications |
WO2019082060A1 (en) * | 2017-10-23 | 2019-05-02 | Cochlear Limited | Advanced assistance for prosthesis assisted communication |
US11342798B2 (en) | 2017-10-30 | 2022-05-24 | Energous Corporation | Systems and methods for managing coexistence of wireless-power signals and data signals operating in a same frequency band |
DE102018209801A1 (en) * | 2018-06-18 | 2019-12-19 | Sivantos Pte. Ltd. | Method for operating a hearing device system and hearing device system |
US10798497B2 (en) * | 2018-07-03 | 2020-10-06 | Tom Yu-Chi CHANG | Hearing aid device and a system for controlling a hearing aid device |
US11265643B2 (en) * | 2018-09-17 | 2022-03-01 | Starkey Laboratories, Inc. | Hearing device including a sensor and hearing system including same |
US11300857B2 (en) | 2018-11-13 | 2022-04-12 | Opkix, Inc. | Wearable mounts for portable camera |
US20210267464A1 (en) * | 2018-11-15 | 2021-09-02 | Kyocera Corporation | Biosensor |
KR102188913B1 (en) * | 2018-11-26 | 2020-12-09 | Osong Medical Innovation Foundation | Core body temperature thermometer having an air pocket |
US11918386B2 (en) | 2018-12-26 | 2024-03-05 | Flashback Technologies, Inc. | Device-based maneuver and activity state-based physiologic status monitoring |
US11184052B2 (en) * | 2018-12-28 | 2021-11-23 | Samsung Electronics Co., Ltd. | Apparatus and method with near-field communication |
US20220157452A1 (en) | 2019-01-07 | 2022-05-19 | Cosinuss Gmbh | Method for Providing Data for an Interface |
US11539243B2 (en) | 2019-01-28 | 2022-12-27 | Energous Corporation | Systems and methods for miniaturized antenna for wireless power transmissions |
WO2020163574A1 (en) | 2019-02-06 | 2020-08-13 | Energous Corporation | Systems and methods of estimating optimal phases to use for individual antennas in an antenna array |
USD869445S1 (en) * | 2019-05-22 | 2019-12-10 | Shenzhen Qianhai Patuoxun Network And Technology Co., Ltd | Earphone |
US11304016B2 (en) * | 2019-06-04 | 2022-04-12 | Concha Inc. | Method for configuring a hearing-assistance device with a hearing profile |
WO2020264203A1 (en) * | 2019-06-28 | 2020-12-30 | Starkey Laboratories, Inc. | Direct informative communication through an ear-wearable device |
EP3761452A1 (en) | 2019-07-03 | 2021-01-06 | Gebauer & Griller Kabelwerke Gesellschaft m.b.H. | Electrical connection between an electric conductor and a contact element |
USD878337S1 (en) * | 2019-07-08 | 2020-03-17 | Shenzhen Ginto E-commerce Co., Limited | Earphone |
US11381118B2 (en) | 2019-09-20 | 2022-07-05 | Energous Corporation | Systems and methods for machine learning based foreign object detection for wireless power transmission |
WO2021055898A1 (en) | 2019-09-20 | 2021-03-25 | Energous Corporation | Systems and methods for machine learning based foreign object detection for wireless power transmission |
WO2021055900A1 (en) | 2019-09-20 | 2021-03-25 | Energous Corporation | Classifying and detecting foreign objects using a power amplifier controller integrated circuit in wireless power transmission systems |
US11411441B2 (en) | 2019-09-20 | 2022-08-09 | Energous Corporation | Systems and methods of protecting wireless power receivers using multiple rectifiers and establishing in-band communications using multiple rectifiers |
DE102019217398A1 (en) * | 2019-11-11 | 2021-05-12 | Sivantos Pte. Ltd. | Method for operating a hearing aid and hearing aid |
EP4059233A1 (en) * | 2019-11-14 | 2022-09-21 | Starkey Laboratories, Inc. | Ear-worn electronic device configured to compensate for hunched or stooped posture |
USD890724S1 (en) * | 2019-11-22 | 2020-07-21 | Stb International Limited | Earphone |
EP3833050A1 (en) * | 2019-12-06 | 2021-06-09 | GN Hearing A/S | Method for charging a battery of a hearing device |
EP4073905A4 (en) | 2019-12-13 | 2024-01-03 | Energous Corporation | Charging pad with guiding contours to align an electronic device on the charging pad and efficiently transfer near-field radio-frequency energy to the electronic device |
USD883260S1 (en) * | 2019-12-25 | 2020-05-05 | Shenzhen Qianhai Patuoxun Network And Technology Co., Ltd | Earphone |
US10985617B1 (en) | 2019-12-31 | 2021-04-20 | Energous Corporation | System for wirelessly transmitting energy at a near-field distance without using beam-forming control |
US12058495B2 (en) | 2020-01-27 | 2024-08-06 | Starkey Laboratories, Inc. | Using a camera for hearing device algorithm training |
USD934839S1 (en) * | 2020-03-05 | 2021-11-02 | Shenzhen Yamay Digital Electronics Co. Ltd | Combined wireless earbuds and charging case |
USD893462S1 (en) * | 2020-03-05 | 2020-08-18 | Shenzhen Humboldt Technology Co., Ltd | Headphone |
US11799324B2 (en) | 2020-04-13 | 2023-10-24 | Energous Corporation | Wireless-power transmitting device for creating a uniform near-field charging area |
USD890138S1 (en) * | 2020-04-30 | 2020-07-14 | Shenzhen Qianhai Patuoxun Network And Technology Co., Ltd | Earphones |
USD901457S1 (en) * | 2020-06-03 | 2020-11-10 | Shenzhen Wireless Cloud Image Electronics Co., Ltd. | Wireless headset |
US11758338B2 (en) | 2020-06-05 | 2023-09-12 | Starkey Laboratories, Inc. | Authentication and encryption key exchange for assistive listening devices |
USD956719S1 (en) * | 2020-09-30 | 2022-07-05 | Shenzhen Zio Communication Technology Co., Ltd. | Earphone |
US11394755B1 (en) * | 2021-06-07 | 2022-07-19 | International Business Machines Corporation | Guided hardware input prompts |
DE102021211259A1 (en) * | 2021-10-06 | 2023-04-06 | Sivantos Pte. Ltd. | Method of operating a hearing aid system |
US11916398B2 (en) | 2021-12-29 | 2024-02-27 | Energous Corporation | Small form-factor devices with integrated and modular harvesting receivers, and shelving-mounted wireless-power transmitters for use therewith |
US20230248321A1 (en) * | 2022-02-10 | 2023-08-10 | Gn Hearing A/S | Hearing system with cardiac arrest detection |
US20230410058A1 (en) * | 2022-06-21 | 2023-12-21 | Avaya Management L.P. | Virtual meeting participation |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5721783A (en) | 1995-06-07 | 1998-02-24 | Anderson; James C. | Hearing aid with wireless remote processor |
US6786860B2 (en) | 2001-10-03 | 2004-09-07 | Advanced Bionics Corporation | Hearing aid design |
US6839446B2 (en) | 2002-05-28 | 2005-01-04 | Trevor I. Blumenau | Hearing aid with sound replay capability |
DE10228157B3 (en) | 2002-06-24 | 2004-01-08 | Siemens Audiologische Technik Gmbh | Hearing aid system with a hearing aid and an external processor unit |
US7012520B2 (en) * | 2003-06-17 | 2006-03-14 | Infraegis, Inc. | Global intelligent remote detection system |
US20060214789A1 (en) * | 2005-03-24 | 2006-09-28 | Joshua Posamentier | Tamper detection with RFID tag |
US8094848B1 (en) | 2006-04-24 | 2012-01-10 | At&T Mobility Ii Llc | Automatically configuring hearing assistive device |
JP2008097585A (en) * | 2006-09-11 | 2008-04-24 | Seiko Epson Corp | Contactless data communication system and contactless ic tag |
KR100826877B1 (en) * | 2006-09-28 | 2008-05-06 | 한국전자통신연구원 | RFID tag with LED and RF identification managing method using the same |
DE102006057644A1 (en) * | 2006-12-05 | 2008-06-12 | Deutsche Post Ag | Container for shipping objects and method for producing the containers |
US8157730B2 (en) | 2006-12-19 | 2012-04-17 | Valencell, Inc. | Physiological and environmental monitoring systems and methods |
DE102007008738A1 (en) | 2007-02-22 | 2008-08-28 | Siemens Audiologische Technik Gmbh | Method for improving spatial perception and corresponding hearing device |
KR20080084548A (en) * | 2007-03-14 | 2008-09-19 | 한국전자통신연구원 | Apparatus and method for transmitting sensor status of rfid tag |
JP5300205B2 (en) * | 2007-03-22 | 2013-09-25 | Canon Inc. | Target substance detection element, target substance detection method, and method for manufacturing target substance detection element |
US20090076804A1 (en) | 2007-09-13 | 2009-03-19 | Bionica Corporation | Assistive listening system with memory buffer for instant replay and speech to text conversion |
WO2009095937A1 (en) * | 2008-01-28 | 2009-08-06 | Paolo Stefanelli | Container for fluid products, in particular perfumes, deodorants, creams and similar |
EP2247986B1 (en) * | 2008-01-30 | 2014-12-31 | Neology, Inc. | Rfid authentication architecture and methods for rfid authentication |
US7929722B2 (en) | 2008-08-13 | 2011-04-19 | Intelligent Systems Incorporated | Hearing assistance using an external coprocessor |
US20100045425A1 (en) * | 2008-08-21 | 2010-02-25 | Chivallier M Laurent | data transmission of sensors |
US8477029B2 (en) * | 2008-10-23 | 2013-07-02 | Whirlpool Corporation | Modular attribute sensing device |
US20100101317A1 (en) * | 2008-10-23 | 2010-04-29 | Whirlpool Corporation | Lid based amount sensor |
AU2010344006B2 (en) | 2010-02-01 | 2013-12-19 | T&W Engineering A/S | Portable EEG monitor system with wireless communication |
US8827171B2 (en) * | 2011-04-20 | 2014-09-09 | Honda Motor Co., Ltd. | Vehicular automatic temperature regulation system |
US9323893B2 (en) | 2011-06-23 | 2016-04-26 | Orca Health, Inc. | Using mobile consumer devices to communicate with consumer medical devices |
US9185501B2 (en) | 2012-06-20 | 2015-11-10 | Broadcom Corporation | Container-located information transfer module |
- 2012-08-24 US US13/594,489 patent/US9185501B2/en not_active Expired - Fee Related
- 2012-09-20 US US13/623,435 patent/US20130343584A1/en not_active Abandoned
- 2012-09-20 US US13/623,545 patent/US20130343585A1/en not_active Abandoned
- 2015-10-09 US US14/879,765 patent/US9730005B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080267416A1 (en) * | 2007-02-22 | 2008-10-30 | Personics Holdings Inc. | Method and Device for Sound Detection and Audio Control |
US20100296668A1 (en) * | 2009-04-23 | 2010-11-25 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for automatic control of active noise cancellation |
US20130043735A1 (en) * | 2011-08-16 | 2013-02-21 | Qualcomm Incorporated | Systems, methods, and devices for multi-level signaling via a wireless power transfer field |
Cited By (301)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9949039B2 (en) | 2005-05-03 | 2018-04-17 | Earlens Corporation | Hearing system having improved high frequency response |
US11979836B2 (en) | 2007-04-03 | 2024-05-07 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US11671920B2 (en) | 2007-04-03 | 2023-06-06 | Apple Inc. | Method and system for operating a multifunction portable electronic device using voice-activation |
US10863286B2 (en) | 2007-10-12 | 2020-12-08 | Earlens Corporation | Multifunction system and method for integrated hearing and communication with noise cancellation and feedback management |
US10516950B2 (en) | 2007-10-12 | 2019-12-24 | Earlens Corporation | Multifunction system and method for integrated hearing and communication with noise cancellation and feedback management |
US10154352B2 (en) | 2007-10-12 | 2018-12-11 | Earlens Corporation | Multifunction system and method for integrated hearing and communication with noise cancellation and feedback management |
US11483665B2 (en) | 2007-10-12 | 2022-10-25 | Earlens Corporation | Multifunction system and method for integrated hearing and communication with noise cancellation and feedback management |
US11310605B2 (en) | 2008-06-17 | 2022-04-19 | Earlens Corporation | Optical electro-mechanical hearing devices with separate power and signal components |
US9961454B2 (en) | 2008-06-17 | 2018-05-01 | Earlens Corporation | Optical electro-mechanical hearing devices with separate power and signal components |
US10516949B2 (en) | 2008-06-17 | 2019-12-24 | Earlens Corporation | Optical electro-mechanical hearing devices with separate power and signal components |
US10511913B2 (en) | 2008-09-22 | 2019-12-17 | Earlens Corporation | Devices and methods for hearing |
US9949035B2 (en) | 2008-09-22 | 2018-04-17 | Earlens Corporation | Transducer devices and methods for hearing |
US10516946B2 (en) | 2008-09-22 | 2019-12-24 | Earlens Corporation | Devices and methods for hearing |
US10743110B2 (en) | 2008-09-22 | 2020-08-11 | Earlens Corporation | Devices and methods for hearing |
US10237663B2 (en) | 2008-09-22 | 2019-03-19 | Earlens Corporation | Devices and methods for hearing |
US11057714B2 (en) | 2008-09-22 | 2021-07-06 | Earlens Corporation | Devices and methods for hearing |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11900936B2 (en) | 2008-10-02 | 2024-02-13 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US12087308B2 (en) | 2010-01-18 | 2024-09-10 | Apple Inc. | Intelligent automated assistant |
US10284964B2 (en) | 2010-12-20 | 2019-05-07 | Earlens Corporation | Anatomically customized ear canal hearing apparatus |
US11153697B2 (en) | 2010-12-20 | 2021-10-19 | Earlens Corporation | Anatomically customized ear canal hearing apparatus |
US10609492B2 (en) | 2010-12-20 | 2020-03-31 | Earlens Corporation | Anatomically customized ear canal hearing apparatus |
US11743663B2 (en) | 2010-12-20 | 2023-08-29 | Earlens Corporation | Anatomically customized ear canal hearing apparatus |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US11321116B2 (en) | 2012-05-15 | 2022-05-03 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US9185501B2 (en) | 2012-06-20 | 2015-11-10 | Broadcom Corporation | Container-located information transfer module |
US9730005B2 (en) | 2012-06-20 | 2017-08-08 | Nxp Usa, Inc. | Container-located information transfer module |
US9191755B2 (en) * | 2012-12-14 | 2015-11-17 | Starkey Laboratories, Inc. | Spatial enhancement mode for hearing aids |
US9516431B2 (en) | 2012-12-14 | 2016-12-06 | Starkey Laboratories, Inc. | Spatial enhancement mode for hearing aids |
US11636869B2 (en) | 2013-02-07 | 2023-04-25 | Apple Inc. | Voice trigger for a digital assistant |
US12009007B2 (en) | 2013-02-07 | 2024-06-11 | Apple Inc. | Voice trigger for a digital assistant |
US10978090B2 (en) | 2013-02-07 | 2021-04-13 | Apple Inc. | Voice trigger for a digital assistant |
US11557310B2 (en) | 2013-02-07 | 2023-01-17 | Apple Inc. | Voice trigger for a digital assistant |
US11862186B2 (en) | 2013-02-07 | 2024-01-02 | Apple Inc. | Voice trigger for a digital assistant |
US11388291B2 (en) | 2013-03-14 | 2022-07-12 | Apple Inc. | System and method for processing voicemail |
US11798547B2 (en) | 2013-03-15 | 2023-10-24 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US20140270287A1 (en) * | 2013-03-15 | 2014-09-18 | Qualcomm Incorporated | Bluetooth hearing aids enabled during voice activity on a mobile phone |
US20140273824A1 (en) * | 2013-03-15 | 2014-09-18 | Medtronic, Inc. | Systems, apparatus and methods facilitating secure pairing of an implantable device with a remote device using near field communication |
US10841104B2 (en) | 2013-03-15 | 2020-11-17 | Poltorak Technologies Llc | System and method for secure relayed communications from an implantable medical device |
US10305695B1 (en) | 2013-03-15 | 2019-05-28 | Poltorak Technologies Llc | System and method for secure relayed communications from an implantable medical device |
US11588650B2 (en) | 2013-03-15 | 2023-02-21 | Poltorak Technologies Llc | System and method for secure relayed communications from an implantable medical device |
US9942051B1 (en) | 2013-03-15 | 2018-04-10 | Poltorak Technologies Llc | System and method for secure relayed communications from an implantable medical device |
US11930126B2 (en) | 2013-03-15 | 2024-03-12 | Piltorak Technologies LLC | System and method for secure relayed communications from an implantable medical device |
US9711166B2 (en) | 2013-05-23 | 2017-07-18 | Knowles Electronics, Llc | Decimation synchronization in a microphone |
US20140348345A1 (en) * | 2013-05-23 | 2014-11-27 | Knowles Electronics, Llc | Vad detection microphone and method of operating the same |
US9113263B2 (en) * | 2013-05-23 | 2015-08-18 | Knowles Electronics, Llc | VAD detection microphone and method of operating the same |
US10313796B2 (en) | 2013-05-23 | 2019-06-04 | Knowles Electronics, Llc | VAD detection microphone and method of operating the same |
US10020008B2 (en) * | 2013-05-23 | 2018-07-10 | Knowles Electronics, Llc | Microphone and corresponding digital interface |
US9712923B2 (en) * | 2013-05-23 | 2017-07-18 | Knowles Electronics, Llc | VAD detection microphone and method of operating the same |
US20150058001A1 (en) * | 2013-05-23 | 2015-02-26 | Knowles Electronics, Llc | Microphone and Corresponding Digital Interface |
US20150043755A1 (en) * | 2013-05-23 | 2015-02-12 | Knowles Electronics, Llc | Vad detection microphone and method of operating the same |
US11727219B2 (en) | 2013-06-09 | 2023-08-15 | Apple Inc. | System and method for inferring user intent from speech inputs |
US12073147B2 (en) | 2013-06-09 | 2024-08-27 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US12010262B2 (en) | 2013-08-06 | 2024-06-11 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US20150079900A1 (en) * | 2013-09-18 | 2015-03-19 | Plantronics, Inc. | Audio Delivery System for Headsets |
US9100775B2 (en) * | 2013-09-18 | 2015-08-04 | Plantronics, Inc. | Audio delivery system for headsets |
US9502028B2 (en) | 2013-10-18 | 2016-11-22 | Knowles Electronics, Llc | Acoustic activity detection apparatus and method |
US9830913B2 (en) | 2013-10-29 | 2017-11-28 | Knowles Electronics, Llc | VAD detection apparatus and method of operation the same |
US20150172832A1 (en) * | 2013-12-17 | 2015-06-18 | United Sciences, Llc | Iidentity confirmation using wearable computerized earpieces and related methods |
US20150172827A1 (en) * | 2013-12-17 | 2015-06-18 | United Sciences, Llc | Identity confirmation using wearable computerized earpieces and related methods |
US20150172828A1 (en) * | 2013-12-17 | 2015-06-18 | United Sciences, Llc | Iidentity confirmation using wearable computerized earpieces and related methods |
US20150189453A1 (en) * | 2013-12-30 | 2015-07-02 | Gn Resound A/S | Hearing device with position data and method of operating a hearing device |
JP2015144430A (en) * | 2013-12-30 | 2015-08-06 | Gn Resound A/S | Hearing device using position data, audio system and related method |
US9877116B2 (en) | 2013-12-30 | 2018-01-23 | Gn Hearing A/S | Hearing device with position data, audio system and related methods |
EP2890156A1 (en) * | 2013-12-30 | 2015-07-01 | GN Resound A/S | Hearing device with position data and method of operating a hearing device |
CN104754467A (en) * | 2013-12-30 | 2015-07-01 | Gn瑞声达A/S | Hearing device with position data and method of operating a hearing device |
US10154355B2 (en) * | 2013-12-30 | 2018-12-11 | Gn Hearing A/S | Hearing device with position data and method of operating a hearing device |
EP2908550A1 (en) * | 2014-02-13 | 2015-08-19 | Oticon A/s | A hearing aid device comprising a sensor member |
US9596551B2 (en) | 2014-02-13 | 2017-03-14 | Oticon A/S | Hearing aid device comprising a sensor member |
EP2908550B1 (en) | 2014-02-13 | 2018-07-25 | Oticon A/s | A hearing aid device comprising a sensor member |
EP4380190A3 (en) * | 2014-02-13 | 2024-08-07 | Oticon A/s | A hearing aid device comprising a sensor member |
US11128961B2 (en) | 2014-02-13 | 2021-09-21 | Oticon A/S | Hearing aid device comprising a sensor member |
US10524061B2 (en) | 2014-02-13 | 2019-12-31 | Oticon A/S | Hearing aid device comprising a sensor member |
US11533570B2 (en) | 2014-02-13 | 2022-12-20 | Oticon A/S | Hearing aid device comprising a sensor member |
US11889265B2 (en) | 2014-02-13 | 2024-01-30 | Oticon A/S | Hearing aid device comprising a sensor member |
US9826318B2 (en) | 2014-02-13 | 2017-11-21 | Oticon A/S | Hearing aid device comprising a sensor member |
EP3370435A1 (en) * | 2014-02-13 | 2018-09-05 | Oticon A/s | A hearing aid device comprising a sensor member |
US11317224B2 (en) | 2014-03-18 | 2022-04-26 | Earlens Corporation | High fidelity and reduced feedback contact hearing apparatus and methods |
US10034103B2 (en) | 2014-03-18 | 2018-07-24 | Earlens Corporation | High fidelity and reduced feedback contact hearing apparatus and methods |
US10142722B2 (en) * | 2014-05-20 | 2018-11-27 | Bugatone Ltd. | Aural measurements from earphone output speakers |
US20170094401A1 (en) * | 2014-05-20 | 2017-03-30 | Bugatone Ltd. | Aural measurements from earphone output speakers |
US10555102B2 (en) | 2014-05-20 | 2020-02-04 | Bugatone Ltd. | Aural measurements from earphone output speakers |
US11670289B2 (en) | 2014-05-30 | 2023-06-06 | Apple Inc. | Multi-command single utterance input method |
US9769858B2 (en) | 2014-05-30 | 2017-09-19 | Apple Inc. | Seamless connectivity between hearing aid and multiple devices |
US12067990B2 (en) | 2014-05-30 | 2024-08-20 | Apple Inc. | Intelligent assistant for home automation |
US11699448B2 (en) | 2014-05-30 | 2023-07-11 | Apple Inc. | Intelligent assistant for home automation |
US11810562B2 (en) | 2014-05-30 | 2023-11-07 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US20150351143A1 (en) * | 2014-05-30 | 2015-12-03 | Apple Inc. | Seamless connectivity between hearing aid and multiple devices |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US12118999B2 (en) | 2014-05-30 | 2024-10-15 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9763276B2 (en) * | 2014-05-30 | 2017-09-12 | Apple Inc. | Seamless connectivity between hearing aid and multiple devices |
US11838579B2 (en) | 2014-06-30 | 2023-12-05 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11516537B2 (en) | 2014-06-30 | 2022-11-29 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11800303B2 (en) | 2014-07-14 | 2023-10-24 | Earlens Corporation | Sliding bias and peak limiting for optical hearing devices |
US11259129B2 (en) | 2014-07-14 | 2022-02-22 | Earlens Corporation | Sliding bias and peak limiting for optical hearing devices |
US10531206B2 (en) | 2014-07-14 | 2020-01-07 | Earlens Corporation | Sliding bias and peak limiting for optical hearing devices |
US9930458B2 (en) | 2014-07-14 | 2018-03-27 | Earlens Corporation | Sliding bias and peak limiting for optical hearing devices |
US20160038738A1 (en) * | 2014-08-07 | 2016-02-11 | Oticon A/S | Hearing assistance system with improved signal processing comprising an implanted part |
US9895535B2 (en) * | 2014-08-07 | 2018-02-20 | Oticon A/S | Hearing assistance system with improved signal processing comprising an implanted part |
US11265665B2 (en) | 2014-08-22 | 2022-03-01 | K/S Himpp | Wireless hearing device interactive with medical devices |
US11265664B2 (en) | 2014-08-22 | 2022-03-01 | K/S Himpp | Wireless hearing device for tracking activity and emergency events |
US11265663B2 (en) | 2014-08-22 | 2022-03-01 | K/S Himpp | Wireless hearing device with physiologic sensors for health monitoring |
EP3213529A4 (en) * | 2014-10-30 | 2018-06-06 | Smartear Inc. | Smart flexible interactive earplug |
US10170133B2 (en) | 2014-10-31 | 2019-01-01 | At&T Intellectual Property I, L.P. | Acoustic enhancement by leveraging metadata to mitigate the impact of noisy environments |
US9779752B2 (en) | 2014-10-31 | 2017-10-03 | At&T Intellectual Property I, L.P. | Acoustic enhancement by leveraging metadata to mitigate the impact of noisy environments |
US11115519B2 (en) * | 2014-11-11 | 2021-09-07 | K/S Himpp | Subscription-based wireless service for a hearing device |
US11252516B2 (en) | 2014-11-26 | 2022-02-15 | Earlens Corporation | Adjustable venting for hearing instruments |
US10516951B2 (en) | 2014-11-26 | 2019-12-24 | Earlens Corporation | Adjustable venting for hearing instruments |
US9924276B2 (en) | 2014-11-26 | 2018-03-20 | Earlens Corporation | Adjustable venting for hearing instruments |
US9830080B2 (en) | 2015-01-21 | 2017-11-28 | Knowles Electronics, Llc | Low power voice trigger for acoustic apparatus and method |
US10121472B2 (en) | 2015-02-13 | 2018-11-06 | Knowles Electronics, Llc | Audio buffer catch-up apparatus and method with two microphones |
US11842734B2 (en) | 2015-03-08 | 2023-12-12 | Apple Inc. | Virtual assistant activation |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US9438300B1 (en) * | 2015-03-10 | 2016-09-06 | Invensense, Inc. | Sensor fusion for antenna tuning |
US10396743B2 (en) | 2015-05-01 | 2019-08-27 | Nxp B.V. | Frequency-domain dynamic range control of signals |
US12001933B2 (en) | 2015-05-15 | 2024-06-04 | Apple Inc. | Virtual assistant in a communication session |
US11070949B2 (en) | 2015-05-27 | 2021-07-20 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display |
EP3503589A1 (en) * | 2015-06-22 | 2019-06-26 | GN Hearing A/S | A hearing aid having combined antennas |
US11172315B2 (en) * | 2015-06-22 | 2021-11-09 | Gn Hearing A/S | Hearing aid having combined antennas |
US11947873B2 (en) | 2015-06-29 | 2024-04-02 | Apple Inc. | Virtual assistant for media playback |
US9711144B2 (en) | 2015-07-13 | 2017-07-18 | Knowles Electronics, Llc | Microphone apparatus and method with catch-up buffer |
US9478234B1 (en) | 2015-07-13 | 2016-10-25 | Knowles Electronics, Llc | Microphone apparatus and method with catch-up buffer |
US11917375B2 (en) * | 2015-08-24 | 2024-02-27 | Cochlear Limited | Prosthesis functionality control and data presentation |
US20200267481A1 (en) * | 2015-08-24 | 2020-08-20 | Ivana Popovac | Prosthesis functionality control and data presentation |
US10798501B2 (en) * | 2015-09-02 | 2020-10-06 | Sonion Nederland B.V. | Augmented hearing device |
US20190387333A1 (en) * | 2015-09-02 | 2019-12-19 | Sonion Nederland B.V. | Augmented Hearing Device |
US10433077B2 (en) * | 2015-09-02 | 2019-10-01 | Sonion Nederland B.V. | Augmented hearing device |
US11550542B2 (en) | 2015-09-08 | 2023-01-10 | Apple Inc. | Zero latency digital assistant |
US11954405B2 (en) | 2015-09-08 | 2024-04-09 | Apple Inc. | Zero latency digital assistant |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US11126400B2 (en) | 2015-09-08 | 2021-09-21 | Apple Inc. | Zero latency digital assistant |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US12051413B2 (en) | 2015-09-30 | 2024-07-30 | Apple Inc. | Intelligent device identification |
US10292601B2 (en) | 2015-10-02 | 2019-05-21 | Earlens Corporation | Wearable customized ear canal apparatus |
US20210186343A1 (en) * | 2015-10-02 | 2021-06-24 | Earlens Corporation | Drug delivery customized ear canal apparatus |
US20170095202A1 (en) * | 2015-10-02 | 2017-04-06 | Earlens Corporation | Drug delivery customized ear canal apparatus |
US11058305B2 (en) | 2015-10-02 | 2021-07-13 | Earlens Corporation | Wearable customized ear canal apparatus |
WO2017059240A1 (en) * | 2015-10-02 | 2017-04-06 | Earlens Corporation | Drug delivery customized ear canal apparatus |
US10021494B2 (en) | 2015-10-14 | 2018-07-10 | Sonion Nederland B.V. | Hearing device with vibration sensitive transducer |
EP3157270A1 (en) * | 2015-10-14 | 2017-04-19 | Sonion Nederland B.V. | Hearing device with vibration sensitive transducer |
US10453450B2 (en) | 2015-10-20 | 2019-10-22 | Bragi GmbH | Wearable earpiece voice command control system and method |
WO2017068004A1 (en) * | 2015-10-20 | 2017-04-27 | Bragi GmbH | Wearable earpiece voice command control system and method |
US11809886B2 (en) | 2015-11-06 | 2023-11-07 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11886805B2 (en) | 2015-11-09 | 2024-01-30 | Apple Inc. | Unconventional virtual assistant interactions |
US10993027B2 (en) | 2015-11-23 | 2021-04-27 | Goodix Technology (Hk) Company Limited | Audio system controller based on operating condition of amplifier |
US10063979B2 (en) | 2015-12-08 | 2018-08-28 | Gn Hearing A/S | Hearing aid with power management |
EP3179741A1 (en) * | 2015-12-08 | 2017-06-14 | GN ReSound A/S | Hearing aid with power management |
US11496827B2 (en) * | 2015-12-21 | 2022-11-08 | Bragi GmbH | Microphone natural speech capture voice dictation system and method |
US12088985B2 (en) | 2015-12-21 | 2024-09-10 | Bragi GmbH | Microphone natural speech capture voice dictation system and method |
US11853647B2 (en) | 2015-12-23 | 2023-12-26 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US11070927B2 (en) | 2015-12-30 | 2021-07-20 | Earlens Corporation | Damping in contact hearing systems |
US11337012B2 (en) | 2015-12-30 | 2022-05-17 | Earlens Corporation | Battery coating for rechargeable hearing systems |
US10492010B2 (en) | 2015-12-30 | 2019-11-26 | Earlens Corporation | Damping in contact hearing systems |
US11516602B2 (en) | 2015-12-30 | 2022-11-29 | Earlens Corporation | Damping in contact hearing systems |
US10306381B2 (en) | 2015-12-30 | 2019-05-28 | Earlens Corporation | Charging protocol for rechargeable hearing systems |
US10779094B2 (en) | 2015-12-30 | 2020-09-15 | Earlens Corporation | Damping in contact hearing systems |
US10178483B2 (en) | 2015-12-30 | 2019-01-08 | Earlens Corporation | Light based hearing systems, apparatus, and methods |
US11350226B2 (en) | 2015-12-30 | 2022-05-31 | Earlens Corporation | Charging protocol for rechargeable hearing systems |
US10524068B2 (en) | 2016-01-07 | 2019-12-31 | Sonova Ag | Hearing assistance device transducers and hearing assistance devices with same |
US20170215010A1 (en) * | 2016-01-25 | 2017-07-27 | Sean Lineaweaver | Device Monitoring for Program Switching |
US10244332B2 (en) * | 2016-01-25 | 2019-03-26 | Cochlear Limited | Device monitoring for program switching |
WO2017157409A1 (en) * | 2016-03-14 | 2017-09-21 | Sonova Ag | Wireless body worn personal device with loss detection functionality |
CN108781320A (en) * | 2016-03-14 | 2018-11-09 | Sonova AG | Wireless body-worn personal device with loss detection functionality |
US10455333B2 (en) | 2016-03-14 | 2019-10-22 | Sonova Ag | Wireless body worn personal device with loss detection functionality |
US9937346B2 (en) | 2016-04-26 | 2018-04-10 | Cochlear Limited | Downshifting of output in a sense prosthesis |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11657820B2 (en) | 2016-06-10 | 2023-05-23 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11749275B2 (en) | 2016-06-11 | 2023-09-05 | Apple Inc. | Application integration with a digital assistant |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US11809783B2 (en) | 2016-06-11 | 2023-11-07 | Apple Inc. | Intelligent device arbitration and control |
US11540065B2 (en) | 2016-09-09 | 2022-12-27 | Earlens Corporation | Contact hearing systems, apparatus and methods |
US11102594B2 (en) | 2016-09-09 | 2021-08-24 | Earlens Corporation | Contact hearing systems, apparatus and methods |
US10540994B2 (en) | 2016-10-13 | 2020-01-21 | International Business Machines Corporation | Personal device for hearing degradation monitoring |
US10339960B2 (en) | 2016-10-13 | 2019-07-02 | International Business Machines Corporation | Personal device for hearing degradation monitoring |
WO2018075170A1 (en) * | 2016-10-20 | 2018-04-26 | Qualcomm Incorporated | Systems and methods for in-ear control of remote devices |
US20180113673A1 (en) * | 2016-10-20 | 2018-04-26 | Qualcomm Incorporated | Systems and methods for in-ear control of remote devices |
US10678502B2 (en) * | 2016-10-20 | 2020-06-09 | Qualcomm Incorporated | Systems and methods for in-ear control of remote devices |
US11671774B2 (en) | 2016-11-15 | 2023-06-06 | Earlens Corporation | Impression procedure |
US11166114B2 (en) | 2016-11-15 | 2021-11-02 | Earlens Corporation | Impression procedure |
US11490208B2 (en) | 2016-12-09 | 2022-11-01 | The Research Foundation For The State University Of New York | Fiber microphone |
US10674290B2 (en) * | 2017-02-03 | 2020-06-02 | Widex A/S | Communication channels between a personal communication device and at least one head-worn device |
EP3358812A1 (en) * | 2017-02-03 | 2018-08-08 | Widex A/S | Communication channels between a personal communication device and at least one head-worn device |
US20180227684A1 (en) * | 2017-02-03 | 2018-08-09 | Widex A/S | Communication channels between a personal communication device and at least one head-worn device |
US11599331B2 (en) | 2017-05-11 | 2023-03-07 | Apple Inc. | Maintaining privacy of personal information |
US11467802B2 (en) | 2017-05-11 | 2022-10-11 | Apple Inc. | Maintaining privacy of personal information |
US11837237B2 (en) | 2017-05-12 | 2023-12-05 | Apple Inc. | User-specific acoustic models |
US11862151B2 (en) | 2017-05-12 | 2024-01-02 | Apple Inc. | Low-latency intelligent automated assistant |
US11580990B2 (en) | 2017-05-12 | 2023-02-14 | Apple Inc. | User-specific acoustic models |
US11380310B2 (en) | 2017-05-12 | 2022-07-05 | Apple Inc. | Low-latency intelligent automated assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11538469B2 (en) | 2017-05-12 | 2022-12-27 | Apple Inc. | Low-latency intelligent automated assistant |
US12014118B2 (en) | 2017-05-15 | 2024-06-18 | Apple Inc. | Multi-modal interfaces having selection disambiguation and text modification capability |
US12026197B2 (en) | 2017-05-16 | 2024-07-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US11675829B2 (en) | 2017-05-16 | 2023-06-13 | Apple Inc. | Intelligent automated assistant for media exploration |
US11532306B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Detecting a trigger of a digital assistant |
EP3657816A4 (en) * | 2017-07-21 | 2020-08-19 | Sony Corporation | Sound output device |
US11405712B2 (en) | 2017-07-21 | 2022-08-02 | Sony Corporation | Sound output apparatus |
US10617842B2 (en) | 2017-07-31 | 2020-04-14 | Starkey Laboratories, Inc. | Ear-worn electronic device for conducting and monitoring mental exercises |
EP3439327A1 (en) * | 2017-07-31 | 2019-02-06 | Starkey Laboratories, Inc. | Ear-worn electronic device for conducting and monitoring mental exercises |
US11517708B2 (en) | 2017-07-31 | 2022-12-06 | Starkey Laboratories, Inc. | Ear-worn electronic device for conducting and monitoring mental exercises |
US11012793B2 (en) | 2017-08-25 | 2021-05-18 | Starkey Laboratories, Inc. | Cognitive benefit measure related to hearing-assistance device use |
US11612320B2 (en) | 2017-08-25 | 2023-03-28 | Starkey Laboratories, Inc. | Cognitive benefit measure related to hearing-assistance device use |
US11812226B2 (en) | 2017-10-31 | 2023-11-07 | Starkey Laboratories, Inc. | Hearing device including a sensor and a method of forming same |
EP3477968A3 (en) * | 2017-10-31 | 2019-07-31 | Starkey Laboratories, Inc. | Hearing device including a sensor and a method of forming same |
US11463827B2 (en) | 2017-10-31 | 2022-10-04 | Starkey Laboratories, Inc. | Hearing device including a sensor and a method of forming same |
EP3493556A1 (en) * | 2017-12-01 | 2019-06-05 | Semiconductor Components Industries, LLC | All-in-one method for wireless connectivity and contactless battery charging of small wearables |
CN109996164A (en) * | 2017-12-29 | 2019-07-09 | GN Hearing A/S | Hearing instrument including parasitic battery antenna element |
US11570559B2 (en) | 2017-12-29 | 2023-01-31 | Gn Hearing A/S | Hearing instrument comprising a parasitic battery antenna element |
US11395076B2 (en) | 2018-02-28 | 2022-07-19 | Starkey Laboratories, Inc. | Health monitoring with ear-wearable devices and accessory devices |
US11716580B2 (en) | 2018-02-28 | 2023-08-01 | Starkey Laboratories, Inc. | Health monitoring with ear-wearable devices and accessory devices |
US10659859B2 (en) | 2018-02-28 | 2020-05-19 | Starkey Laboratories, Inc. | Portable case for modular hearing assistance devices |
US10939216B2 (en) | 2018-02-28 | 2021-03-02 | Starkey Laboratories, Inc. | Health monitoring with ear-wearable devices and accessory devices |
WO2019169142A1 (en) * | 2018-02-28 | 2019-09-06 | Starkey Laboratories, Inc. | Health monitoring with ear-wearable devices and accessory devices |
US11019417B2 (en) | 2018-02-28 | 2021-05-25 | Starkey Laboratories, Inc. | Modular hearing assistance device |
US10728642B2 (en) | 2018-02-28 | 2020-07-28 | Starkey Laboratories, Inc. | Portable case for modular hearing assistance devices |
EP3759942A1 (en) * | 2018-02-28 | 2021-01-06 | Starkey Laboratories, Inc. | Portable case for modular hearing assistance devices |
US11516603B2 (en) | 2018-03-07 | 2022-11-29 | Earlens Corporation | Contact hearing device and retention structure materials |
DE102018204260A1 (en) | 2018-03-20 | 2019-09-26 | Zf Friedrichshafen Ag | Evaluation device, apparatus, method and computer program product for a hearing-impaired person for the environmental perception of a sound event |
DE102018204260B4 (en) * | 2018-03-20 | 2019-11-21 | Zf Friedrichshafen Ag | Evaluation device, apparatus, method and computer program product for a hearing-impaired person for the environmental perception of a sound event |
US11710482B2 (en) | 2018-03-26 | 2023-07-25 | Apple Inc. | Natural assistant interaction |
US11212626B2 (en) | 2018-04-09 | 2021-12-28 | Earlens Corporation | Dynamic filter |
US11564044B2 (en) | 2018-04-09 | 2023-01-24 | Earlens Corporation | Dynamic filter |
US20210105566A1 (en) * | 2018-04-11 | 2021-04-08 | Gn Resound A/S | Hearing aid housing with an integrated antenna |
US11770662B2 (en) * | 2018-04-11 | 2023-09-26 | Gn Hearing A/S | Hearing aid housing with an integrated antenna |
US12028689B2 (en) | 2018-04-11 | 2024-07-02 | Gn Hearing A/S | Hearing aid housing with an integrated antenna |
US11907436B2 (en) | 2018-05-07 | 2024-02-20 | Apple Inc. | Raise to speak |
US11487364B2 (en) | 2018-05-07 | 2022-11-01 | Apple Inc. | Raise to speak |
US11169616B2 (en) | 2018-05-07 | 2021-11-09 | Apple Inc. | Raise to speak |
US11900923B2 (en) | 2018-05-07 | 2024-02-13 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11854539B2 (en) | 2018-05-07 | 2023-12-26 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11630525B2 (en) | 2018-06-01 | 2023-04-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US10984798B2 (en) | 2018-06-01 | 2021-04-20 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US12080287B2 (en) | 2018-06-01 | 2024-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US12067985B2 (en) | 2018-06-01 | 2024-08-20 | Apple Inc. | Virtual assistant operations in multi-device environments |
US12061752B2 (en) | 2018-06-01 | 2024-08-13 | Apple Inc. | Attention aware virtual assistant dismissal |
US11431642B2 (en) | 2018-06-01 | 2022-08-30 | Apple Inc. | Variable latency device coordination |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US11360577B2 (en) | 2018-06-01 | 2022-06-14 | Apple Inc. | Attention aware virtual assistant dismissal |
US20190387330A1 (en) * | 2018-06-18 | 2019-12-19 | Sivantos Pte. Ltd. | Method for controlling the transmission of data between at least one hearing aid and a peripheral device of a hearing aid system and also a hearing aid |
CN112956214A (en) * | 2018-08-23 | 2021-06-11 | Audus B.V. | Method, system and hearing device for enhancing an ambient audio signal of the hearing device |
WO2020040638A1 (en) | 2018-08-23 | 2020-02-27 | Audus B.V. | Method, system, and hearing device for enhancing an environmental audio signal of such a hearing device |
US11893992B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Multi-modal inputs for voice commands |
US12125504B2 (en) | 2018-11-02 | 2024-10-22 | BriefCam Ltd. | Method and system for automatic pre-recordation video redaction of objects |
US11984141B2 (en) | 2018-11-02 | 2024-05-14 | BriefCam Ltd. | Method and system for automatic pre-recordation video redaction of objects |
US11527265B2 (en) * | 2018-11-02 | 2022-12-13 | BriefCam Ltd. | Method and system for automatic object-aware video or audio redaction |
US11259130B2 (en) * | 2018-12-14 | 2022-02-22 | Widex A/S | Hearing assistive system with sensors |
US11277697B2 (en) * | 2018-12-15 | 2022-03-15 | Starkey Laboratories, Inc. | Hearing assistance system with enhanced fall detection features |
WO2020124022A3 (en) * | 2018-12-15 | 2020-07-23 | Starkey Laboratories, Inc. | Hearing assistance system with enhanced fall detection features |
US20220248153A1 (en) * | 2018-12-15 | 2022-08-04 | Starkey Laboratories, Inc. | Hearing assistance system with enhanced fall detection features |
US11330380B2 (en) | 2018-12-21 | 2022-05-10 | Starkey Laboratories, Inc. | Modularization of components of an ear-wearable device |
US10911878B2 (en) | 2018-12-21 | 2021-02-02 | Starkey Laboratories, Inc. | Modularization of components of an ear-wearable device |
US11638563B2 (en) | 2018-12-27 | 2023-05-02 | Starkey Laboratories, Inc. | Predictive fall event management system and method of using same |
US11893997B2 (en) * | 2019-01-05 | 2024-02-06 | Starkey Laboratories, Inc. | Audio signal processing for automatic transcription using ear-wearable device |
US11869505B2 (en) | 2019-01-05 | 2024-01-09 | Starkey Laboratories, Inc. | Local artificial intelligence assistant system with ear-wearable device |
US20220148599A1 (en) * | 2019-01-05 | 2022-05-12 | Starkey Laboratories, Inc. | Audio signal processing for automatic transcription using ear-wearable device |
US11607170B2 (en) | 2019-02-01 | 2023-03-21 | Starkey Laboratories, Inc. | Detection of physical abuse or neglect using data from ear-wearable devices |
WO2020160288A1 (en) * | 2019-02-01 | 2020-08-06 | Starkey Laboratories, Inc. | Efficient wellness measurement in ear-wearable devices |
US11317863B2 (en) | 2019-02-01 | 2022-05-03 | Starkey Laboratories, Inc. | Efficient wellness measurement in ear-wearable devices |
US11992340B2 (en) | 2019-02-01 | 2024-05-28 | Starkey Laboratories, Inc. | Efficient wellness measurement in ear-wearable devices |
US11937943B2 (en) | 2019-02-01 | 2024-03-26 | Starkey Laboratories, Inc. | Detection of physical abuse or neglect using data from ear-wearable devices |
WO2020172580A1 (en) * | 2019-02-22 | 2020-08-27 | Starkey Laboratories, Inc. | Sharing of health-related data based on data exported by ear-wearable device |
US11783815B2 (en) | 2019-03-18 | 2023-10-10 | Apple Inc. | Multimodality in digital assistant systems |
US11011182B2 (en) * | 2019-03-25 | 2021-05-18 | Nxp B.V. | Audio processing system for speech enhancement |
EP3684079A1 (en) * | 2019-03-29 | 2020-07-22 | Sonova AG | Hearing device for orientation estimation and method of its operation |
US11213688B2 (en) | 2019-03-30 | 2022-01-04 | Advanced Bionics Ag | Utilization of a non-wearable coil to remotely power a cochlear implant from a distance |
US11705130B2 (en) | 2019-05-06 | 2023-07-18 | Apple Inc. | Spoken notifications |
US11675491B2 (en) | 2019-05-06 | 2023-06-13 | Apple Inc. | User configurable task triggers |
US11217251B2 (en) | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
WO2020232121A1 (en) * | 2019-05-13 | 2020-11-19 | Starkey Laboratories, Inc. | Ear-worn devices for communication with medical devices |
US11888791B2 (en) | 2019-05-21 | 2024-01-30 | Apple Inc. | Providing message response suggestions |
US11786694B2 (en) | 2019-05-24 | 2023-10-17 | NeuroLight, Inc. | Device, method, and app for facilitating sleep |
US11657813B2 (en) | 2019-05-31 | 2023-05-23 | Apple Inc. | Voice identification in digital assistant systems |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11076243B2 (en) * | 2019-06-20 | 2021-07-27 | Samsung Electro-Mechanics Co., Ltd. | Terminal with hearing aid setting, and setting method for hearing aid |
US12095940B2 (en) | 2019-07-19 | 2024-09-17 | Starkey Laboratories, Inc. | Hearing devices using proxy devices for emergency communication |
US10832535B1 (en) * | 2019-09-26 | 2020-11-10 | Bose Corporation | Sleepbuds for parents |
US11902745B2 (en) | 2019-10-09 | 2024-02-13 | Jacoti Bv | System of processing devices to perform an algorithm |
WO2021069715A1 (en) * | 2019-10-09 | 2021-04-15 | Jacoti Bv | System of processing devices to perform an algorithm |
CN114503605A (en) * | 2019-10-11 | 2022-05-13 | GN Hearing A/S | Hearing device with a magnetic induction coil |
WO2021069434A1 (en) * | 2019-10-11 | 2021-04-15 | Gn Hearing A/S | A hearing device having a magnetic induction coil |
EP3806493A1 (en) * | 2019-10-11 | 2021-04-14 | GN Hearing A/S | A hearing device having a magnetic induction coil |
US12035109B2 (en) | 2019-10-11 | 2024-07-09 | Gn Hearing A/S | Hearing device having a magnetic induction coil |
US20230247373A1 (en) * | 2019-10-14 | 2023-08-03 | Starkey Laboratories, Inc. | Hearing assistance system with automatic hearing loop memory |
US12126962B2 (en) * | 2019-10-14 | 2024-10-22 | Starkey Laboratories, Inc. | Hearing assistance system with automatic hearing loop memory |
WO2021127228A1 (en) * | 2019-12-17 | 2021-06-24 | Starkey Laboratories, Inc. | Hearing assistance systems and methods for monitoring emotional state |
EP3890344A1 (en) * | 2020-03-30 | 2021-10-06 | Sonova AG | Hearing devices and methods for implementing automatic sensor-based on/off control of a hearing device |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US11924254B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Digital assistant hardware abstraction |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11750962B2 (en) | 2020-07-21 | 2023-09-05 | Apple Inc. | User identification using headphones |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
US11653158B2 (en) | 2020-07-27 | 2023-05-16 | Gn Hearing A/S | Head-wearable hearing instrument with improved co-existence of multiple communication interfaces |
US11917374B2 (en) | 2020-07-27 | 2024-02-27 | Gn Hearing A/S | Head-wearable hearing instrument with improved co-existence of multiple communication interfaces |
CN112218221A (en) * | 2020-10-21 | 2021-01-12 | Goertek Intelligent Technology Co., Ltd. | Hearing aid adapter and control method |
US12136419B2 (en) | 2023-08-31 | 2024-11-05 | Apple Inc. | Multimodality in digital assistant systems |
Also Published As
Publication number | Publication date |
---|---|
US20130344806A1 (en) | 2013-12-26 |
US20130343585A1 (en) | 2013-12-26 |
US9185501B2 (en) | 2015-11-10 |
US20160037288A1 (en) | 2016-02-04 |
US9730005B2 (en) | 2017-08-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20130343584A1 (en) | Hearing assist device with external operational support | |
US11671773B2 (en) | Hearing aid device for hands free communication | |
US11348580B2 (en) | Hearing aid device with speech control functionality | |
US11825272B2 (en) | Assistive listening device systems, devices and methods for providing audio streams within sound fields | |
US9510112B2 (en) | External microphone array and hearing aid using it | |
US20240292164A1 (en) | Hearing aid system for estimating acoustic transfer functions | |
US11638106B2 (en) | Hearing system comprising a hearing aid and a processing device | |
US11893997B2 (en) | Audio signal processing for automatic transcription using ear-wearable device | |
US11356783B2 (en) | Hearing device comprising an own voice processor | |
US20220103952A1 (en) | Hearing aid comprising a record and replay function | |
US20230292064A1 (en) | Audio processing using ear-wearable device and wearable vision device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BENNETT, JAMES D.;WALLEY, JOHN;SIGNING DATES FROM 20120926 TO 20121126;REEL/FRAME:029363/0672 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001 Effective date: 20160201 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001 Effective date: 20170120 |
|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001 Effective date: 20170119 |