US20150078575A1 - Audio apparatus and methods - Google Patents

Audio apparatus and methods

Info

Publication number
US20150078575A1
US20150078575A1 (application US 14/463,559)
Authority
US
United States
Prior art keywords: computing device, output, audio, mobile computing, audio signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US14/463,559
Other versions
US9344793B2
Inventor
Aaron Alexander Selig
Varun Srinivasan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mimi Hearing Technologies GmbH
Original Assignee
SYMPHONIC AUDIO TECHNOLOGIES CORP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 14/178,034 (US 2014/0309549 A1)
Priority claimed from US 14/178,068 (US 9,203,699 B2)
Priority claimed from US 14/206,499 (US 9,319,019 B2)
Priority to US 14/463,559 (US 9,344,793 B2)
Application filed by SYMPHONIC AUDIO TECHNOLOGIES CORP
Assigned to SYMPHONIC AUDIO TECHNOLOGIES CORP. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SELIG, AARON ALEXANDER; SRINIVASAN, VARUN
Publication of US20150078575A1
Priority to US 15/132,785 (US 2016/0234589 A1)
Publication of US9344793B2
Application granted
Assigned to Mimi Hearing Technologies GmbH. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SYMPHONIC AUDIO TECHNOLOGIES CORP.
Legal status: Active (adjusted expiration)

Classifications

    • H04R 1/1091: Earpieces, earphones; details not provided for in groups H04R1/1008 to H04R1/1083
    • H04R 1/1025: Earpieces, earphones; accumulators or arrangements for charging
    • H04R 1/1041: Earpieces, earphones; mechanical or electronic switches, or control elements
    • H04R 1/1083: Earpieces, earphones; reduction of ambient noise
    • H04M 1/6033: Substation equipment including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M 1/6058: Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone
    • H04R 2201/107: Monophonic and stereophonic headphones with microphone for two-way hands-free communication
    • H04R 2410/05: Noise reduction with a separate noise microphone
    • H04R 2420/07: Applications of wireless loudspeakers or wireless microphones
    • H04R 2430/01: Aspects of volume control, not necessarily automatic, in sound systems
    • H04R 2460/01: Hearing devices using active noise cancellation
    • H04R 2499/11: Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDAs, cameras
    • H04R 25/50: Customised settings for obtaining desired overall acoustical characteristics


Abstract

An apparatus for augmenting audio capabilities of a mobile computing device, including: a housing transiently encasing a portion of the mobile computing device; a communication module arranged within the housing, downloading a hearing profile of a user from the mobile computing device, and receiving an output audio signal from the mobile computing device; a processor arranged within the housing and processing the output audio signal according to the hearing profile; a secondary audio output module arranged within the housing adjacent an integrated loudspeaker within the mobile computing device and outputting a processed form of the output audio signal in place of output of the output audio signal by the integrated loudspeaker; a secondary microphone arranged within the housing adjacent an integrated microphone within the mobile computing device, the communication module transmitting an input audio signal from the secondary microphone to the mobile computing device.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 61/867,464, filed on 19 Aug. 2013, and of U.S. Provisional Application No. 61/880,405, filed on 20 Sep. 2013, both of which are incorporated in their entireties by this reference.
  • This application is related to U.S. patent application Ser. No. 14/178,034, filed 11 Feb. 2014, U.S. Provisional Application No. 61/763,163, filed on 11 Feb. 2013, U.S. Provisional Application No. 61/831,796, filed on 6 Jun. 2013, U.S. Provisional Application No. 61/867,436, filed on 19 Aug. 2013, and U.S. Provisional Application No. 61/880,367, filed on 20 Sep. 2013, all of which are incorporated in their entireties by this reference.
  • This application is further related to U.S. patent application Ser. No. 14/178,068, filed 11 Feb. 2014, U.S. Provisional Application No. 61/763,182, filed on 11 Feb. 2013, U.S. Provisional Application No. 61/867,449, filed on 19 Aug. 2013, and U.S. Provisional Application No. 61/880,377, filed on 20 Sep. 2013, all of which are incorporated in their entireties by this reference.
  • This application is further related to U.S. patent application Ser. No. 14/206,499, filed 12 Mar. 2014, U.S. Provisional Application No. 61/778,188, filed on 12 Mar. 2013, U.S. Provisional Application No. 61/831,798, filed on 6 Jun. 2013, U.S. Provisional Application No. 61/867,480, filed on 19 Aug. 2013, and U.S. Provisional Application No. 61/880,395, filed on 20 Sep. 2013, all of which are incorporated in their entireties by this reference.
  • TECHNICAL FIELD
  • This invention relates generally to the field of personal audio devices, and more specifically to a new and useful audio apparatus and method in the field of personal audio devices.
  • BRIEF DESCRIPTION OF THE FIGURES
  • FIG. 1 is a schematic representation of a smart headphone device in accordance with the invention;
  • FIG. 2 is a schematic representation of one variation of the smart headphone device;
  • FIG. 3 is a schematic representation of one variation of the smart headphone device;
  • FIG. 4 is a schematic representation of an apparatus in accordance with the invention;
  • FIG. 5 is a schematic representation of one variation of the apparatus;
  • FIG. 6 is a flowchart representation of a method in accordance with the invention; and
  • FIG. 7 is a schematic representation of one variation of the apparatus.
  • DESCRIPTION OF THE EMBODIMENTS
  • The following description of the embodiments of the invention is not intended to limit the invention to these embodiments, but rather to enable any person skilled in the art to make and use this invention.
  • 1. Smart Headphone Device
  • As shown in FIG. 1, a smart headphone device includes: a left speaker 112; a right speaker 110; a microphone 120; a jack 130 including a first input pin, a second input pin, and an output pin; a processor 140 coupled to the first input pin and to the second input pin and configured to receive an audio output profile from a computing device via the first input pin, to receive a stereo audio signal via the first input pin and the second input pin, and to modify the stereo audio signal according to the audio output profile; a cable 150 configured to communicate a left component of the stereo audio signal to the left speaker 112 and to communicate a right component of the stereo audio signal to the right speaker 110; and a battery 160 configured to power the processor 140.
  • Generally, the smart headphone device functions to apply an audio output profile to an audio signal to augment a user's hearing experience. For example, the smart headphone device can apply the audio output profile to an audio signal to compensate for a user's hearing ability, as defined by a user hearing profile and determined through a hearing test, as described in U.S. patent application Ser. No. 14/178,034, U.S. patent application Ser. No. 14/178,067, and U.S. patent application Ser. No. 14/206,499. The smart headphone device can also apply the audio output profile to the audio signal to compensate for ambient conditions (e.g., noise), local environmental conditions (e.g., sound response profile of a room occupied by the smart headphone device), a user action or activity (e.g., exercising), an output response of the smart headphone device, etc.
  • Consumers play back audio files and communicate audio data through a vast number of computing devices, such as smartphones, MP3 and other personal music players, tablets, laptop and desktop computers, personal data assistants, car stereo systems, and phone conferencing systems, in various models from the same and different manufacturers, and a comparably vast number of third-party native applications execute on these devices and control audio output. Applying a user's hearing profile, in the form of an audio output profile, within multiple native applications across multiple different devices can therefore be cumbersome for a user attempting to synchronize audio output profiles across devices. Such functionality, implemented within each discrete device or within each native application, may also require developers to integrate cross-platform and inter-app communication to maintain updated audio output profiles and may require hardware manufacturers to integrate wired or wireless systems and communication protocols to support synchronization and/or communication of audio output profiles across various devices. However, the smart headphone device may bypass such hurdles by defining a singular computing-device-agnostic personal audio device that receives an audio output profile, implements the audio output profile to process an audio signal, and outputs the processed audio signal through one or more speakers for a user. The singular smart headphone device can therefore be implemented across various computing devices to output audio of various types from various native applications across various platforms.
  • The smart headphone device includes a left speaker 112 and a right speaker 110. Each speaker can be incorporated into an earbud, such as the left speaker 112 in a left-ear-specific earbud and the right speaker 110 in a right-ear-specific earbud. Alternatively, the left and right speakers can be incorporated in over-ear housings. Yet alternatively, the smart headphone device can include a singular speaker, such as implemented in a Bluetooth headset. However, the smart headphone device can include the left speaker 112 and/or the right speaker 110 of any other type and incorporated in any other way.
  • The microphone 120 of the smart headphone device is configured to output an audio signal corresponding to local sound, such as the voice of a user. The microphone 120 can be arranged on the smart headphone device such that the microphone 120 is substantially near the user's mouth when the smart headphone device is worn by the user (e.g., the left and right earbuds containing the left and right speakers are inserted into the user's ears). Alternatively, the microphone 120 can be arranged on one (e.g., the left) earbud of the smart headphone device. In this implementation, the smart headphone device can include a second microphone arranged on a second (e.g., the right) earbud of the smart headphone device. However, the microphone 120 can be arranged in any other way and can be configured to output any other suitable signal.
  • The jack 130 of the smart headphone device includes a first input pin, a second input pin, and an output pin. The jack 130 can be in the form of a 3.5 mm male stereo plug or standard (i.e., ubiquitous) male headphone plug. The first input pin can correspond to a left audio channel, the second input pin can correspond to a right audio channel, the output pin can correspond to a microphone line-out channel, and the jack 130 can include a fourth pin that corresponds to a ground. In this configuration, a stereo audio signal can be transmitted into the smart headphone device via the first and second input pins, and digital data can be transmitted into the smart headphone device over the stereo audio signal, such as in the form of high-frequency serial blips discernible by the processor 140 but difficult for a human to detect. The jack 130 can alternatively take the form of a digital data plug, such as a micro- or mini-USB plug. In this implementation, the stereo audio signal (and a microphone signal) can be transmitted in analog or digital format from a connected device (e.g., a computing device) to the smart headphone device. However, the jack 130 can be of any other form and connect the smart headphone device to an external computing device or audio device in any other suitable way.
  • The processor 140 of the smart headphone device is coupled to the first input pin and to the second input pin of the jack 130 and is configured to receive an audio output profile from a computing device via the first input pin, to receive a stereo audio signal via the first input pin and the second input pin, and to modify the stereo audio signal according to the audio output profile. Generally, the processor 140 functions to modify an audio input signal (e.g., a stereo audio signal) according to the audio output profile to accommodate a hearing ability, a hearing need, and/or a hearing preference of the user and/or to adjust the audio output profile to accommodate the user's current ambient environment. For example, the processor 140 can include a signal processing unit and an audio driver, wherein the signal processing unit receives the audio input signal and manipulates the audio input signal according to the audio output profile, and wherein the audio driver outputs a (relatively) higher-power signal capable of driving the left speaker 112 and/or the right speaker 110.
  • The processor 140 can receive the audio output profile from a connected device, such as over a wired or wireless connection. For example, the jack 130 can be plugged into a corresponding female jack within a smartphone (or tablet, laptop, desktop computer, etc.), and the smartphone can transmit the audio output profile to the processor 140 as a series of timed analog blips (e.g., representing digital bits, or 1s and 0s) over the first and/or second input pins (e.g., according to a frequency-shift keying technique), wherein the processor 140 decodes the blips into the audio output profile. The smartphone can transmit, and the processor 140 can receive and decode, such analog blips while an audio signal is transmitted over the first and second input pins, or the smartphone can transmit the analog blips independently of an audio signal, such as prior to beginning playback of a song track into the smart headphone device. Alternatively, the smart headphone device can include a wireless communication module (e.g., wireless transceiver) configured to sync with the smartphone (or other computing device), to receive the audio output profile from the smartphone, and to communicate the received audio output profile to the processor 140 for subsequent implementation. For example, the wireless communication module can communicate with the smartphone over Bluetooth, Wi-Fi, or another radio communication protocol.
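The analog-blip scheme above, digital bits keyed onto the audio lines, can be sketched as simple frequency-shift keying. The tone frequencies, bit rate, and amplitude below are illustrative assumptions, not values specified in the patent:

```python
import math

# Hypothetical parameters: two tones near the top of the audible band encode
# bits 0 and 1 on one audio channel at 100 bit/s.
SAMPLE_RATE = 44100
F0, F1 = 18000.0, 19000.0    # tone frequencies for bit 0 / bit 1, in Hz
SAMPLES_PER_BIT = 441        # 10 ms per bit

def encode_bits(bits):
    """Encode a bit sequence as an FSK waveform (list of float samples)."""
    samples = []
    for bit in bits:
        f = F1 if bit else F0
        for n in range(SAMPLES_PER_BIT):
            samples.append(0.25 * math.sin(2 * math.pi * f * n / SAMPLE_RATE))
    return samples

def goertzel_power(samples, freq):
    """Signal power at `freq` via the Goertzel algorithm."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s_prev2, s_prev = s_prev, x + coeff * s_prev - s_prev2
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def decode_bits(samples, n_bits):
    """Recover bits by comparing the two tone powers in each bit window."""
    bits = []
    for i in range(n_bits):
        window = samples[i * SAMPLES_PER_BIT:(i + 1) * SAMPLES_PER_BIT]
        bits.append(1 if goertzel_power(window, F1) > goertzel_power(window, F0) else 0)
    return bits

payload = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode_bits(encode_bits(payload), len(payload)) == payload
```

A practical link would add a preamble and a checksum so the processor 140 can detect where a profile transmission starts and whether it arrived intact; both are omitted here.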
  • The processor 140 can also store the audio output profile such that the smart headphone device can be used across multiple computing devices without communicating with the other computing devices to receive additional audio output profiles. For example, the wireless communication module can sync the smart headphone device with a master computing device (e.g., the user's smartphone) and receive the audio output profile from the master computing device, and the processor 140 can apply the audio output profile received from the master computing device to audio signals received from various other audio playback devices connected to the smart headphone device. The processor 140 can additionally or alternatively dynamically apply the audio output profile to an audio signal (e.g., a prerecorded song) based on a quality of the audio signal, such as based on bit rate, metadata (e.g., artist, genre), environmental conditions (e.g., ambient noise, headphone profile), etc.
  • As described in U.S. patent application Ser. No. 14/178,034, U.S. patent application Ser. No. 14/178,067, and U.S. patent application Ser. No. 14/206,499, the audio output profile can correspond to the user's hearing profile such that, when the processor 140 applies the audio output profile to the audio input signal, the processed audio signal accommodates the user's hearing ability, such as by boosting frequencies that the user does not hear well and attenuating frequencies for which the user is particularly sensitive. The audio output profile can also include a composite of one or more of the user's hearing profile, an output response of the connected device, an output response of the smart headphone device, an audio playback preference of the user, a sound response of a room occupied by the user, etc., as described below.
  • The processor 140 can apply the audio output profile to modify an equalizer (EQ) setting for one or both channels of the audio signal, to independently or collectively increase or decrease the volume of each channel of the audio signal, to apply a sound effect (e.g., reverb, chorusing, distortion, etc.) to one or more channels of the audio input signal, or to modify the audio input signal in any other way according to the audio output profile. Audio output adjustments defined by the audio output profile can be static, that is, constant through application of the audio output profile over time.
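One way a processor might apply the per-band gains of an audio output profile is sketched below with an FFT filter bank. The band edges and dB values are hypothetical, and a real-time implementation would more likely use cascaded biquad EQ filters per audio block rather than a whole-signal FFT:

```python
import numpy as np

# Illustrative audio output profile: per-band gains in dB (assumed values,
# e.g. boosting high frequencies the user hears poorly).
PROFILE = {(0, 250): 0.0, (250, 2000): -3.0, (2000, 8000): 6.0, (8000, 22050): 3.0}

def apply_profile(signal, sample_rate, profile):
    """Apply per-band dB gains to a mono signal via an FFT filter bank."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    for (lo, hi), gain_db in profile.items():
        mask = (freqs >= lo) & (freqs < hi)
        spectrum[mask] *= 10 ** (gain_db / 20.0)  # dB -> linear amplitude
    return np.fft.irfft(spectrum, n=len(signal))

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 4000 * t)   # 4 kHz tone falls in the +6 dB band
boosted = apply_profile(tone, sr, PROFILE)
print(round(float(np.abs(boosted).max() / np.abs(tone).max()), 2))  # 2.0 (+6 dB is ~x1.995)
```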
  • Alternatively, audio output adjustments defined by the audio output profile can be dynamic. In this implementation, the processor 140 can interface with one or more sensors within the smart headphone device, with one or more sensors within the connected device, or with the connected device generally to detect and respond to a changing ambient or local environmental condition and/or a changing user action, activity, condition, etc. In one example, the processor 140 can cooperate with the microphone 120 to collect ambient sound and to estimate a local environment based on the ambient sound. In this example, the processor 140 can predict a number or activity of people proximal the user or a sound response of the room currently occupied by the user, and the processor 140 can plug such predicted or estimated information into the audio output profile to adjust audio output parameters (e.g., volume, EQ settings, effects) accordingly, thereby accommodating the detected changing environmental condition for the user such that the user's perceived listening experience is substantially unchanged despite the changing environmental condition. Similarly, the processor 140 can cooperate with the microphone 120 to determine that the user has walked from a noisy room into a quiet room and decrease a volume of the audio output accordingly. In another example, the processor 140 can interface with an accelerometer and/or gyroscope to predict a change in the user's current action or activity and then modify the audio output profile accordingly. In this example, the processor 140 can correlate an accelerometer and/or gyroscope output at a first time with walking and an accelerometer and/or gyroscope output at a second time with running and then increase volume and bass output parameters of the audio output profile in response to the user's detected transition from walking to running.
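The noisy-room-to-quiet-room example can be sketched as a mapping from microphone RMS level to a playback volume offset. The calibration points and maximum boost below are assumptions; real values would depend on the microphone and speaker hardware:

```python
import math

# Assumed calibration: -60 dBFS ambient = quiet room (no boost),
# -20 dBFS ambient = noisy room (full boost).
QUIET_DBFS, NOISY_DBFS = -60.0, -20.0

def ambient_dbfs(mic_samples):
    """RMS level of a microphone buffer, in dB relative to full scale."""
    rms = math.sqrt(sum(x * x for x in mic_samples) / len(mic_samples))
    return 20 * math.log10(max(rms, 1e-9))

def volume_offset_db(level_dbfs, max_boost_db=12.0):
    """Map ambient level linearly onto a 0..max_boost_db volume offset."""
    frac = (level_dbfs - QUIET_DBFS) / (NOISY_DBFS - QUIET_DBFS)
    return max_boost_db * min(max(frac, 0.0), 1.0)

quiet_room = [0.001 * math.sin(0.1 * n) for n in range(1000)]
noisy_room = [0.1 * math.sin(0.1 * n) for n in range(1000)]
print(volume_offset_db(ambient_dbfs(quiet_room)))            # 0.0 (no boost)
print(round(volume_offset_db(ambient_dbfs(noisy_room)), 1))  # ~11.1 dB boost
```

In practice the processor would smooth the measured level over several seconds so the volume does not chase momentary sounds.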
  • In this implementation, the processor 140 can additionally or alternatively receive user or environment data from the connected device, such as over a wired or wireless connection, as described above. For example, the processor 140 can receive a GPS location, a user action or activity determined from an accelerometer or gyroscope output, a calendar event, etc. from a connected smartphone and modify parameters of the audio output profile accordingly. Alternatively, the connected device can update the audio output profile in response to data collected by the connected device, and the processor 140 can receive the updated audio output profile and implement the updated audio output profile accordingly. For example, the processor 140 can transition from a previous audio output profile to a new audio output profile once the new audio output profile download completes, such as by transitioning from parameter settings of the old audio output profile to parameter settings of the new audio output profile over a period of ten seconds. However, the processor 140 can function in any other way to dynamically adjust parameters of the audio output profile or to implement updated audio output profiles over time.
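The ten-second transition between profiles can be sketched as a per-parameter linear cross-fade evaluated on each processing block. The parameter names and values here are illustrative only; the patent does not enumerate them:

```python
# Hypothetical old and new profile parameters.
OLD = {"volume_db": 0.0, "bass_db": 2.0, "treble_db": -1.0}
NEW = {"volume_db": -6.0, "bass_db": 6.0, "treble_db": 0.0}

def blended_profile(old, new, elapsed_s, fade_s=10.0):
    """Return the profile to apply `elapsed_s` seconds into the cross-fade."""
    t = min(max(elapsed_s / fade_s, 0.0), 1.0)  # 0 at start, 1 when done
    return {k: (1 - t) * old[k] + t * new[k] for k in old}

print(blended_profile(OLD, NEW, 5.0))
# halfway: {'volume_db': -3.0, 'bass_db': 4.0, 'treble_db': -0.5}
```

Interpolating the parameters, rather than switching them at once, avoids an audible jump in volume or tone when a new profile arrives.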
  • The processor 140 can also interface with the microphone 120 to implement noise cancellation. However, the processor 140 can manipulate or modify an audio input signal in any other way to improve, augment, or control the user's listening experience.
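The patent only names noise cancellation here; one conventional way to realize it is an LMS adaptive filter that subtracts an FIR estimate of the noise, derived from a reference microphone signal, out of the primary signal. The filter length and step size below are arbitrary choices, not values from the patent:

```python
import math
import random

def lms_cancel(primary, reference, taps=16, mu=0.02):
    """LMS adaptive noise canceller: returns the cleaned primary signal."""
    w = [0.0] * taps
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))  # current noise estimate
        e = primary[n] - y                        # cleaned output sample
        for k in range(taps):
            w[k] += 2 * mu * e * x[k]             # LMS weight update
        out.append(e)
    return out

random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(4000)]
speech = [0.3 * math.sin(2 * math.pi * 440 * n / 8000) for n in range(4000)]
mic = [s + 0.8 * noise[n] for n, s in enumerate(speech)]  # speech + noise
clean = lms_cancel(mic, noise)

# After convergence, the residual error relative to the clean speech is far
# smaller than the injected noise.
err_before = sum((mic[n] - speech[n]) ** 2 for n in range(2000, 4000))
err_after = sum((clean[n] - speech[n]) ** 2 for n in range(2000, 4000))
print(err_after < 0.1 * err_before)  # True
```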
  • The cable 150 of the smart headphone device is configured to communicate a left component of the stereo audio signal to the left speaker 112 and to communicate a right component of the stereo audio signal to the right speaker 110. Generally, the cable 150 connects a right output channel of the processor 140 (e.g., a right audio driver) to the right speaker 110 and a left output channel of the processor 140 (e.g., a left audio driver) to the left speaker 112. The cable 150 can also define a junction that splits between the speakers and the processor 140 to feed discrete channels of the processed audio signal to the left and right speakers.
  • The battery 160 of the smart headphone device is configured to power the processor 140. The battery 160 can be a rechargeable battery, such as a lithium-ion or a nickel-cadmium rechargeable battery.
  • In one implementation, the battery 160 is recharged through the jack 130, wherein a negative terminal of the battery 160 is connected to the ground pin of the jack 130, and wherein the positive terminal of the battery 160 is connected to one or both of the input pins of the jack 130. In one example, a native charging application executes on a connected device (e.g., a smartphone), wherein the native charging application handles distribution of power from the connected device into the smart headphone device via the jack 130 to charge the battery 160. In this example, when the user selects the native charging application or selects a "charge" feature within the native charging application, the native charging application can check that the smart headphone device is connected, such as by transmitting a series of high-frequency pulses over the first and/or second input pin and receiving a confirming signal from the processor 140 over the output pin in the jack 130. The native charging application can then implement a similar technique to transmit a command to the processor 140 to disconnect or turn off an audio output to the left and right speakers and to connect the first and second input pins to a Vcc input of a battery charging circuit 162 or directly to the positive terminal of the battery 160. For example, the processor 140 can toggle a transistor arranged between the positive terminal of the battery 160 and an audio output pin of the left audio driver within the processor 140. The native charging application can then output a sinusoidal audio signal at a peak power level (or peak long-duration power level) into the first and second input pins within the jack 130.
A rectifier within the smart headphone device can then convert the sinusoidal audio signal into a DC signal, a boost circuit within the smart headphone device can boost the low-voltage DC signal into a higher-voltage signal to feed into the battery charging circuit 162, and the battery charging circuit 162 can monitor voltage and/or current into the battery 160 during the charge cycle. Alternatively, the processor 140 can monitor battery voltage and transmit battery voltage data to the connected device (e.g., over the output pin), and the native charging application can adjust the frequency and/or amplitude of the output signal to complete a battery voltage-dependent charging cycle.
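Charging a battery from a headphone-jack audio signal is power-limited. A back-of-the-envelope estimate follows; every figure is an assumption, since actual phone output levels, rectifier drops, and converter efficiencies vary widely between designs:

```python
# All figures assumed for illustration only.
V_PEAK = 1.0          # V, peak sinusoidal output per channel at full volume
LOAD_OHMS = 8.0       # effective resistive load seen by each channel
CHANNELS = 2          # both input pins driven, as described above
RECT_BOOST_EFF = 0.6  # combined rectifier + boost-converter efficiency
BATTERY_MWH = 100.0   # small earbud-class battery capacity

# Average power of a sine into a resistive load is V_peak^2 / (2R) per channel.
p_in_mw = CHANNELS * (V_PEAK ** 2 / (2 * LOAD_OHMS)) * 1000
p_batt_mw = p_in_mw * RECT_BOOST_EFF
hours = BATTERY_MWH / p_batt_mw
print(f"{p_in_mw:.0f} mW drawn, {p_batt_mw:.0f} mW into battery, ~{hours:.1f} h")
# 125 mW drawn, 75 mW into battery, ~1.3 h
```

Under these assumptions a full recharge takes on the order of an hour, which is why the native charging application drives the pins at peak long-duration power and why the speakers must be disconnected during the charge cycle.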
  • In another implementation, the battery 160 is recharged through a USB adapter. For example, the smart headphone device can be part of a kit including a USB charging adapter. In this example, the USB charging adapter can include a female audio jack and a male USB jack, the female audio jack configured to receive the jack 130 of the smart headphone device, the male USB jack configured to engage a computer, wall adapter, or other female USB jack to source power to charge the battery 160 within the smart headphone device. When the user inserts the jack 130 into the USB charging adapter and inserts the USB charging adapter into a computer, the USB charging adapter can transmit a request to the computer, through the male USB jack, to source higher current. The USB charging adapter can also transmit a request to the processor 140 to turn off the speakers and connect the battery 160 to the first and/or second input pins, such as described above, to transition the smart headphone device from an audio playback mode to a charging mode. Alternatively, the smart headphone device can include a hard button or other input region 180 selectable by the user to toggle between an audio playback mode and a charging mode. Once the smart headphone device has entered the charging mode and is connected to the USB charging adapter, a battery charging circuit 162 within the USB charging adapter can monitor the voltage and current of a power signal fed into the jack 130 to charge the battery 160, or a battery charging circuit 162 within the smart headphone device can condition a power signal from the USB charging adapter to achieve a proper battery recharge cycle.
  • In a similar implementation, the smart headphone device can include an additional port, such as a female mini-USB or micro-USB port configured to receive a male jack. The additional port can be connected to the battery 160 or to a battery charging circuit 162 and thus feed power from a connected device or power adapter into the smart headphone device to charge the battery 160.
  • However, the smart headphone device can include any other component or feature and interface with any one or more other devices or peripherals to recharge the battery 160. Alternatively, the smart headphone device can exclude a battery, and components within the smart headphone device can be powered directly by the connected mobile computing device.
  • The smart headphone device can also include a housing 170 configured to contain one or more components of the smart headphone device. In one implementation, the battery 160, the processor 140, and the microphone 120 are arranged in the same, singular housing. In this implementation, the cable 150—of some length—extends out of the housing 170 and connects to the left and right speakers. A second cable also extends out of the housing 170 and connects to the jack 130, as shown in FIG. 1. In this implementation, the length of the cable 150 can be such that, when the left and right speakers are worn in the user's ears, the housing 170 rests near the user's sternum and/or just below (e.g., within three inches of) the user's chin.
  • In another implementation, the battery 160, the processor 140, and the jack 130 are arranged within or on the same, singular housing. In this implementation, the cable 150 extends from the housing 170 and connects to the microphone 120, the left speaker 112, and the right speaker 110. For example, the microphone 120 can be arranged in a second housing arranged on the cable 150 between the housing 170 and the left speaker 112. In this implementation, when the smart headphone device is connected to a computing device (e.g., a smartphone), the housing 170 can rest against an exterior surface of the computing device, as shown in FIG. 2.
  • In yet another implementation, the battery 160 and processor are arranged in a first housing, the microphone 120 is arranged in a second housing, and the jack 130 is coupled to the end of a second cable. However, the processor 140, the cable 150, the jack 130, the microphone 120, and/or the battery 160, etc. can be arranged within or without the housing 170, a second housing, etc. in any other suitable configuration.
  • As shown in FIG. 3, one variation of the smart headphone device excludes the left and right speakers and instead includes a female audio jack. In this variation, the processor 140, the jack 130, and the battery 160 can be arranged within a housing defining a discrete device that “sits” between third party headphones and a computing device (e.g., smartphone) and functions to process an audio signal received over the first and second input pins of the jack 130 and to output the processed audio signal through the female audio jack. In this variation, the smart headphone device can function as a “black box” external audio processor that is both computing device- and audio output device-agnostic, enabling a user to plug the smart headphone device into any computing device or audio playback device and enabling the user to plug any third-party headphone, speaker, etc. into the smart headphone device.
  • In this variation, the processor 140 can implement functionalities described above to modify an audio signal according to the audio output profile. The processor 140 can also modify or adjust (parameters of) the audio output profile based on the type of third-party headphone and/or computing device connected to the smart headphone device. For example, the processor 140 can automatically detect the connected third-party headphones (or other audio output device, as described in U.S. patent application Ser. No. 14/178,034) and adjust the audio output profile accordingly. Alternatively, the user can open a native audio output profile application (or access a native audio output profile website through a web browser) executing on the computing device and select the make and/or model of the connected headphone from a dropdown menu, and the native audio output profile application can adjust the audio output profile according to a sound response associated with the selected audio output device and push the updated audio output profile to the processor 140, such as over the first and/or second input pins of the jack 130.
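The adjustment of the audio output profile to a detected or user-selected headphone can be sketched as a lookup-and-merge step. The makes, models, and gain values below are hypothetical placeholders, not data from the disclosure:

```python
# Hypothetical sound-response table keyed by (make, model); entries and
# gain values are illustrative only.
SOUND_RESPONSES = {
    ("acme", "buds-1"): {"bass_db": +3.0, "treble_db": -1.5},
    ("acme", "over-ear-2"): {"bass_db": -2.0, "treble_db": +1.0},
}

def adjust_output_profile(base_profile, make, model):
    """Merge a headphone-specific sound response into the audio output profile.

    Returns a new profile dict; an unrecognized headphone leaves the
    profile unchanged.
    """
    response = SOUND_RESPONSES.get((make.lower(), model.lower()))
    if response is None:
        return dict(base_profile)               # unknown device: no adjustment
    adjusted = dict(base_profile)
    for band, correction in response.items():
        adjusted[band] = adjusted.get(band, 0.0) + correction
    return adjusted
```

The native audio output profile application would then push the adjusted profile to the processor 140 over the input pins of the jack 130.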
  • In this variation, the smart headphone device can also incorporate an internal microphone. In one implementation, the processor 140 defaults to passing an output signal from the internal microphone through the output pin into the computing device. However, the processor 140 can test for an integrated microphone in a connected third-party headphone and, if an integrated microphone is detected, route an output signal from the microphone 120 within the headphone to the output pin of the jack 130—rather than the output signal from the internal microphone. Alternatively, if a third-party headphone with an integrated microphone is detected, the processor 140 can combine audio signals from the internal microphone and from the integrated microphone into a single audio output signal. Similarly, the processor 140 can toggle an audio output into the connected computing device (e.g., a smartphone) through the output pin of the jack 130 between the internal microphone output signal and the integrated microphone output signal, and a native application executing on the computing device can combine the packets or “snippets” of received audio data into two discrete microphone channels. However, the smart headphone device can be of any other form, incorporate any other suitable component or device, and communicate data and/or audio signals with or between any other computing device and/or third-party headphone in any other suitable way.
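The described toggling between the internal and integrated microphone outputs, with a native application reassembling the received snippets into two discrete channels, can be sketched as a simple demultiplexer. The packet-tagging scheme is an assumption for illustration:

```python
def split_mic_snippets(packets):
    """Separate a stream of toggled microphone snippets into two channels.

    Each packet is a (source, samples) pair, where source is "internal" or
    "integrated"; this tagging scheme is assumed for illustration.
    Returns (internal_channel, integrated_channel) as flat sample lists.
    """
    internal, integrated = [], []
    for source, samples in packets:
        if source == "internal":
            internal.extend(samples)
        elif source == "integrated":
            integrated.extend(samples)
        else:
            raise ValueError(f"unknown snippet source: {source!r}")
    return internal, integrated
```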
  • In one variation of the smart headphone device, the processor 140 further processes a microphone output signal according to a voice profile of the user. In this variation, the processor 140 can receive a voice profile and apply the voice profile to the microphone 120 output signal through methods and processes similar to those described above for the audio output profile and audio signal(s). In one example, a native application executing on the user's smartphone collects audio signals through a microphone within the smartphone and/or a microphone in a connected device (e.g., in the smart headphone device or in a third-party headphone). In this example, the native application can identify and qualify the user's voice in the received audio signals and accordingly generate a voice profile for the user. The voice profile can define how the microphone 120 output signal is processed to augment the user's voice, such as to aid other persons in hearing and/or understanding the user during a phone call. For example, the voice profile can define EQ settings to boost some frequencies and attenuate others, a volume setting, and/or an effects setting to be applied to a microphone output signal, such as while the user is talking and/or while a phone call is underway on the smartphone. The native application can also implement machine learning and/or pattern recognition methods to “learn” the user's voice over time and can thus adjust the voice profile accordingly and transmit updated voice profiles to the smart headphone device when pertinent or convenient.
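Applying a voice profile's EQ and volume settings to a microphone output signal can be sketched as a per-band gain stage. Representing the signal as band levels, and the band names and profile keys, are simplifying assumptions for illustration:

```python
def apply_voice_profile(band_levels, voice_profile):
    """Apply per-band dB gains and an overall volume gain to a mic signal.

    band_levels maps frequency bands (e.g. "low", "mid", "high") to linear
    amplitudes; voice_profile maps the same bands to dB adjustments, plus an
    optional "volume_db" applied to every band. All names are illustrative.
    """
    volume_db = voice_profile.get("volume_db", 0.0)
    processed = {}
    for band, level in band_levels.items():
        gain_db = voice_profile.get(band, 0.0) + volume_db
        processed[band] = level * 10 ** (gain_db / 20.0)   # dB -> linear gain
    return processed
```

A profile such as `{"low": -2.0, "mid": 4.0}` would thus attenuate low frequencies and boost the mid band carrying most vocal energy.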
  • The native application can also apply voice profiles and corresponding ambient noise data, environment data, user activity data, etc. of other users to the user's voice profile to improve a static or dynamic component of the user's voice profile, such as similar to the audio output profile described above.
  • In this variation, the processor 140 can process output signals from the microphone 120 according to the voice profile and then pass the processed microphone signal to the computing device via the output pin of the jack 130, such as during a phone call. For example, a smartphone connected to the smart headphone device can trigger the processor 140 to apply the voice profile to the microphone 120 output signal by transmitting a process command over the first and/or second output pin of the jack 130 when a phone call is received or placed on the smartphone, and the smartphone can trigger the processor 140 to cancel application of the voice profile to the microphone 120 output signal by transmitting a process cancel command over the first and/or second output pin of the jack 130 when the phone call ends. Alternatively, the smart headphone device can include a hard switch or other input mechanism to enable the user to manually turn microphone processing (or a microphone output) on and off.
  • The voice profile can define static adjustment to the output signal of the microphone 120, or the voice profile can be dynamic, such as based on a detected or known sound response of a room, a location, a time of day, a user action or activity, etc., and the processor 140 can thus apply dynamic changes to the voice profile similar to the audio output profile described above. For example, the processor 140 can apply a first EQ setting to the microphone 120 output signal when the user is on the phone and speaking into the microphone 120 of the device, and the processor 140 can apply a second EQ setting to the microphone 120 output when the user is giving a talk or lecture and using the smart headphone device and the connected computing device to broadcast his voice to a public address system in a lecture hall. In this example, the second EQ setting can also accommodate a known or detected sound response profile of the lecture hall, and the user can access a native application executing on the computing device to manually switch between a phone call setting and a lecture setting, to manually select a type of lecture hall (e.g., 500+seats, <100 seats, full, empty, long and narrow, wide and shallow, balcony, etc.), or enter other details or make other manual adjustments to the voice profile. Alternatively, the native application can predict the type of lecture hall and modify the voice profile accordingly or access a setting saved by the user, such as if the user commonly lectures in a particular lecture hall. Therefore, the native application can modify and/or implement a voice profile of the user based on the user's location, an environmental condition, and/or a demographic of the user, etc. The processor 140 can alternatively implement similar methods within the smart headphone device directly, such as in response to receiving location data from the connected computing device.
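The context-dependent selection of a voice profile EQ setting described above, including a hall-dependent correction for the lecture case, might be sketched as follows; all EQ values, context labels, and hall categories are illustrative assumptions:

```python
# Illustrative EQ settings per context; values are assumptions, not
# figures from the disclosure.
VOICE_EQ_SETTINGS = {
    "phone_call": {"low_db": -2.0, "mid_db": +4.0, "high_db": +1.0},
    "lecture": {"low_db": -4.0, "mid_db": +6.0, "high_db": +3.0},
}

def select_voice_eq(context, hall_type=None):
    """Pick an EQ setting for the voice profile based on the user's activity.

    For a lecture, a hypothetical hall-dependent correction compensates for
    the room's sound response (a larger hall gets extra mid-band boost).
    """
    setting = dict(VOICE_EQ_SETTINGS.get(context, {}))
    if context == "lecture" and hall_type == "500+ seats":
        setting["mid_db"] += 2.0                # extra boost for a large hall
    return setting
```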
  • The processor 140 can also implement the voice profile to identify the user's voice in a microphone output signal and attenuate the output signal when the user is speaking or selectively attenuate the user's voice within the microphone output signal when the user's voice and one or more other voices or sounds are present in the microphone 120 output signal. For example, the processor 140 can identify voices in the microphone 120 output signal, selectively attenuate the user's voice based on the user's voice profile, and boost portions of the output signal corresponding to the other identified voices in order to aid the user in hearing and comprehending conversation(s) around him without overwhelming the user with his own voice. In this example, the processor 140 can feed the modified microphone output signal directly back into the left and right speakers and/or pass the modified microphone output signal to the connected computing device.
  • Alternatively, the smart headphone device can collect an audio signal through one or more internal microphones and transmit the audio signal to the computing device. The computing device can then implement the voice profile and/or any other foregoing method or technique to process the audio signal and then transmit the processed audio signal back to the smart headphone device, wherein the smart headphone device then outputs the processed audio signal substantially in real-time.
  • However, the smart headphone device can function in any other way to modify an audio input signal and/or an audio output signal communicated over the jack 130 between the processor 140 and a connected computing device.
  • The foregoing systems and methods can be similarly implemented in other audio devices. In one example, the foregoing systems and methods are implemented in a single portable speaker with one or more speakers. In another example, the foregoing systems and methods are implemented in a set of speakers, such as a left and right speaker set or a surround-sound system with left, right, center, and rear speakers. The foregoing systems and methods can also be implemented in a smartphone, a laptop, a tablet, an automobile or other vehicle, or any other suitable device to process and/or output audio.
  • 2. Apparatus and Applications
  • As shown in FIG. 4, an apparatus 200 for augmenting audio capabilities of a mobile computing device is described herein. The mobile computing device can include an integrated loudspeaker proximal a first end of the mobile computing device, an integrated microphone proximal a second end of the mobile computing device opposite the first end, an integrated battery, and an integrated charging port electrically coupled to the battery. The apparatus 200 includes: a housing 210 transiently encasing a portion of the mobile computing device; a communication module 220 arranged within the housing 210, downloading a hearing profile of a user from the mobile computing device, and receiving an output audio signal from the mobile computing device; a processor 230 arranged within the housing 210 and processing the output audio signal according to the hearing profile; a secondary audio output module 240 arranged within the housing 210 adjacent the integrated loudspeaker and outputting a processed form of the output audio signal in place of output of the output audio signal by the integrated loudspeaker; a secondary microphone 250 arranged within the housing 210 adjacent the integrated microphone, the communication module 220 transmitting an input audio signal from the secondary microphone to the mobile computing device; and a secondary charging port 270 arranged within the housing 210, electrically coupled to the integrated charging port, and receiving a charging connector to recharge the mobile computing device through the integrated charging port.
  • Generally, the apparatus 200 functions as an aftermarket device (transiently) installable on a mobile computing device to augment audio input and output capabilities of the mobile computing device. In particular, the apparatus 200 removes (substantially all) audio output processes from a loudspeaker(s) within the mobile computing device to a loudspeaker(s) within the apparatus 200, removes (substantially all) audio input processes from a microphone(s) within the mobile computing device to a microphone(s) within the apparatus 200, and processes output audio signals locally on the apparatus 200 to compensate for the user's hearing needs and/or ambient noise conditions substantially in real-time before outputting a processed form of the audio output signals through an internal loudspeaker(s) when the apparatus 200 is installed on the mobile computing device. For example, the apparatus 200 can store a hearing profile (e.g., of a user associated with the mobile computing device connected to the apparatus 200) locally and apply the hearing profile to audio output signals received from the mobile computing device such that (substantially) all audio output from the apparatus 200 is customized for the user's personal hearing abilities and disabilities. The apparatus 200 can also store audio output profiles specific to different audio output types (e.g., telephone calls, music, a personal data assistant), applications executing on the mobile computing device (e.g., a native telephone call application, a native maps application, a native music application), and/or a type of peripheral audio device connected to the apparatus 200 (e.g., headphones, a home stereo, a car stereo), etc., and the apparatus 200 can selectively apply one or more such audio output profiles to an audio output signal from the mobile computing device to further compensate for the user's current listening conditions.
  • The apparatus 200 can similarly process input audio signals locally on the apparatus 200 to compensate for the user's voice and/or ambient noise conditions substantially in real-time before outputting a processed form of the input audio signals into the mobile computing device when the apparatus 200 is installed thereover. For example, the apparatus 200 can apply a voice profile of the user to an input audio signal collected through a microphone within the apparatus 200 during a telephone call supported on the mobile computing device before communicating the processed input audio signal into the mobile computing device in order to improve comprehension of the user's voice for an individual conversing with the user over the telephone call. Similarly, the apparatus 200 can modify the input audio signal during a telephone call to remove ambient noise (e.g., wind noise, traffic noise) from the input audio signal before communicating the processed input audio signal into the mobile computing device during a telephone call supported on the mobile computing device such that an individual on the telephone call with the user perceives that the user is in a quiet(er) space during the telephone call.
  • The apparatus 200 can be configured for (transient) installation on a smartphone, a mobile phone, a tablet, a personal data assistant (PDA), a laptop computer, a radio, a personal navigation device, a personal media player, a camera, a watch, or any other suitable type of mobile computing device. Because the mobile computing device may output an audio signal and/or receive an input audio signal while simultaneously executing one or more other processes, and because the mobile computing device may prioritize one native process over another—such as by prioritizing a native calendaring application over processing an output audio signal—the mobile computing device may fail to process an output audio signal according to the user's hearing profile and/or may fail to process an output audio signal according to the user's hearing profile with sufficient speed to output a processed form of the output audio signal (e.g., through an integrated loudspeaker or through a loudspeaker in a connected peripheral device) in real-time as perceived by the user. In particular, the apparatus 200 can handle audio processing locally to limit latency of a processed output audio signal provided to the user, such as to less than fifteen milliseconds after real-time when output through an internal loudspeaker within the apparatus 200. Therefore, once connected to (e.g., installed over) the mobile computing device, the apparatus 200 can remove audio processing from the mobile computing device and instead process audio locally on a dedicated processor within the apparatus 200, thereby substantially preserving customization of output audio signals (and input audio signals) specifically for the user and in real-time regardless of which processes are executed on the mobile computing device and in what order.
Similarly, the apparatus 200 can collect input audio signals—such as ambient noise or the user's voice—through a microphone installed locally in the apparatus 200 rather than retrieving a microphone signal from the mobile computing device to substantially limit a latency between detecting ambient noise proximal the apparatus 200 and processing the input audio signal to compensate for the ambient noise and/or to limit a latency between collecting the input audio signal and processing the input audio signal according to the user's voice profile.
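The sub-fifteen-millisecond latency target mentioned above implies a bound on how large an audio buffer the apparatus 200 can accumulate before processing. A minimal sketch of that budget calculation, assuming a hypothetical five-millisecond processing allowance:

```python
def max_buffer_samples(sample_rate_hz, latency_budget_ms=15.0, processing_ms=5.0):
    """Largest audio buffer that still meets the end-to-end latency target.

    A buffer of N samples adds N / sample_rate seconds of buffering delay;
    the remaining budget must cover processing time. The 5 ms processing
    allowance is an illustrative assumption, not a figure from the text.
    """
    buffering_budget_s = (latency_budget_ms - processing_ms) / 1000.0
    if buffering_budget_s <= 0:
        raise ValueError("processing time exceeds the latency budget")
    return int(sample_rate_hz * buffering_budget_s)
```

At a 44.1 kHz sample rate this yields buffers of at most a few hundred samples, which is why the apparatus processes audio on a dedicated local processor rather than deferring to the mobile computing device's scheduler.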
  • In one example application, the apparatus 200 includes a hard case for a smartphone and installs around a perimeter and back of the smartphone. In this example application, the apparatus 200 includes a male plug that engages a female receptacle within the smartphone to enable (digital and/or analog) communication between the smartphone and the apparatus 200, such as communication of the user's hearing profile to the apparatus 200. The apparatus 200 can additionally or alternatively include a wireless communication module (e.g., a Bluetooth module) over which the apparatus 200 communicates with the smartphone, such as to receive an output audio signal from the smartphone and to transmit an (processed form of an) input audio signal to the smartphone during a telephone call supported on the smartphone. In this example application, once the apparatus 200 is installed on the smartphone, all output audio signals from the mobile computing device (e.g., incoming telephone signals, music) can be directed to the apparatus 200 and played through one or more audio output modules (e.g., a loudspeaker) within the apparatus 200 rather than through a loudspeaker within the smartphone. Similarly, once the apparatus 200 is installed on the smartphone, all input audio signals (e.g., a microphone input) can be collected at the apparatus 200 (and processed) and communicated into the smartphone rather than being collected at the smartphone. Therefore, as in this example application, the apparatus 200 can function to handle (substantially) all audio processing immediately before output to a user and can handle (substantially) all audio collection during operation of the smartphone.
  • 2.2 Housing
  • The housing 210 of the apparatus 200 transiently encases a portion of the mobile computing device. Generally, the housing 210 defines a removable enclosure that receives a portion of the mobile computing device and contains various other elements of the apparatus 200. For example, the housing 210 can include a first portion 211 and a second portion 212, as shown in FIG. 5; wherein the first portion 211 of the housing 210 contains the communication module 220, the processor 230, and the secondary microphone (hereinafter “the microphone 250”); and wherein the second portion 212 of the housing 210 contains the secondary audio output module (hereinafter “the audio output module 240”) and transiently engages the first portion 211 to substantially encase the mobile computing device. Thus, in this example, the housing 210 can include a clamshell of a substantially rigid material(s) that encases the sides and back of the mobile computing device. The first portion 211 and the second portion 212 of the housing 210 can also transiently lock together to form an impact-resistant encasement for the mobile computing device. Alternatively, the housing 210 can be of a flexible and/or elastic material or any other suitable material of any other form.
  • 2.3 Communication Module
  • The communication module 220 of the apparatus 200 is arranged within the housing 210, downloads a hearing profile of a user from the mobile computing device, and receives an output audio signal from the mobile computing device. Generally, the communication module 220 functions to communicate data—in digital and/or analog format—between the apparatus 200 and the mobile computing device. In particular, the communication module 220 functions to collect the user's hearing profile and audio output-related data from the mobile computing device and to receive an (digital or analog) output audio signal from the mobile computing device; the processor 230 can then apply the user's hearing profile and the audio output-related data to the output audio signal before the output audio signal is broadcast—now in processed form—to the user, such as through the audio output module 240 or through a peripheral audio device coupled to the apparatus 200. For example, the communication module 220 can download or otherwise receive: hearing data of a user (e.g., collected during a hearing test completed by the user); a hearing profile of the user; audio output profiles corresponding to various types of audio, various locations, various connected peripheral audio devices, and/or various audio-related native applications executing on the mobile computing device, etc.; data corresponding to events occurring or detected at the mobile computing device (e.g., native applications opened, GPS or geofenced locations, volume commands, etc.); and/or any other data from the mobile computing device. In particular, the communication module 220 can download: a hearing profile of the user, as described in U.S. patent application Ser. No. 14/178,034; a sound profile corresponding to a particular audio type, as described in U.S. patent application Ser. No. 14/178,067; and/or location data or location-based audio output profiles, as described in U.S. patent application Ser. No. 
14/206,499, from the mobile computing device and store any of these data locally on the apparatus 200, such as in a discrete flash memory module arranged within the apparatus 200 or locally on the processor 230 within the apparatus 200.
  • In one implementation, the communication module 220 includes a male audio jack plug that engages a female audio jack receptacle integrated into the mobile computing device when the apparatus 200 is installed over the mobile computing device, and the communication module 220 receives both the output audio signal in analog form and the hearing profile of the user, etc. in digital form from the mobile computing device over the female audio jack, as described above. Thus, the communication module 220 can receive data in digital and analog form substantially simultaneously over an audio jack integrated into the mobile computing device. In this implementation, the communication module 220 can also communicate an analog microphone signal—in original or processed form—into the mobile computing device through a pin of the female audio jack receptacle.
  • In another implementation, the communication module 220 includes a male data port plug that engages a female data port receptacle integrated into the mobile computing device when the apparatus 200 is installed over the mobile computing device, as shown in FIG. 5, and the communication module 220 receives output audio signals, the user hearing profile, audio output profiles, and/or location data, etc. in digital form over one or more data pins in the female data port receptacle. Alternatively, the communication module 220 can receive the output audio signal from the mobile computing device in analog format via an analog pin in the female data port receptacle. In this implementation, the communication module 220 can also communicate an analog or digital microphone signal—in original or processed form—into the mobile computing device through a pin of the female data port receptacle.
  • In yet another implementation, the communication module 220 includes a wireless communication module that communicates with the mobile computing device via a wireless communication protocol, such as Bluetooth, as shown in FIGS. 4 and 5. In this implementation, once the apparatus 200 is installed over the mobile computing device and activated, the wireless communication module can pair with the mobile computing device. For example, the wireless communication module can automatically pair with the mobile computing device in response to a tapping event (e.g., a “double-tap”) on the apparatus 200-mobile computing device unit. Alternatively, the wireless communication module can pair with the mobile computing device once the apparatus 200 is installed thereover based on the strength of a signal from a corresponding wireless radio within the mobile computing device. As in these examples, the wireless communication module can thus remain paired with the mobile computing device for as long as the apparatus 200 is installed over the mobile computing device. Once paired with the mobile computing device, the wireless communication module can download the hearing profile, audio output profiles, and/or location data, etc. in digital format from the mobile computing device. In this implementation, the wireless communication module can also transmit a (raw or processed) input audio signal from the microphone 250 to the mobile computing device over the wireless communication protocol. Similarly, the wireless communication module can transmit the processed form of the output audio signal from the apparatus 200 to a connected peripheral audio device, such as a Bluetooth headset or a pair of Bluetooth-enabled headphones also paired to the apparatus 200. However, the communication module 220 can communicate with the mobile computing device in any other way and over any other wired or wireless communication protocol.
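The automatic pairing decision, triggered by a double-tap event or by the strength of the signal from the mobile computing device's radio, can be sketched as a simple predicate; the RSSI threshold and required tap count are illustrative assumptions:

```python
def should_auto_pair(rssi_dbm, tap_count, rssi_threshold_dbm=-40, taps_required=2):
    """Decide whether the wireless module should pair with the phone.

    Pairs on a double-tap gesture or when the phone's radio signal is strong
    enough to imply the apparatus is installed on it. Threshold values are
    illustrative assumptions, not specified in the disclosure.
    """
    tapped = tap_count >= taps_required          # e.g. a "double-tap" event
    installed = rssi_dbm >= rssi_threshold_dbm   # very strong signal: adjacent radio
    return tapped or installed
```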
  • The communication module 220 can also support any combination of wired and/or wireless communication protocols with the mobile computing device, such as in different operating modes of the apparatus 200 and/or of the mobile computing device. In one example, the communication module 220 includes a wireless receiver (e.g., a Bluetooth transceiver) and a wired receiver (e.g., a male data port plug), the wireless receiver receives a first output audio signal from the mobile computing device in a first mode, and the wired receiver receives a second output audio signal from the mobile computing device through an integrated data port in the mobile computing device in a second mode. In this example, the processor 230 can process the first output audio signal according to a first hearing profile of the user in the first mode, such as during a telephone call, and then process the second output audio signal according to a second hearing profile of the user in the second mode, such as during music playback through the mobile computing device. During the first mode in which a telephone call involving the mobile computing device is underway, the processor 230 can also process an input audio signal from the microphone 250 according to a voice profile of the user, as described above, and the communication module 220 can then wirelessly transmit the processed form of the input audio signal to the mobile computing device substantially in real-time while also receiving the first output audio signal from the mobile computing device.
Thus, the communication module 220 can include multiple communication subcomponents supporting multiple wired and/or wireless communication protocols, and the communication module 220 can select between the communication subcomponents to download and/or upload data between the apparatus 200 and the mobile computing device during various modes of the apparatus 200, such as modes specific to audio processing of incoming or outgoing audio data during telephone calls, specific to music playback, specific to intelligent personal assistant commands and responses, specific to voice navigation prompts, etc.
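The selection between communication subcomponents and hearing profiles across operating modes can be sketched as a dispatch table; the mode names, transports, and profile labels below are hypothetical:

```python
# Hypothetical mapping from operating mode to transport and hearing profile;
# the mode names and profile labels are illustrative assumptions.
MODE_CONFIG = {
    "telephone_call": {"transport": "wireless", "profile": "voice"},
    "music_playback": {"transport": "wired", "profile": "music"},
    "voice_navigation": {"transport": "wireless", "profile": "voice"},
}

def route_audio(mode):
    """Select the communication subcomponent and hearing profile for a mode.

    Returns (transport, profile); unknown modes fall back to a wired
    connection with a default profile.
    """
    config = MODE_CONFIG.get(mode, {"transport": "wired", "profile": "default"})
    return config["transport"], config["profile"]
```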
  • 2.4 Microphone
  • The microphone 250 is arranged within the housing 210 adjacent the integrated microphone. Generally, the microphone 250 is arranged within the apparatus 200 such that the microphone 250 is adjacent a microphone integrated into the mobile computing device when the housing 210 is installed thereover. The microphone 250 functions to replicate the functionality of the integrated microphone but routes input audio data to the processor 230 within the apparatus 200 to enable the processor 230 to detect an ambient noise condition from the input audio signal substantially in real-time and/or to enable the processor 230 to process the input audio signal to compensate for ambient noise before transmission into the computing device substantially in real-time. The microphone 250 can therefore output an (analog or digital) signal corresponding to local noise, such as ambient noise or the user's voice proximal the microphone 250, and enable the processor 230 to access this input audio signal.
  • The apparatus 200 can also include multiple microphones, such as a first microphone 250 and a second microphone arranged within the housing 210. The processor 230 can analyze outputs from the first and second microphones to distinguish ambient sounds originating near the user from ambient sounds originating further away from the user, determine the former to be relevant noise and the latter to be background noise, boost the relevant noise and attenuate the background noise in signals from the first and second microphones, and combine the processed signals from the first and second microphones before outputting the processed audio signal through the secondary audio output module substantially in real-time, thereby aiding the user in audibly discerning local relevant sounds from background noise. The microphone 250 can further be arranged near an end of the housing 210 to place the microphone 250 substantially proximal a user's mouth when the mobile computing device-apparatus assembly is in use as a cellular phone.
  • However, the apparatus 200 can include any other number of microphones arranged in any other way within the housing 210, such as two microphones supported within the housing 210 in a stereo configuration.
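The near/far separation sketched above can be approximated with a level-ratio heuristic: a source close to the user registers noticeably louder at one microphone than the other, while distant background noise arrives at both at roughly equal level. The following is a minimal sketch of that idea; the gain values and threshold are assumptions, not values from the disclosure.

```python
def rms(signal):
    """Root-mean-square level of a list of samples."""
    return (sum(s * s for s in signal) / len(signal)) ** 0.5

def combine_mics(mic1, mic2, near_gain=2.0, far_gain=0.5, ratio_threshold=1.5):
    """Boost the microphone nearer a relevant source, attenuate the other,
    and mix the two; illustrative parameters only."""
    r1, r2 = rms(mic1), rms(mic2)
    if r2 == 0 or r1 / r2 >= ratio_threshold:
        g1, g2 = near_gain, far_gain    # mic 1 hears the relevant source
    elif r1 == 0 or r2 / r1 >= ratio_threshold:
        g1, g2 = far_gain, near_gain    # mic 2 hears the relevant source
    else:
        g1 = g2 = 1.0                   # similar levels: diffuse background
    return [g1 * a + g2 * b for a, b in zip(mic1, mic2)]
```

A production design would compare levels per frequency band and over short time windows rather than over whole buffers, but the boost-relevant/attenuate-background structure is the same.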
  • 2.5 Processor
  • The processor 230 of the apparatus 200 is arranged within the housing 210 and processes the output audio signal according to the hearing profile. Generally, the processor 230 functions to process the output audio signal—received from the mobile computing device—according to user, device, and/or ambient parameters before outputting a processed form of the output audio signal to the user through an internal or external audio output module (e.g., loudspeaker). The processor 230 can similarly process an input audio signal—from the microphone 250—according to user, device, and/or ambient parameters before outputting a processed form of the input audio signal to the mobile computing device via the communication module 220. The processor 230 can further handle ambient noise detection, battery monitoring, hearing profile generation and/or selection, and/or audio output device detection and corresponding audio output profile selection, etc.
  • In one implementation, the processor 230 generates the user's hearing profile from hearing test results received from the mobile computing device over the communication module 220. For example, the communication module 220 can receive results of a hearing test completed by the user through a native hearing test application executed on the mobile computing device, and the processor 230 can manipulate these data to generate a hearing profile specific to the user, such as including actual and/or estimated abilities of the user to hear at discrete audible frequencies or over various audible frequency ranges, such as described in U.S. patent application Ser. No. 14/178,034. Alternatively, the processor 230 can collect a hearing profile of the user from the mobile computing device. For example, the hearing profile can be generated locally on the mobile computing device or generated remotely on a remote server and uploaded to the mobile computing device, and the communication module 220 can download the hearing profile from the mobile computing device and store the hearing profile locally in memory within the apparatus 200; the processor 230 can then access the hearing profile to process output audio signals subsequently received from the mobile computing device. The communication module 220 can also download new hearing profiles of the user as such new hearing profiles become available, such as after the user completes a hearing test on a native hearing test application executing on the mobile computing device (e.g., once per six-month period), such as when the user indicates (through the mobile computing device) that the current hearing profile stored on the apparatus 200 is no longer sufficient, or such as when a different user assumes use of the apparatus 200 and/or the apparatus 200-mobile computing device unit.
  • The processor 230 can then apply the received or locally-generated hearing profile of the user to the output audio signal received from the mobile computing device to process the output audio signal substantially in real-time before outputting the processed form of the output audio signal through the audio output device arranged in the apparatus 200 and/or through a connected peripheral audio device. For example, the processor 230 can boost a particular frequency or a particular range of frequencies to compensate for the user's difficulty in hearing this particular frequency or particular range of frequencies, as specified in the user's hearing profile. Similarly, the processor 230 can attenuate another particular frequency or another particular range of frequencies to compensate for the user's sensitivity to this other particular frequency or other particular range of frequencies, as specified in the user's hearing profile, such as described in U.S. patent application Ser. No. 14/178,034.
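The frequency-specific boosting and attenuation described above can be sketched as a lookup of per-band gains from the hearing profile. The profile contents below are purely illustrative (the disclosure does not specify a data format); gains are expressed in dB and converted to linear factors.

```python
# Hypothetical hearing profile: per-band gain in dB, keyed by (low_hz, high_hz).
# Positive gain boosts a band the user hears poorly; negative gain attenuates
# a band to which the user is sensitive. Values are illustrative assumptions.
HEARING_PROFILE = {
    (20, 250): 0.0,
    (250, 2000): 0.0,
    (2000, 8000): 6.0,     # e.g., compensate mild high-frequency hearing loss
    (8000, 20000): -3.0,   # e.g., compensate sensitivity at high frequencies
}

def gain_for_frequency(profile, freq_hz):
    """Return the linear gain the profile prescribes at a given frequency."""
    for (lo, hi), gain_db in profile.items():
        if lo <= freq_hz < hi:
            return 10 ** (gain_db / 20.0)   # dB to linear amplitude factor
    return 1.0  # frequencies outside the profile pass through unchanged
```

In practice the processor 230 would apply such gains per FFT bin or per filter band of the output audio signal rather than per isolated frequency.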
  • The processor 230 can also apply location-based audio output profiles to the audio output signal to compensate for known ambient noise conditions and/or known sound responses of a particular location. For example, a native application executing on the mobile computing device can interface with a location module within the mobile computing device, such as a GPS sensor or a Wi-Fi module, to detect a location of the mobile computing device and to select a particular audio output profile corresponding to the detected location. In this example, the native application can select the particular audio output profile from a local database of locations and corresponding audio output profiles generated on the mobile computing device from location and audio data previously collected on the mobile computing device. Alternatively, the native application can select the particular audio output profile from a remote database of locations and corresponding audio output profiles, such as generated from location and audio data previously collected by multiple mobile computing devices, such as through similar native applications executing on multiple similar mobile computing devices. The native application executing on the mobile computing device can then upload the particular location-based audio output profile to the apparatus 200 via the communication module 220, and the processor 230 can modify the audio output signal according to the particular location-based audio output profile, such as by attenuating and/or boosting particular frequencies of the output audio signal to compensate for common noise levels at the location and/or a local frequency response at the location.
In this example, the native application can also elect the particular location-based audio output profile based on a time of day, a proximity of other mobile computing devices executing a similar native application to the mobile computing device, and/or previous (e.g., recent) audio-related inputs entered into neighboring mobile computing devices, etc., as described in U.S. patent application Ser. No. 14/178,067. Alternatively, the apparatus 200 can store various location-based audio output profiles locally, and the processor 230 can elect a particular location-based audio output profile based on location data received from the mobile computing device. For example, the communication module 220 can receive a location datum from the mobile computing device, and the processor 230 can select a particular audio output profile, from a set of audio output profiles stored locally in memory arranged within the housing 210, based on the location datum. In this example, the processor 230 can then process the output audio signal according to both the user's hearing profile and the particular audio output profile elected for the current location of the apparatus 200-mobile computing device unit. Yet alternatively, the apparatus 200 can further include a location module (e.g., a GPS sensor), and the processor 230 can elect the particular location-based audio output profile based on an output of the internal location module. However, the processor 230 can function in any other way to apply location-related audio parameters to the output audio signal received from the mobile computing device.
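The variant in which the processor 230 selects a locally stored profile from a received location datum might look like the sketch below. The stored locations, profile names, and the 1 km matching radius are all illustrative assumptions; the distance computation is a standard equirectangular approximation, adequate at city scale.

```python
import math

# Hypothetical local database mapping known locations (lat, lon in degrees)
# to audio output profile names; entries are illustrative only.
LOCATION_PROFILES = [
    ((37.7749, -122.4194), "city-street"),
    ((37.8044, -122.2712), "restaurant"),
]

def select_location_profile(lat, lon, default="neutral", max_km=1.0):
    """Pick the stored profile nearest the reported location datum,
    falling back to a default when nothing is within max_km."""
    best, best_km = default, max_km
    for (plat, plon), profile in LOCATION_PROFILES:
        # Equirectangular approximation of great-circle distance.
        dx = math.radians(lon - plon) * math.cos(math.radians((lat + plat) / 2))
        dy = math.radians(lat - plat)
        km = 6371.0 * math.hypot(dx, dy)
        if km < best_km:
            best, best_km = profile, km
    return best
```

The default profile covers the common case where the unit is at an unmapped location, so the user's hearing profile still applies alone.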
  • The processor 230 can additionally or alternatively detect real-time ambient noise conditions proximal the mobile computing device and modify the output audio signal to improve the user's listening experience. In particular, the processor 230 can morph the output audio signal into a more audibly-discernible form by boosting and/or attenuating particular frequencies and/or particular frequency ranges of the output audio signal in real-time in response to detected changes in ambient noise conditions proximal the mobile computing device. In one example, the processor 230 samples an output of the microphone 250 within the apparatus 200—such as continuously (e.g., at a sampling rate of 20 Hz) while the apparatus 200 is in use or intermittently (e.g., for a five-second interval during every minute of use of the apparatus 200)—and characterizes an ambient noise condition proximal the mobile computing device based on the output of the microphone 250. In this example, the processor 230 can characterize an input audio signal from the microphone 250 as one of crowd noise, crowd noise in an open space (e.g., an amphitheatre), crowd noise in a confined space (e.g., a restaurant), road noise, street noise, machine noise, rain, thunder, and/or hail, ocean waves, the user's voice, a single or low number of voices, music, etc.; the processor 230 can then elect a particular audio output profile corresponding to the characterization of the input audio signal (e.g., an audio output profile for crowd noise in a confined space) and then apply the elected audio output profile to the output audio signal to modify the output audio signal accordingly before output to the user through an audio output module within the apparatus 200 or within a peripheral audio device connected thereto.
Alternatively, the processor 230 can characterize an amplitude of the ambient noise at particular frequencies or particular frequency ranges and directly boost corresponding frequencies or frequency ranges in the output audio signal substantially in real-time according to frequency-specific amplitudes of the noise, thereby compensating for the ambient noise and enabling the user to hear substantially all frequencies of the output audio signal, even frequencies of the output audio signal coincident with the ambient noise. In this example, the processor 230 can also apply a sound response profile of the microphone 250 within the apparatus 200 to normalize the input audio signal for real noise levels proximal the mobile computing device.
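The frequency-specific compensation just described can be sketched as a per-band gain proportional to the measured noise in that band. Both the proportional boost rule and the ceiling value below are assumptions introduced for illustration; the inputs would typically be per-band amplitudes from FFTs of the output audio signal and of the microphone's input audio signal.

```python
def compensate_for_noise(output_bands, noise_bands, max_boost=4.0):
    """Boost each frequency band of the output signal in proportion to
    the ambient-noise amplitude measured in that band; illustrative rule."""
    compensated = []
    for out_amp, noise_amp in zip(output_bands, noise_bands):
        # A band masked by louder noise gets a proportionally larger gain,
        # capped by max_boost so quiet bands are not driven into distortion.
        gain = min(1.0 + noise_amp / max(out_amp, 1e-9), max_boost)
        compensated.append(out_amp * gain)
    return compensated
```

Applying the microphone's sound response profile, as the text notes, would amount to dividing `noise_bands` by the microphone's per-band sensitivity before calling this function.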
  • In the foregoing implementation, the processor 230 can continuously sample the microphone 250 (e.g., at a rate of 20 Hz, for a period of five seconds per minute) and then modify the output audio signal (or an audio output profile applied to the output audio signal) substantially in real-time based on each new audio sample received from the microphone 250. The processor 230 can also collect ambient noise data from the microphone 250 over a time interval (e.g., one minute), average these ambient noise data over the time interval to update or generate an audio output profile, and then apply the audio output profile to the output audio signal, such as upon conclusion of the corresponding time interval. The processor 230 can also sample the microphone 250 and update the profile applied to the output audio signal to compensate for ambient noise only when conditions of the apparatus 200-mobile computing device unit are suitable (e.g., when particular conditions are met). For example, the apparatus 200 can further include an accelerometer arranged within the housing 210, and the processor 230 can sample the accelerometer to determine if the mobile computing device is accelerating (and/or a magnitude and direction of the acceleration). Thus, the processor 230 can only sample the microphone 250 and modify the audio output profile applied to the output audio signal when the apparatus 200 is not accelerating (aside from acceleration due to gravity) or is accelerating at a rate less than a threshold magnitude. In particular, the processor 230 can restrict updates to the audio output profile applied to the output audio signal—to compensate for ambient noise—to periods during which the apparatus 200-mobile computing device unit is substantially static.
In this example, the processor 230 can thus reduce or eliminate inadvertent compensation for pocket noise—which is not characteristic of true ambient noise conditions around the user—collected through the microphone 250 in the apparatus 200 when the apparatus 200-mobile computing device unit is carried in the user's pocket (or purse, etc.) while the user is walking, running, or moving in any other way. Rather, the processor 230 can apply ambient noise compensation to the output audio signal only when an input audio signal from the microphone 250 is characteristic of true ambient noise conditions around the user, such as when the user is not moving, when the user is moving minimally, and/or when the apparatus 200-mobile computing device unit is not carried in a pocket, purse, etc. The processor 230 can therefore calculate an acceleration of the mobile computing device based on an output of the accelerometer, withhold application of an ambient audio signal from the secondary microphone to the output audio signal in response to the acceleration exceeding a threshold acceleration, and process the output audio signal according to the hearing profile with compensation for the ambient audio signal in response to the acceleration falling below the threshold acceleration, such as for a threshold period of time.
The processor 230 can also receive the input audio signal from a microphone arranged in a connected peripheral audio device, such as a microphone integrated into a headset or into a pair of headphones coupled to the apparatus 200 via an analog audio input jack. However, the processor 230 can receive an input audio signal in any other format and can function in any other way to process the output audio signal to compensate for ambient noise detected in the input audio signal.
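The accelerometer gate described above reduces to comparing the magnitude of acceleration, net of gravity, against a threshold. A minimal sketch, with an assumed threshold value:

```python
G = 9.81  # standard gravitational acceleration, m/s^2

def should_update_noise_profile(accel_xyz, threshold=1.5):
    """Return True only when the unit is substantially static, so that
    pocket noise collected while the user moves is not mistaken for true
    ambient noise. The 1.5 m/s^2 threshold is an illustrative assumption."""
    ax, ay, az = accel_xyz
    magnitude = (ax * ax + ay * ay + az * az) ** 0.5
    # At rest the accelerometer reads |a| ~ G; large deviations mean motion.
    return abs(magnitude - G) < threshold
```

As the text notes, this gate could further require the condition to hold for a threshold period of time, e.g., by requiring several consecutive True readings before resuming profile updates.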
  • The processor 230 can include an audio codec, a controls processor, and/or any other one or more discrete processing modules that handle particular subroutines or sub-processes of or within the apparatus 200. For example, the processor 230 can include the audio codec that processes the output audio signal according to the user's hearing profile, and the controls processor can handle audio output profile selection, user hearing profile generation, ambient noise detection or characterization from an input audio signal, etc. Therefore, the audio codec and the controls processor can function independently and/or cooperate to execute various functions of the apparatus 200 as described herein.
  • The processor 230 and the communication module 220 can also be physically coextensive. The processor 230 and the communication module 220 can further define various discrete subcomponents, such as a wireless transmitter module, a wireless receive module, a wired (e.g., I2C) communication module, an audio codec, and/or a controls processor, etc.
  • 2.6 Audio Output Module
  • The audio output module 240 of the apparatus 200 is arranged within the housing 210 adjacent the integrated loudspeaker and outputs a processed form of the output audio signal in place of output of the output audio signal by the integrated loudspeaker. Generally, the audio output module 240 functions to output a processed form of the audio output signal—to be audibly discerned by the user—proximal a region of the apparatus 200 adjacent an audio output module (e.g., loudspeaker) integrated into the connected mobile computing device, the audio output module 240 of the apparatus 200 thus replacing and replicating the functionality of the audio output module integrated into the mobile computing device. In particular, the audio output module 240 of the apparatus 200 audibly outputs the processed form of the output audio signal (processed to compensate for the user's hearing abilities, ambient noise conditions, etc.) substantially directly from the processor 230 such that audio output occurs substantially in real-time, rather than transmitting the processed form of the output audio signal back into the mobile computing device, which may delay audible output of the output audio signal through the integrated audio output module, such as due to processing priorities of the mobile computing device.
  • In one implementation, the audio output module 240 includes one or more loudspeakers arranged within a portion of the housing 210 (e.g., the second portion 212 of the housing 210) adjacent the integrated audio output module of the mobile computing device when the housing 210 is installed over the mobile computing device.
  • The audio output module 240 can thus output the processed form of the output audio signal at various volume levels audibly detectable by the user near the apparatus 200. The audio output module 240 can additionally or alternatively include an audio receiver that outputs the processed form of the output audio signal that is audibly discernible to the user when the user holds the apparatus 200-mobile computing device unit to his face with the audio receiver adjacent his ear. The audio output module 240 can yet additionally or alternatively include a remote loudspeaker, and the housing 210 can define an internal megaphone profile that amplifies sound output from the remote loudspeaker. In another implementation, the audio output module 240 within the apparatus includes an audio induction loop adjacent the integrated loudspeaker and broadcasting a magnetic field corresponding to the processed form of the output audio signal. In this implementation, the audio induction loop can transmit a magnetic signal based on the processed form of the output audio signal and detectable by a hearing aid proximal the apparatus 200, and the hearing aid can locally translate the magnetic signal into an audibly-discernible signal. However, the audio output module 240 can include any other one or more devices of any one or more types suitable for outputting an audio signal to the user.
  • The audio output module 240 can be intransiently (e.g., substantially permanently) integrated into the housing 210. The audio output module 240 can additionally or alternatively include one or more loudspeakers, audio receivers, and/or audio induction loops, etc. that are transiently arranged within (the second portion 212 of) the housing 210. In one example implementation, the audio output module 240 can include a loudspeaker removably connected to the housing 210 via a wire and receiving the processed form of the output audio signal from the processor 230 via the wire. For example, the loudspeaker can be arranged within an earbud that can be transiently installed into a receiver defined in the housing 210 in a "speakerphone mode," and the earbud can default to outputting sound through the loudspeaker at a first default volume in the speakerphone mode. However, the earbud can then be removed from the housing 210 and placed in the user's ear in a "private playback mode," and the earbud can default to outputting sound through the loudspeaker at a second default volume—less than the first default volume—in the private playback mode. Similarly, the audio output module 240 can include a loudspeaker that wirelessly pairs with a wireless communication component of the communication module 220 and that receives the processed form of the output audio signal wirelessly from the communication module 220 before outputting the same. For example, the audio output module 240 can include a wireless-enabled earbud that receives the processed form of the output audio signal via a wired or wireless communication protocol when installed in the housing 210 and that receives the processed form of the output audio signal over a wireless (e.g., Bluetooth) communication protocol when removed from the housing 210 (and installed in the user's ear), as shown in FIG. 7.
In this example, the secondary audio output module can include a discrete housing that contains a tertiary battery 282 and the loudspeaker, and a secondary battery 280 (described below) arranged within the housing 210 can charge the tertiary battery 282 of the secondary audio output module when the secondary audio output module is installed back into the housing 210, such as through a pair of prongs in the discrete housing of the secondary audio output module that engage a matching pair of contacts within the housing 210. However, the audio output module 240 can include any other one or more audio output devices transiently or intransiently coupled to the housing 210 of the apparatus 200 and receiving the processed form of the output audio signal over any other suitable communication protocol.
  • As shown in FIG. 5, one variation of the apparatus 200 further includes an audio jack 222 arranged within the housing 210, the processor 230 selectively routing the processed form of the output audio signal from the secondary audio output module to the audio jack 222 in response to connection of a peripheral audio device to the audio jack 222. Generally, the audio jack 222 functions to route the processed form of the output audio signal to a connected peripheral audio device (e.g., a headset, a pair of headphones, a car stereo, a home stereo, etc.) for playback to the user. In this variation, the processor 230 can further predict a type of the peripheral audio device connected to the audio jack 222, such as based on an impedance of the peripheral audio device connected to the apparatus 200 via the audio jack 222, and the processor 230 can thus process the output audio signal according to an audio output profile corresponding to the type of the peripheral audio device. For example, the processor 230 can apply a headphone audio output profile to the output audio signal—in conjunction with the user's hearing profile, a location-based audio profile, etc.—if an impedance across the audio jack 222 falls within an impedance range corresponding to headphones, and the processor 230 can apply a car stereo audio output profile to the output audio signal if an impedance across the audio jack 222 falls within an impedance range corresponding to car stereos and different from the impedance range corresponding to headphones.
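The impedance-based device prediction above is a range lookup. The boundary values below are illustrative assumptions (typical headphones measure tens of ohms; line-level inputs such as car or home stereos measure kilohms or more), not figures from the disclosure.

```python
# Hypothetical impedance ranges, in ohms, for classifying the peripheral
# audio device seen at the audio jack; boundaries are assumptions.
IMPEDANCE_RANGES = [
    ((8.0, 80.0), "headphones"),
    ((1_000.0, 100_000.0), "line-in (car or home stereo)"),
]

def classify_peripheral(impedance_ohms, default="unknown"):
    """Predict the peripheral type from the impedance measured at the jack."""
    for (lo, hi), device_type in IMPEDANCE_RANGES:
        if lo <= impedance_ohms < hi:
            return device_type
    return default
```

The returned type would then select the corresponding audio output profile, applied in conjunction with the user's hearing profile as the text describes.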
  • 2.7 Battery and Charging Port
  • The secondary charging port (hereinafter the “charging port”) of the apparatus 200 is arranged within the housing 210, is electrically coupled to the integrated charging port, and receives a charging connector to recharge the mobile computing device through the integrated charging port. Generally, the charging port 270 is arranged within the housing 210, receives a charging connector, and distributes power from the charging connector into the mobile computing device to charge the mobile computing device. Therefore, the charging port 270 can function as a via passing from a perimeter of the housing 210 of the apparatus 200 into the mobile computing device to enable the mobile computing device to be charged with the apparatus 200 in place over the mobile computing device.
  • As shown in FIG. 5, one variation of the apparatus 200 includes a secondary battery 280 (hereinafter the “battery 280”) arranged within the housing 210 and powering the processor 230 and the audio output module 240, the charging port 270 electrically coupled to the battery 280 and receiving the charging connector to recharge the battery 280. Generally, the battery 280 functions to supply (electrical) power to various components of the apparatus 200 during operation thereof. In this variation, the battery 280 can also siphon power from the charging port 270 when a charging connector is installed therein, the battery 280 thus charging substantially simultaneously with the mobile computing device and via the same charging connector.
  • In one implementation, the processor 230 selectively diverts electrical power—received from the charging connector through the charging port 270 of the apparatus 200—between the charging port integrated in the mobile computing device and the battery 280 in the apparatus 200 to simultaneously charge the battery 280 in the apparatus 200 and the battery integrated into the mobile computing device. For example, the processor 230 can selectively divert electrical power between the secondary battery 280 and the integrated battery, via the integrated charging port, to maintain the secondary battery 280 and the integrated battery in substantially similar states of charge. In this example, the processor 230 can couple to a high voltage, high current line in the mobile computing device—such as via a wired connector of the communication module 220—to directly read a voltage of the battery within the mobile computing device; the processor 230 can then control charging circuitry within the apparatus 200 to match the voltage (indicative of a state of charge) of the battery 280 within the apparatus 200 to the voltage of the battery within the mobile computing device. Alternatively, the processor 230 can routinely request a state of charge of the integrated battery from the mobile computing device, such as over the wireless or wired digital communication protocol supported by the communication module 220, and the apparatus 200 can selectively divert or partition power received from the charging connector, via the charging port 270, between the battery 280 within the apparatus 200 and the battery within the mobile computing device accordingly.
  • In the foregoing implementation, the processor 230 can therefore selectively divert or partition power from the charging connector to the battery 280 in the apparatus 200 and the battery in the mobile computing device to match states of charge and/or to match a battery life (in time until fully discharged) of the batteries. The processor 230 can also selectively divert power from the battery 280 in the apparatus 200 to the battery in the mobile computing device, such as via a data port of the communication module 220, to maintain similar states of charge and/or battery life of the battery 280 in the apparatus 200 and the battery in the mobile computing device during operation of the apparatus 200-mobile computing device unit. By maintaining the states of charge of the batteries in unison until full discharge, the processor 230 can substantially prevent a scenario in which the battery 280 within the apparatus 200 retains a charge while the battery within the mobile computing device is fully discharged, the apparatus 200 thus not useful in customizing audio output from the mobile computing device as the mobile computing device has shut down. Similarly, the processor 230 can thus substantially prevent a scenario in which the battery within the mobile computing device retains a charge while the battery 280 within the apparatus 200 is fully discharged, the apparatus 200 thus no longer able to customize audio output from the mobile computing device because the apparatus 200 has shut down, and the apparatus 200 possibly obscuring audio output from a loudspeaker in the mobile computing device and requiring removal from the mobile computing device to enable audio functionality of the mobile computing device.
  • Yet alternatively, the processor 230 can selectively divert power from the battery 280 in the apparatus 200 to the battery in the mobile computing device during operation thereof to extend a battery life of the mobile computing device even when the apparatus 200 has powered down. The processor 230 can also selectively divert or apportion power from a charging connector engaged in the charging port 270 to preferentially charge the battery within the mobile computing device. For example, the processor 230 can initially divert 80% of power from the charging connector into the mobile computing device and 20% of power from the charging connector into the battery 280 in the apparatus 200 and then divert 100% of power from the charging connector to the battery 280 within the apparatus 200 once the battery within the mobile computing device is fully charged. However, the processor 230 can implement any other charging schedule, power distribution schema, etc. to charge and maintain the battery 280 in the apparatus 200 and the battery in the mobile computing device. Alternatively, a native application executing in the mobile computing device can implement any of these methods or techniques to charge and maintain the battery 280 in the apparatus 200 and the battery in the mobile computing device.
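One simple policy for keeping the two batteries in substantially similar states of charge is to split incoming charger power in proportion to each battery's charge deficit. This proportional rule is an illustrative sketch introduced here, not the patent's specific charging circuitry or schedule.

```python
def partition_charge(apparatus_soc, device_soc, incoming_watts):
    """Split incoming charger power between the apparatus battery 280 and
    the device's integrated battery so their states of charge (SoC,
    0.0-1.0) converge; the emptier battery receives proportionally more.
    Returns (watts_to_apparatus, watts_to_device)."""
    deficit_a = max(1.0 - apparatus_soc, 0.0)
    deficit_d = max(1.0 - device_soc, 0.0)
    total = deficit_a + deficit_d
    if total == 0.0:
        return 0.0, 0.0  # both batteries full: draw no charge power
    to_apparatus = incoming_watts * deficit_a / total
    return to_apparatus, incoming_watts - to_apparatus
```

The preferential 80%/20% schedule in the text is a different policy under the same interface: fixed shares until the integrated battery is full, then 100% to the battery 280.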
  • 2.8 Methods
  • The apparatus 200 can further implement various methods or techniques to handle various output audio types supported on the mobile computing device and to switch therebetween.
  • As shown in FIG. 6, in one implementation, the apparatus 200 executes a method for augmenting audio capabilities of a mobile computing device, including: at a peripheral device (i.e., the apparatus 200), receiving a hearing profile of a user from a computing device coupled to the peripheral device in Block S110; and generating a voice profile of the user based on audio data collected through a microphone coupled to the peripheral device in Block S120. The method can further include, during a telephone call established with the computing device: at the peripheral device, processing an output audio signal according to the hearing profile of the user, the output audio signal received from the computing device in Block S130; outputting a processed form of the output audio signal through an audio output module coupled to the peripheral device in Block S140; processing an input audio signal from the microphone 250 according to the voice profile of the user in Block S150; and transmitting a processed form of the input audio signal to the computing device in Block S160.
  • Generally, the apparatus 200 can execute the method to switch between communication protocols for communications between the apparatus 200 and the connected mobile computing device based on a type of the output audio signal. In particular, the apparatus 200 can implement the method to communicate input (e.g., microphone) and output (e.g., incoming call) audio data between the apparatus 200 and the mobile computing device over one submodule of the communication module 220 during a telephone call engaged through the mobile computing device and to receive audio data for other audio types (e.g., voice navigation, prerecorded audio playback, alarms) over a second submodule of the communication module 220. For example, as described above, the apparatus 200 can implement the method locally on the apparatus 200 to receive an output audio signal from the mobile computing device and to transmit a processed form of the input audio signal (from the microphone 250 within the apparatus 200) over Bluetooth (or other) wireless communication protocol while a telephone call is ongoing on the mobile computing device, and the apparatus 200 can receive an output audio signal from the mobile computing device over a wired connection to the mobile computing device (e.g., over a data port of the mobile computing device) as other audio-related functions are executed on the mobile computing device. The apparatus 200 can also apply a voice profile of the user to the input audio signal from the microphone 250 and transmit this processed input audio signal to the mobile computing device substantially in real-time during a telephone call but switch to extracting ambient noise conditions from the input audio signal from the microphone 250 and processing the output audio signal accordingly, as described above, during other audio playback functions of the mobile computing device.
  • In one implementation, the processor 230 within the apparatus 200 collects user voice data from the microphone 250 over time and generates a voice profile characterizing the user's voice accordingly. For example, the voice profile can define an overall loudness level of the user's voice (i.e., an average audible amplitude of the user's voice across the audible range and/or over time), a common vocal range of the user, frequencies or ranges of frequencies over which the amplitude of the user's voice increases or decreases, and/or a tonality or timbre of the user's voice, etc. Alternatively, such a voice profile can be generated locally on the mobile computing device or remotely, such as on a remote server, and the apparatus 200 can download the voice profile from the mobile computing device. During a telephone call (or other mobile computing device function handling audio input), the processor 230 can thus process an input audio signal from the microphone 250 within the apparatus 200 to improve comprehension of the user's words by another individual, such as during a telephone call or voice recording, and/or to improve comprehension of the user's voice by voice recognition software executing on the mobile computing device or a connected computer network. For example, if the user often speaks quietly, the processor 230 can apply the user's voice profile to automatically boost the amplitude of the input audio signal uniformly across the audible range to compensate for the user's soft voice.
In another example, if the user's voice typically falls within a limited frequency band of 90 Hz to 170 Hz, as specified in the user's voice profile, the processor 230 can process the input audio signal from the microphone 250 during a phone call on the mobile computing device by attenuating frequencies outside of this band and boosting frequencies within the band before transmitting the processed form of the input audio signal from the apparatus 200 to the mobile computing device (substantially in real-time). However, the apparatus 200 can execute the method to process the input audio signal in any other way.
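A minimal sketch of the band-weighted voice-profile processing described above, assuming the input audio signal has already been decomposed into spectral bands; the `apply_voice_profile` function, its dictionary representation, and its gain values are illustrative assumptions, not the disclosed implementation:

```python
def apply_voice_profile(band_amplitudes: dict,
                        voice_band: tuple = (90.0, 170.0),
                        in_band_gain: float = 1.5,
                        out_of_band_gain: float = 0.5) -> dict:
    """Boost spectral bands inside the user's typical vocal range and
    attenuate bands outside it, per a stored voice profile.

    `band_amplitudes` maps band center frequency (Hz) to amplitude;
    the 90-170 Hz default mirrors the example band in the text.
    """
    lo, hi = voice_band
    processed = {}
    for freq, amp in band_amplitudes.items():
        gain = in_band_gain if lo <= freq <= hi else out_of_band_gain
        processed[freq] = amp * gain
    return processed
```

For instance, a 100 Hz band would be boosted while a 500 Hz band would be attenuated before the signal is re-synthesized and transmitted.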
  • The processor 230 can continuously process the input audio signal according to the user's voice profile during operation of the apparatus 200-mobile computing device unit and thus continuously pass the processed input audio signal into the mobile computing device. Alternatively, the processor 230 can selectively process the input audio signal according to the user's voice profile, such as in response to receiving a command for the input audio signal from the mobile computing device (e.g., via the communication module 220 within the apparatus 200). Yet alternatively, the processor 230 can selectively process the input audio signal according to the user's voice profile based on a type of communication protocol over which the output audio signal is received. For example, the processor 230 can process the input audio signal according to the user's voice profile and communicate the processed input audio signal to the mobile computing device only when an output audio signal is received from the mobile computing device over a wireless (e.g., Bluetooth) communication protocol.
  • The apparatus 200 can implement similar methods or techniques to select between particular hearing profiles and/or audio output profiles with which an output audio signal is processed before being output to the user via the audio output module 240 within the apparatus 200. For example, the apparatus 200 can apply a first hearing profile of the user specific to telephone calls when a telephone call is in process on the mobile computing device (e.g., as indicated by the output audio signal being received over a wireless communication protocol), and the apparatus 200 can apply a second hearing profile of the user otherwise. However, the apparatus 200 can apply any other number and type of hearing and/or audio output profiles to the output audio signal (substantially in real-time) and in any other suitable way.
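The profile selection and per-band equalization described above can be sketched as follows; the `choose_hearing_profile` and `equalize` names, the dictionary-based profile representation, and the gain values are illustrative assumptions rather than the disclosed implementation:

```python
def choose_hearing_profile(profiles: dict, received_over_wireless: bool) -> dict:
    """Select the call-specific hearing profile when the output audio
    signal arrives over the wireless link (indicating a telephone call
    in process); otherwise fall back to the general-purpose profile."""
    return profiles["telephone"] if received_over_wireless else profiles["default"]

def equalize(band_amplitudes: dict, hearing_profile: dict) -> dict:
    """Apply the per-band gains of the selected hearing profile to a
    spectral-band representation of the output audio signal; bands the
    profile does not mention pass through at unity gain."""
    return {freq: amp * hearing_profile.get(freq, 1.0)
            for freq, amp in band_amplitudes.items()}
```

A call signal would thus be shaped by the telephone-specific gains, while other playback would use the default profile.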
  • The systems and methods of the embodiments can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with the application, applet, host, server, network, website, communication service, communication interface, hardware/firmware/software elements of a user computer or mobile device, or any suitable combination thereof, or with apparatuses and networks of the type described above. The computer-readable medium can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component can be a processor, though any suitable dedicated hardware device can (alternatively or additionally) execute the instructions.
  • As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the embodiments of the invention without departing from the scope of this invention as defined in the following claims.

Claims (23)

We claim:
1. An apparatus for augmenting audio capabilities of a mobile computing device, the mobile computing device comprising an integrated loudspeaker proximal a first end of the mobile computing device, an integrated microphone proximal a second end of the mobile computing device opposite the first end, an integrated battery, and an integrated charging port electrically coupled to the battery, the apparatus comprising:
a housing transiently encasing a portion of the mobile computing device;
a communication module arranged within the housing, downloading a hearing profile of a user from the mobile computing device, and receiving an output audio signal from the mobile computing device;
a processor arranged within the housing and processing the output audio signal according to the hearing profile;
a secondary audio output module arranged within the housing adjacent the integrated loudspeaker and outputting a processed form of the output audio signal in place of output of the output audio signal by the integrated loudspeaker;
a secondary microphone arranged within the housing adjacent the integrated microphone, the communication module transmitting an input audio signal from the secondary microphone to the mobile computing device; and
a secondary charging port arranged within the housing, electrically coupled to the integrated charging port, and receiving a charging connector to recharge the mobile computing device through the integrated charging port.
2. The apparatus of claim 1, wherein the housing comprises a first portion and a second portion; the first portion of the housing containing the communication module, the processor, and the secondary microphone; the second portion of the housing containing the secondary audio output module and transiently engaging the first portion to substantially encase the mobile computing device.
3. The apparatus of claim 2, wherein a portion of the secondary audio output module is transiently arranged within the second portion of the housing, the communication module wirelessly transmitting the processed form of the output audio signal to the portion of the secondary audio output module in response to removal of the portion of the secondary audio output module from the second portion of the housing.
4. The apparatus of claim 3, further comprising a secondary battery arranged within the housing, and wherein the secondary audio output module comprises a tertiary battery, the secondary battery charging the tertiary battery in response to installation of the secondary audio output module into the housing.
5. The apparatus of claim 1, wherein the communication module comprises a wireless receiver and a wired receiver, the wireless receiver receiving a first output audio signal from the mobile computing device in a first mode, and the wired receiver receiving a second output audio signal from the mobile computing device through an integrated data port in the mobile computing device in a second mode.
6. The apparatus of claim 5, wherein the communication module downloads a first hearing profile of the user, a second hearing profile of the user, and a voice profile of the user from the mobile computing device; wherein the processor processes the first output audio signal according to the first hearing profile in the first mode, processes the second output audio signal according to the second hearing profile in the second mode, and processes the input audio signal from the secondary microphone according to the voice profile in the first mode; and wherein the wireless receiver comprises a wireless transceiver wirelessly transmitting a processed form of the input audio signal to the mobile computing device and receiving the first output audio signal from the mobile computing device substantially simultaneously in the first mode.
7. The apparatus of claim 1, wherein the secondary audio output module comprises an audio induction loop adjacent the integrated loudspeaker and broadcasting a magnetic field corresponding to the processed form of the output audio signal.
8. The apparatus of claim 1, wherein the processor detects an ambient noise condition based on an output of the secondary microphone and processes the output audio signal according to the hearing profile to compensate for the ambient noise condition.
9. The apparatus of claim 8, further comprising an accelerometer arranged within the housing, the processor calculating an acceleration of the mobile computing device based on an output of the accelerometer, withholding application of an ambient audio signal from the secondary microphone to the output audio signal in response to the acceleration that exceeds a threshold acceleration, and processing the output audio signal according to the hearing profile to compensate for the ambient audio signal in response to the acceleration that falls below the threshold acceleration for a threshold period of time.
10. The apparatus of claim 8, wherein the processor comprises an audio codec and a controls processor, the audio codec processing the output audio signal according to the hearing profile.
11. The apparatus of claim 1, further comprising a secondary battery arranged within the housing and powering the processor and the secondary audio output module, the secondary charging port electrically coupled to the secondary battery and receiving the charging connector to recharge the battery.
12. The apparatus of claim 11, wherein the processor selectively diverts electrical power, received from the charging connector through the charging port, between the integrated charging port and the secondary battery to simultaneously charge the integrated battery and the secondary battery.
13. The apparatus of claim 12, wherein the processor selectively diverts electrical power between the secondary battery and the integrated battery, via the integrated charging port, to maintain the secondary battery and the integrated battery in substantially similar states of charge.
14. The apparatus of claim 1, further comprising an audio jack, the processor selectively routing the processed form of the output audio signal from the secondary audio output module to the audio jack in response to connection of a peripheral audio device to the audio jack.
15. The apparatus of claim 14, wherein the processor predicts a type of the peripheral audio device connected to the audio jack based on an impedance of the peripheral audio device and processes the output audio signal according to an audio output profile corresponding to the type of the peripheral audio device.
16. The apparatus of claim 1, wherein the communication module receives a location datum from the mobile computing device; and wherein the processor selects a particular audio output profile, from a set of audio output profiles stored locally in memory arranged within the housing, based on the location datum, the processor processing the output audio signal according to the hearing profile and the particular audio output profile.
17. The apparatus of claim 1, further comprising a proximity sensor arranged within the housing adjacent the secondary audio output module, and wherein the processor reduces a maximum audio output volume from the secondary audio output module in response to detection of a surface proximal the housing based on an output of the proximity sensor.
18. An apparatus for augmenting audio capabilities of a mobile computing device, the mobile computing device comprising an integrated loudspeaker proximal a first end of the mobile computing device, an integrated microphone proximal a second end of the mobile computing device opposite the first end, an integrated battery, and an integrated charging port electrically coupled to the battery, the apparatus comprising:
a housing transiently encasing a portion of the mobile computing device;
a communication module arranged within the housing, downloading hearing data of a user from the mobile computing device, and receiving an output audio signal from the mobile computing device;
a processor arranged within the housing, generating a hearing profile of the user based on the hearing data, and processing the output audio signal according to the hearing profile;
a secondary audio output module arranged within the housing adjacent the integrated loudspeaker and outputting a processed form of the output audio signal in place of output of the output audio signal by the integrated loudspeaker;
a secondary microphone arranged within the housing adjacent the integrated microphone, the communication module transmitting an input audio signal from the secondary microphone to the mobile computing device; and
a secondary charging port arranged within the housing and communicating electrical current into the integrated charging port to recharge the mobile computing device.
19. The apparatus of claim 18, wherein the housing comprises a first portion and a second portion; the first portion of the housing containing the communication module, the processor, and the secondary microphone; the second portion of the housing containing the secondary audio output module and transiently engaging the first portion to substantially encase the mobile computing device.
20. The apparatus of claim 18, wherein the secondary charging port comprises an induction coil that harvests electrical current from a local electromagnetic field and a plug that engages the integrated charging port to communicate electrical current into the mobile computing device.
21. A method for augmenting audio capabilities of a mobile computing device, the method comprising:
at a peripheral device, receiving a hearing profile of a user from a computing device coupled to the peripheral device;
generating a voice profile of the user based on audio data collected through a microphone coupled to the peripheral device;
during a telephone call established with the computing device:
at the peripheral device, processing an output audio signal according to the hearing profile of the user, the output audio signal received from the computing device;
outputting a processed form of the output audio signal through an audio output module coupled to the peripheral device;
processing an input audio signal from the microphone according to the voice profile of the user; and
transmitting a processed form of the input audio signal to the computing device.
22. The method of claim 21, wherein outputting the processed form of the output audio signal comprises outputting the processed form of the output audio signal, through the audio output module integrated into the peripheral device, substantially in real-time during the telephone call; and wherein transmitting the processed form of the input audio signal to the computing device comprises transmitting the processed form of the input audio signal to the computing device over a wireless communication protocol substantially in real-time during the telephone call.
23. The method of claim 21, wherein receiving the hearing profile of the user from the computing device comprises receiving a first hearing profile of the user and a second hearing profile of the user from the computing device over a wired connection; wherein processing the output audio signal comprises processing a first output audio signal, received from the computing device over a wireless communication protocol during the telephone call, according to the first hearing profile; and further comprising processing a second output audio signal, received from the computing device over a wired connection during recorded audio playback, according to the second hearing profile.
US14/463,559 2013-02-11 2014-08-19 Audio apparatus and methods Active 2034-09-02 US9344793B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/463,559 US9344793B2 (en) 2013-02-11 2014-08-19 Audio apparatus and methods
US15/132,785 US20160234589A1 (en) 2013-02-11 2016-04-19 Audio apparatus and methods

Applications Claiming Priority (17)

Application Number Priority Date Filing Date Title
US201361763163P 2013-02-11 2013-02-11
US201361763182P 2013-02-11 2013-02-11
US201361778188P 2013-03-12 2013-03-12
US201361831796P 2013-06-06 2013-06-06
US201361831798P 2013-06-06 2013-06-06
US201361867464P 2013-08-19 2013-08-19
US201361867436P 2013-08-19 2013-08-19
US201361867480P 2013-08-19 2013-08-19
US201361867449P 2013-08-19 2013-08-19
US201361880395P 2013-09-20 2013-09-20
US201361880377P 2013-09-20 2013-09-20
US201361880405P 2013-09-20 2013-09-20
US201361880367P 2013-09-20 2013-09-20
US14/178,068 US9203699B2 (en) 2014-02-11 2014-02-11 Constructing and verifying switch fabric cabling schemes
US14/178,034 US20140309549A1 (en) 2013-02-11 2014-02-11 Methods for testing hearing
US14/206,499 US9319019B2 (en) 2013-02-11 2014-03-12 Method for augmenting a listening experience
US14/463,559 US9344793B2 (en) 2013-02-11 2014-08-19 Audio apparatus and methods

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/132,785 Continuation US20160234589A1 (en) 2013-02-11 2016-04-19 Audio apparatus and methods

Publications (2)

Publication Number Publication Date
US20150078575A1 true US20150078575A1 (en) 2015-03-19
US9344793B2 US9344793B2 (en) 2016-05-17

Family ID: 52667995

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/463,559 Active 2034-09-02 US9344793B2 (en) 2013-02-11 2014-08-19 Audio apparatus and methods
US15/132,785 Abandoned US20160234589A1 (en) 2013-02-11 2016-04-19 Audio apparatus and methods

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/132,785 Abandoned US20160234589A1 (en) 2013-02-11 2016-04-19 Audio apparatus and methods

Country Status (1)

Country Link
US (2) US9344793B2 (en)

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140372109A1 (en) * 2013-06-13 2014-12-18 Motorola Mobility Llc Smart volume control of device audio output based on received audio input
CN104867512A (en) * 2015-03-26 2015-08-26 加一联创电子科技有限公司 Music data obtaining method, earphone, loudspeaker box, and music player
US9319019B2 (en) 2013-02-11 2016-04-19 Symphonic Audio Technologies Corp. Method for augmenting a listening experience
US9344815B2 (en) 2013-02-11 2016-05-17 Symphonic Audio Technologies Corp. Method for augmenting hearing
US9344793B2 (en) * 2013-02-11 2016-05-17 Symphonic Audio Technologies Corp. Audio apparatus and methods
US20160227310A1 (en) * 2015-01-29 2016-08-04 Otojoy LLC Telecoil adapter
US20160261963A1 (en) * 2015-03-04 2016-09-08 William S. Woods Techniques for increasing processing capability in hear aids
US20160360350A1 (en) * 2015-06-05 2016-12-08 Apple Inc. Wireless audio output devices
US9712934B2 (en) 2014-07-16 2017-07-18 Eariq, Inc. System and method for calibration and reproduction of audio signals based on auditory feedback
US20170264991A1 (en) * 2016-03-11 2017-09-14 Voxx International Corporation Headphone or earphone assembly having a pivotable usb charging connector integrated into a housing thereof
US9807491B2 (en) * 2015-04-03 2017-10-31 Pinn, Inc. Electronic device with wireless earbud
US20170316803A1 (en) * 2016-04-29 2017-11-02 Nokia Technologies Oy Apparatus, electronic device, system, method and computer program for capturing audio signals
US20180007461A1 (en) * 2016-07-04 2018-01-04 Tack Ching Wun In-Ear Headphone For Gaming, High Fidelity Music and 3D Effect
US9876893B2 (en) * 2016-01-08 2018-01-23 Lg Electronics Inc. Mobile terminal altering operation when coupled with audio sub module
US20180041826A1 (en) * 2015-04-27 2018-02-08 Pugz Gear Ab Wearable device for listening to sound
US20180124517A1 (en) * 2016-10-28 2018-05-03 Avnera Corporation Headset charger node
WO2018091856A1 (en) * 2016-11-21 2018-05-24 Institut Mines Telecom Improved audio headphones device
US10008244B2 (en) * 2016-05-13 2018-06-26 Jumplux Technology Co., Ltd. Method for synchronously accessing audio data and audio playback system using the same
US20180324535A1 (en) * 2017-05-03 2018-11-08 Bragi GmbH Hearing aid with added functionality
EP3383063A4 (en) * 2015-12-16 2018-12-19 Huawei Technologies Co., Ltd. Method and device for adjusting earphone sound volume
US10194187B2 (en) * 2000-02-17 2019-01-29 Audible Magic Corporation Method and apparatus for identifying media content presented on a media playing device
US10455066B2 (en) 2015-04-03 2019-10-22 Pinn, Inc. Mobile system with wireless earbud
US10536792B2 (en) * 2013-10-23 2020-01-14 Voyetra Turtle Beach, Inc. Electronic device charging via an audio jack
US20200065058A1 (en) * 2018-08-22 2020-02-27 Mimi Hearing Technologies GmbH Systems and methods for sound enhancement in audio systems
US10638210B1 (en) * 2019-03-29 2020-04-28 Sonova Ag Accelerometer-based walking detection parameter optimization for a hearing device user
US10681446B2 (en) 2015-09-30 2020-06-09 Apple Inc. Earbud case with pairing button
US10698952B2 (en) 2012-09-25 2020-06-30 Audible Magic Corporation Using digital fingerprints to associate data with a work
EP3678388A1 (en) * 2019-01-04 2020-07-08 Harman International Industries, Incorporated Customized audio processing based on user-specific and hardware-specific audio information
US20200221223A1 (en) * 2019-01-07 2020-07-09 Kikago Limited Audio device and audio processing method
WO2020150030A1 (en) * 2019-01-14 2020-07-23 Yamaha Guitar Group A microphone that functions as either a digital wireless microphone or a wired passive microphone
US20200280814A1 (en) * 2019-03-01 2020-09-03 Bose Corporation Augmented reality audio playback control
US20200349955A1 (en) * 2012-02-24 2020-11-05 Cirrus Logic International Semiconductor Ltd. System and method for speaker recognition on mobile devices
US10966033B2 (en) 2018-07-20 2021-03-30 Mimi Hearing Technologies GmbH Systems and methods for modifying an audio signal using custom psychoacoustic models
US10993049B2 (en) 2018-07-20 2021-04-27 Mimi Hearing Technologies GmbH Systems and methods for modifying an audio signal using custom psychoacoustic models
US11019422B2 (en) * 2014-11-21 2021-05-25 Samsung Electronics Co., Ltd. Earphones with activity controlled output
US11188292B1 (en) 2019-04-03 2021-11-30 Discovery Sound Technology, Llc System and method for customized heterodyning of collected sounds from electromechanical equipment
US20220084537A1 (en) * 2020-09-17 2022-03-17 Orcam Technologies Ltd. Systems and methods for selectively attenuating a voice
US11288036B2 (en) 2020-06-03 2022-03-29 Microsoft Technology Licensing, Llc Adaptive modulation of audio content based on background noise
US20220201416A1 (en) * 2020-12-18 2022-06-23 Bose Corporation Sensor management for wireless devices
US11415428B2 (en) * 2019-03-06 2022-08-16 Nain Inc. Audio information providing system, control method, and non-transitory computer readable medium
US20220262362A1 (en) * 2020-02-04 2022-08-18 Soundhound, Inc. Wake suppression for audio playing and listening devices
WO2023137203A1 (en) * 2022-01-14 2023-07-20 Chromatic Inc. Enhancing speech of target speaker from audio signal in an ear-worn device using voice signatures
US11812225B2 (en) 2022-01-14 2023-11-07 Chromatic Inc. Method, apparatus and system for neural network hearing aid
US11818523B2 (en) 2022-01-14 2023-11-14 Chromatic Inc. System and method for enhancing speech of target speaker from audio signal in an ear-worn device using voice signatures
US11832061B2 (en) 2022-01-14 2023-11-28 Chromatic Inc. Method, apparatus and system for neural network hearing aid
US11877125B2 (en) 2022-01-14 2024-01-16 Chromatic Inc. Method, apparatus and system for neural network enabled hearing aid
US11902747B1 (en) 2022-08-09 2024-02-13 Chromatic Inc. Hearing loss amplification that amplifies speech and noise subsignals differently

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
US10687155B1 (en) * 2019-08-14 2020-06-16 Mimi Hearing Technologies GmbH Systems and methods for providing personalized audio replay on a plurality of consumer devices
US10719292B2 (en) * 2018-05-11 2020-07-21 Google Llc Sound enhancement adapter
SE543233C2 (en) * 2018-10-31 2020-10-27 Daniel Lauritsen Portable wireless speaker arrangement comprising replaceable casing with integrated battery

Citations (9)

Publication number Priority date Publication date Assignee Title
US6813490B1 (en) * 1999-12-17 2004-11-02 Nokia Corporation Mobile station with audio signal adaptation to hearing characteristics of the user
US20070255435A1 (en) * 2005-03-28 2007-11-01 Sound Id Personal Sound System Including Multi-Mode Ear Level Module with Priority Logic
US20090076825A1 (en) * 2007-09-13 2009-03-19 Bionica Corporation Method of enhancing sound for hearing impaired individuals
US20090180631A1 (en) * 2008-01-10 2009-07-16 Sound Id Personal sound system for display of sound pressure level or other environmental condition
US20100145134A1 (en) * 2008-12-02 2010-06-10 Oticon A/S Device for Treatment of Stuttering and Its Use
US20120306631A1 (en) * 2011-06-03 2012-12-06 Apple Inc. Audio Conversion To Vibration Patterns
US20140314261A1 (en) * 2013-02-11 2014-10-23 Symphonic Audio Technologies Corp. Method for augmenting hearing
US20140334644A1 (en) * 2013-02-11 2014-11-13 Symphonic Audio Technologies Corp. Method for augmenting a listening experience
US20140379343A1 (en) * 2012-11-20 2014-12-25 Unify GmbH Co. KG Method, device, and system for audio data processing

Family Cites Families (18)

Publication number Priority date Publication date Assignee Title
US6973669B2 (en) 1993-03-29 2005-12-06 Microsoft Corporation Pausing television programming in response to selection of hypertext link
US6853850B2 (en) 2000-12-04 2005-02-08 Mobigence, Inc. Automatic speaker volume and microphone gain control in a portable handheld radiotelephone with proximity sensors
AU2002255568B8 (en) 2001-02-20 2014-01-09 Adidas Ag Modular personal network systems and methods
US20030003864A1 (en) 2001-06-28 2003-01-02 Antony Locke Cover for a mobile telephone handset
US6944474B2 (en) 2001-09-20 2005-09-13 Sound Id Sound enhancement for mobile phones and other products producing personalized audio for users
CA2653767A1 (en) 2006-04-04 2007-10-11 Cleartone Technologies Limited Calibrated digital headset and audiometric test methods therewith
DE102006042084A1 (en) 2006-09-07 2008-03-27 Siemens Audiologische Technik Gmbh Gender specific hearing aid fitting
US20100119093A1 (en) 2008-11-13 2010-05-13 Michael Uzuanis Personal listening device with automatic sound equalization and hearing testing
US20110099507A1 (en) 2009-10-28 2011-04-28 Google Inc. Displaying a collection of interactive elements that trigger actions directed to an item
WO2011109790A1 (en) 2010-03-04 2011-09-09 Thx Ltd. Electronic adapter unit for selectively modifying audio or video data for use with an output device
US8379871B2 (en) 2010-05-12 2013-02-19 Sound Id Personalized hearing profile generation with real-time feedback
US9736600B2 (en) 2010-05-17 2017-08-15 Iii Holdings 4, Llc Devices and methods for collecting acoustic data
US8611570B2 (en) 2010-05-25 2013-12-17 Audiotoniq, Inc. Data storage system, hearing aid, and method of selectively applying sound filters
US8639516B2 (en) 2010-06-04 2014-01-28 Apple Inc. User-specific noise suppression for voice quality improvements
US8761421B2 (en) 2011-01-14 2014-06-24 Audiotoniq, Inc. Portable electronic device and computer-readable medium for remote hearing aid profile storage
US20130052956A1 (en) 2011-08-22 2013-02-28 James W. McKell Hand-Held Mobile Device Dock
US8965017B2 (en) 2012-01-06 2015-02-24 Audiotoniq, Inc. System and method for automated hearing aid profile update
US9344793B2 (en) * 2013-02-11 2016-05-17 Symphonic Audio Technologies Corp. Audio apparatus and methods

Cited By (88)

Publication number Priority date Publication date Assignee Title
US10194187B2 (en) * 2000-02-17 2019-01-29 Audible Magic Corporation Method and apparatus for identifying media content presented on a media playing device
US20200349955A1 (en) * 2012-02-24 2020-11-05 Cirrus Logic International Semiconductor Ltd. System and method for speaker recognition on mobile devices
US11545155B2 (en) * 2012-02-24 2023-01-03 Cirrus Logic, Inc. System and method for speaker recognition on mobile devices
US10698952B2 (en) 2012-09-25 2020-06-30 Audible Magic Corporation Using digital fingerprints to associate data with a work
US9319019B2 (en) 2013-02-11 2016-04-19 Symphonic Audio Technologies Corp. Method for augmenting a listening experience
US9344815B2 (en) 2013-02-11 2016-05-17 Symphonic Audio Technologies Corp. Method for augmenting hearing
US9344793B2 (en) * 2013-02-11 2016-05-17 Symphonic Audio Technologies Corp. Audio apparatus and methods
US9787273B2 (en) * 2013-06-13 2017-10-10 Google Technology Holdings LLC Smart volume control of device audio output based on received audio input
US20140372109A1 (en) * 2013-06-13 2014-12-18 Motorola Mobility Llc Smart volume control of device audio output based on received audio input
US10701500B2 (en) * 2013-10-23 2020-06-30 Voyetra Turtle Beach, Inc. Electronic device charging via an audio jack
US11825275B2 (en) 2013-10-23 2023-11-21 Voyetra Turtle Beach, Inc. Electronic device charging via an audio jack
US10869145B2 (en) * 2013-10-23 2020-12-15 Voyetra Turtle Beach, Inc. Electronic device charging via an audio jack
US11363394B2 (en) * 2013-10-23 2022-06-14 Voyetra Turtle Beach, Inc. Electronic device charging via an audio jack
US10536792B2 (en) * 2013-10-23 2020-01-14 Voyetra Turtle Beach, Inc. Electronic device charging via an audio jack
US9712934B2 (en) 2014-07-16 2017-07-18 Eariq, Inc. System and method for calibration and reproduction of audio signals based on auditory feedback
US11032632B2 (en) 2014-11-21 2021-06-08 Samsung Electronics Co., Ltd. Earphones with activity controlled output
US11418867B2 (en) 2014-11-21 2022-08-16 Samsung Electronics Co., Ltd. Earphones with activity controlled output
US11019422B2 (en) * 2014-11-21 2021-05-25 Samsung Electronics Co., Ltd. Earphones with activity controlled output
US20170118550A1 (en) * 2015-01-29 2017-04-27 Otojoy LLC Telecoil adapter
US10219065B2 (en) * 2015-01-29 2019-02-26 Otojoy LLC Telecoil adapter
US20160227310A1 (en) * 2015-01-29 2016-08-04 Otojoy LLC Telecoil adapter
US20160261963A1 (en) * 2015-03-04 2016-09-08 William S. Woods Techniques for increasing processing capability in hearing aids
US10129661B2 (en) * 2015-03-04 2018-11-13 Starkey Laboratories, Inc. Techniques for increasing processing capability in hearing aids
CN104867512A (en) * 2015-03-26 2015-08-26 加一联创电子科技有限公司 Music data obtaining method, earphone, loudspeaker box, and music player
US10019222B2 (en) * 2015-03-26 2018-07-10 1More Inc. Method for obtaining music data, earphone and music player
US20170046118A1 (en) * 2015-03-26 2017-02-16 1More Inc. Method for obtaining music data, earphone and music player
US10701197B2 (en) 2015-04-03 2020-06-30 Pinn, Inc. Mobile system with wireless earbud
US11616871B2 (en) * 2015-04-03 2023-03-28 Pinn, Inc. Mobile system with wireless earbud
US20210352170A1 (en) * 2015-04-03 2021-11-11 Pinn, Inc. Mobile System with Wireless Earbud
US11102340B2 (en) 2015-04-03 2021-08-24 Pinn, Inc. Mobile system with wireless earbud
US10455066B2 (en) 2015-04-03 2019-10-22 Pinn, Inc. Mobile system with wireless earbud
US9807491B2 (en) * 2015-04-03 2017-10-31 Pinn, Inc. Electronic device with wireless earbud
US10609198B2 (en) 2015-04-03 2020-03-31 Pinn, Inc. Personal media system including base station and wireless earbud
US20180041826A1 (en) * 2015-04-27 2018-02-08 Pugz Gear Ab Wearable device for listening to sound
US10219062B2 (en) * 2015-06-05 2019-02-26 Apple Inc. Wireless audio output devices
US20160360350A1 (en) * 2015-06-05 2016-12-08 Apple Inc. Wireless audio output devices
US11026010B2 (en) 2015-09-30 2021-06-01 Apple Inc. Portable listening device with sensors
US11690428B2 (en) 2015-09-30 2023-07-04 Apple Inc. Portable listening device with accelerometer
US10681446B2 (en) 2015-09-30 2020-06-09 Apple Inc. Earbud case with pairing button
US11026011B2 (en) 2015-09-30 2021-06-01 Apple Inc. Wireless earbud
US11944172B2 (en) 2015-09-30 2024-04-02 Apple Inc. Portable listening device with sensors
US10880630B2 (en) 2015-09-30 2020-12-29 Apple Inc. Wireless earbud
US10904652B2 (en) 2015-09-30 2021-01-26 Apple Inc. Earbud case with insert
EP3383063A4 (en) * 2015-12-16 2018-12-19 Huawei Technologies Co., Ltd. Method and device for adjusting earphone sound volume
US11005439B2 (en) 2015-12-16 2021-05-11 Huawei Technologies Co., Ltd. Earphone volume adjustment method and apparatus
US9876893B2 (en) * 2016-01-08 2018-01-23 Lg Electronics Inc. Mobile terminal altering operation when coupled with audio sub module
US20170264991A1 (en) * 2016-03-11 2017-09-14 Voxx International Corporation Headphone or earphone assembly having a pivotable usb charging connector integrated into a housing thereof
US10991392B2 (en) * 2016-04-29 2021-04-27 Nokia Technologies Oy Apparatus, electronic device, system, method and computer program for capturing audio signals
US20170316803A1 (en) * 2016-04-29 2017-11-02 Nokia Technologies Oy Apparatus, electronic device, system, method and computer program for capturing audio signals
US10008244B2 (en) * 2016-05-13 2018-06-26 Jumplux Technology Co., Ltd. Method for synchronously accessing audio data and audio playback system using the same
US20180007461A1 (en) * 2016-07-04 2018-01-04 Tack Ching Wun In-Ear Headphone For Gaming, High Fidelity Music and 3D Effect
WO2018081599A1 (en) * 2016-10-28 2018-05-03 Avnera Corporation Headset charger node
US20180124517A1 (en) * 2016-10-28 2018-05-03 Avnera Corporation Headset charger node
US11102580B2 (en) * 2016-10-28 2021-08-24 Avnera Corporation Headset charger node
WO2018091856A1 (en) * 2016-11-21 2018-05-24 Institut Mines Telecom Improved audio headphones device
FR3059191A1 (en) * 2016-11-21 2018-05-25 Institut Mines Telecom Improved audio headset device
US20180324535A1 (en) * 2017-05-03 2018-11-08 Bragi GmbH Hearing aid with added functionality
US10708699B2 (en) * 2017-05-03 2020-07-07 Bragi GmbH Hearing aid with added functionality
US10993049B2 (en) 2018-07-20 2021-04-27 Mimi Hearing Technologies GmbH Systems and methods for modifying an audio signal using custom psychoacoustic models
US10966033B2 (en) 2018-07-20 2021-03-30 Mimi Hearing Technologies GmbH Systems and methods for modifying an audio signal using custom psychoacoustic models
US10871940B2 (en) * 2018-08-22 2020-12-22 Mimi Hearing Technologies GmbH Systems and methods for sound enhancement in audio systems
US20200065058A1 (en) * 2018-08-22 2020-02-27 Mimi Hearing Technologies GmbH Systems and methods for sound enhancement in audio systems
US11134353B2 (en) 2019-01-04 2021-09-28 Harman International Industries, Incorporated Customized audio processing based on user-specific and hardware-specific audio information
EP3678388A1 (en) * 2019-01-04 2020-07-08 Harman International Industries, Incorporated Customized audio processing based on user-specific and hardware-specific audio information
US20200221223A1 (en) * 2019-01-07 2020-07-09 Kikago Limited Audio device and audio processing method
GB2595132A (en) * 2019-01-14 2021-11-17 Yamaha Guitar Group A microphone that functions as either a digital wireless microphone or a wired passive microphone
WO2020150030A1 (en) * 2019-01-14 2020-07-23 Yamaha Guitar Group A microphone that functions as either a digital wireless microphone or a wired passive microphone
US10924847B2 (en) * 2019-01-14 2021-02-16 Yamaha Guitar Group, Inc. Microphone that functions as either a digital wireless microphone or a wired passive microphone
US20200280814A1 (en) * 2019-03-01 2020-09-03 Bose Corporation Augmented reality audio playback control
US11415428B2 (en) * 2019-03-06 2022-08-16 Nain Inc. Audio information providing system, control method, and non-transitory computer readable medium
US10638210B1 (en) * 2019-03-29 2020-04-28 Sonova Ag Accelerometer-based walking detection parameter optimization for a hearing device user
US11188292B1 (en) 2019-04-03 2021-11-30 Discovery Sound Technology, Llc System and method for customized heterodyning of collected sounds from electromechanical equipment
US20220262362A1 (en) * 2020-02-04 2022-08-18 Soundhound, Inc. Wake suppression for audio playing and listening devices
US11922939B2 (en) * 2020-02-04 2024-03-05 Soundhound Ai Ip, Llc Wake suppression for audio playing and listening devices
US11288036B2 (en) 2020-06-03 2022-03-29 Microsoft Technology Licensing, Llc Adaptive modulation of audio content based on background noise
EP4009322A3 (en) * 2020-09-17 2022-06-15 Orcam Technologies Ltd. Systems and methods for selectively attenuating a voice
US20220084537A1 (en) * 2020-09-17 2022-03-17 Orcam Technologies Ltd. Systems and methods for selectively attenuating a voice
US11929087B2 (en) * 2020-09-17 2024-03-12 Orcam Technologies Ltd. Systems and methods for selectively attenuating a voice
US20220201416A1 (en) * 2020-12-18 2022-06-23 Bose Corporation Sensor management for wireless devices
US11671777B2 (en) * 2020-12-18 2023-06-06 Bose Corporation Sensor management for wireless devices
US11818523B2 (en) 2022-01-14 2023-11-14 Chromatic Inc. System and method for enhancing speech of target speaker from audio signal in an ear-worn device using voice signatures
US11818547B2 (en) 2022-01-14 2023-11-14 Chromatic Inc. Method, apparatus and system for neural network hearing aid
US11832061B2 (en) 2022-01-14 2023-11-28 Chromatic Inc. Method, apparatus and system for neural network hearing aid
US11877125B2 (en) 2022-01-14 2024-01-16 Chromatic Inc. Method, apparatus and system for neural network enabled hearing aid
US11812225B2 (en) 2022-01-14 2023-11-07 Chromatic Inc. Method, apparatus and system for neural network hearing aid
WO2023137203A1 (en) * 2022-01-14 2023-07-20 Chromatic Inc. Enhancing speech of target speaker from audio signal in an ear-worn device using voice signatures
US11950056B2 (en) 2022-01-14 2024-04-02 Chromatic Inc. Method, apparatus and system for neural network hearing aid
US11902747B1 (en) 2022-08-09 2024-02-13 Chromatic Inc. Hearing loss amplification that amplifies speech and noise subsignals differently

Also Published As

Publication number Publication date
US9344793B2 (en) 2016-05-17
US20160234589A1 (en) 2016-08-11

Similar Documents

Publication Publication Date Title
US9344793B2 (en) Audio apparatus and methods
CN108564949B (en) Wrist strap type AI voice interaction device and system
US10521512B2 (en) Dynamic text-to-speech response from a smart speaker
US9756422B2 (en) Noise estimation in a mobile device using an external acoustic microphone signal
CN104521247B (en) Bluetooth headset hearing aid and anti-noise method and apparatus
US8290537B2 (en) Sidetone adjustment based on headset or earphone type
WO2015026859A1 (en) Audio apparatus and methods
EP3217686A1 (en) System and method for enhancing performance of audio transducer based on detection of transducer status
US9326067B2 (en) Multiplexing audio system and method
US20120282976A1 (en) Cellphone managed Hearing Eyeglasses
US20100172522A1 (en) Programmable earphone device with customizable controls and heartbeat monitoring
CN104243662B (en) Terminal prompt mode adjusting method and terminal
US20150023516A1 (en) System and method for using a headset jack to control electronic device functions
CN105049977A (en) Automatic earphone volume adjusting method and device
CN106464998A (en) Collaboratively processing audio between headset and source to mask distracting noise
CN104717594B (en) Hearing assistance system, sound booster handset and its hearing-aid method
US20170245065A1 (en) Hearing Eyeglass System and Method
CN110187859A (en) A kind of denoising method and electronic equipment
CN105163216A (en) Automatic music playing method and earphone
CN104966521A (en) Method and apparatus for adjusting play mode of music
CN108766468A (en) A kind of intelligent sound detection method, wireless headset, TWS earphones and terminal
EP2560413A1 (en) Audio device and audio producing method
CN109121059A (en) Loudspeaker plug-hole detection method and Related product
CN113411726A (en) Audio processing method, device and system
CN113411417A (en) Wireless sound amplification system and terminal

Legal Events

Date Code Title Description
AS Assignment

Owner name: SYMPHONIC AUDIO TECHNOLOGIES CORP., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SELIG, AARON ALEXANDER;SRINIVASAN, VARUN;REEL/FRAME:033906/0505

Effective date: 20140920

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: MIMI HEARING TECHNOLOGIES GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SYMPHONIC AUDIO TECHNOLOGIES CORP.;REEL/FRAME:045972/0269

Effective date: 20180514

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 8