US20230209244A1 - Headphone Audio Controller - Google Patents

Headphone Audio Controller

Info

Publication number
US20230209244A1
US20230209244A1 (Application US17/889,561)
Authority
US
United States
Prior art keywords
audio
transfer function
headphone
microphone
controller
Prior art date
Legal status
Granted
Application number
US17/889,561
Other versions
US11689844B1 (en)
Inventor
Ole Mattis Nielsen
Current Assignee
Bose Corp
Original Assignee
Bose Corp
Priority date
Filing date
Publication date
Application filed by Bose Corp
Priority to US17/889,561 (US11689844B1)
Priority to US18/195,645 (US12028675B2)
Application granted
Publication of US11689844B1
Publication of US20230209244A1
Priority to US18/675,480 (US20240314489A1)
Status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1083Reduction of ambient noise
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1781Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions
    • G10K11/17821Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions characterised by the analysis of the input signals only
    • G10K11/17827Desired external signals, e.g. pass-through audio such as music or speech
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1787General system configurations
    • G10K11/17879General system configurations using both a reference signal and an error signal
    • G10K11/17881General system configurations using both a reference signal and an error signal the reference signal being an acoustic signal, e.g. recorded with a microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1091Details not provided for in groups H04R1/1008 - H04R1/1083
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/04Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K2210/00Details of active noise control [ANC] covered by G10K11/178 but not provided for in any of its subgroups
    • G10K2210/10Applications
    • G10K2210/108Communication systems, e.g. where useful sound is kept and noise is cancelled
    • G10K2210/1081Earphones, e.g. for telephones, ear protectors or headsets
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K2210/00Details of active noise control [ANC] covered by G10K11/178 but not provided for in any of its subgroups
    • G10K2210/30Means
    • G10K2210/301Computational
    • G10K2210/3026Feedback
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K2210/00Details of active noise control [ANC] covered by G10K11/178 but not provided for in any of its subgroups
    • G10K2210/30Means
    • G10K2210/301Computational
    • G10K2210/3027Feedforward
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/01Hearing devices using active noise cancellation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/15Determination of the acoustic seal of ear moulds or ear tips of hearing devices

Definitions

  • This disclosure relates to controlling an audio headphone.
  • Headphones can be controlled with the aim of providing a particularly equalized sound.
  • Headphones with active noise reduction sometimes include a transparency or aware mode where external sounds are sensed by an external microphone and reproduced to the user.
  • Such headphones can also be controlled to provide a desired transparency sound profile.
  • aspects and examples are directed to determining audio controllers for one or both of headphone equalization (EQ) and headphone aware mode.
  • the controllers are calculated during use (on the fly) based at least in part on an audio transfer function that is measured between an acoustic transducer of the headphones and a microphone that senses the transducer output (e.g., a feedback microphone in ANR headphones), and further based on this same transfer function but determined with a feedback controller turned on.
  • EQ and aware mode controllers are customized for the particular user, without any action needing to be taken by the user or others. This provides a more consistent listening experience across large populations of users.
  • a method of determining an audio controller for a headphone that is configured to use an acoustic transducer to develop sound that is delivered to an ear of a user and that includes a feedback microphone that is configured to sense sound developed by the acoustic transducer includes measuring a first audio transfer function between the acoustic transducer and the feedback microphone, determining a second audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied, and calculating the audio controller based on both the first audio transfer function and the second audio transfer function.
  • measuring the first audio transfer function comprises providing an audio signal that is configured to operate the acoustic transducer to generate sound, sensing the sound with the feedback microphone, and calculating the first audio transfer function based on the audio signal and the sensed sound.
  • determining the second audio transfer function comprises measuring an audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied.
  • measuring an audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied comprises providing an audio signal that is configured to operate the acoustic transducer to generate sound, sensing the sound with the feedback microphone, and calculating the second audio transfer function based on the audio signal, the sensed sound, and the feedback controller.
  • determining the second audio transfer function comprises calculating the second audio transfer function based on both the first audio transfer function and the feedback controller.
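  • As one concrete illustration (assuming a conventional single negative-feedback loop around the driver-to-feedback-microphone path, which the disclosure does not mandate), the second transfer function follows from the first and the feedback controller Kfb as G̃sd = Gsd/(1 + Kfb·Gsd), where Gsd is the measured transfer function and the tilde denotes the same transfer function with the feedback controller applied.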
  • the audio controller comprises an equalization (EQ) controller.
  • the audio controller comprises a controller for a headphone use aware mode wherein sound external to the headphone is reproduced by the acoustic transducer.
  • the method further includes providing a measured power spectrum for a microphone located in an ear canal of a person, and providing a measured power spectrum for a microphone located on the person's head.
  • the power spectra are measured on multiple different people. A compilation or some sort of average of the values from this dataset can then be used in the headphones.
  • the calculation of the audio controller is further based on both the measured power spectrum for a microphone located in an ear canal of a person and the measured power spectrum for a microphone located on the person's head.
  • the calculation of the audio controller is further based on a third audio transfer function between an acoustic transducer and a microphone located in an ear canal of a person.
  • the method further includes providing a third audio transfer function between a first location of a feedback microphone in an ear canal of a person and a second location on the person's head.
  • the method still further includes providing a fourth audio transfer function between the acoustic transducer and the first location of a feedback microphone in an ear canal of a person.
  • the second, third, and fourth audio transfer functions are each calculated by providing an audio signal to an acoustic transducer, sensing transduced sounds with a microphone, and calculating the transfer function based on the audio signal and the sensed sound.
  • an audio transfer function is measured on the user in real time.
  • data derived from measurements made on multiple people in a controlled environment are used together with the measured transfer function to calculate one or both of the aware mode and EQ audio controllers.
  • the method further includes providing first and second constant values.
  • the first and second constant values are calculated based on both the third and fourth audio transfer functions.
  • the first and second constant values are calculated based on both the third and fourth audio transfer functions at multiple different fits of the headphone on multiple different people.
  • the first and second constant values represent frequency-dependent complex quantities.
  • a computer program product having a non-transitory computer-readable medium including computer program logic encoded thereon that, when performed on a headphone that is configured to use an acoustic transducer to develop sound that is delivered to an ear of a user and that includes a feedback microphone that is configured to sense sound developed by the acoustic transducer, causes the headphone to measure a first audio transfer function between the acoustic transducer and the feedback microphone, determine a second audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied, and calculate the audio controller based on both the first audio transfer function and the second audio transfer function.
  • the audio controller comprises at least one of an equalization (EQ) controller and a controller for a headphone aware use mode wherein sound external to the headphone is reproduced by the acoustic transducer.
  • the first audio transfer function is measured by providing an audio signal that is configured to operate the acoustic transducer to generate sound, sensing the sound with the feedback microphone, and calculating the first audio transfer function based on the audio signal and the sensed sound, and further wherein the second audio transfer function is calculated based on both the first audio transfer function and the feedback controller.
  • the computer program product further includes providing a measured power spectrum for a microphone located in an ear canal of a person, and providing a measured power spectrum for a microphone located on the person's head, wherein the audio controller calculation is further based on the measured power spectrum for a microphone located in an ear canal of a person, the measured power spectrum for a microphone located on the person's head, and a third audio transfer function between an acoustic transducer and a microphone located in an ear canal of a person.
  • FIG. 1 is a partial cross-sectional view of a headphone.
  • FIG. 2 is a block diagram of aspects of a headphone.
  • FIG. 3 is a schematic diagram of a person wearing headphones.
  • FIG. 4 is a flow chart illustrating a method for calculating an audio controller.
  • FIG. 5 is a plot of the logarithmic standard deviation of third octave smoothed aware mode insertion gain in ANR earbuds with and without exemplary audio controllers.
  • FIG. 6 is a plot of the logarithmic standard deviation of third octave smoothed EQ mode insertion gain in ANR earbuds with and without exemplary audio controllers.
  • references to examples, components, elements, acts, or functions of the computer program products, systems and methods herein referred to in the singular may also embrace embodiments including a plurality, and any references in plural to any example, component, element, act, or function herein may also embrace examples including only a singularity. Accordingly, references in the singular or plural form are not intended to limit the presently disclosed systems or methods, their components, acts, or elements.
  • the use herein of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms.
  • the audio controllers can be one or both of headphone equalization (EQ) and headphone aware mode controllers.
  • the controllers are calculated during use of the headphones, using existing headphone components and processing. The calculations are based at least in part on an audio transfer function that is measured between the acoustic transducer of the headphones and a microphone that senses the transducer output (e.g., a feedback microphone in ANR headphones, where the feedback microphone is typically located between the transducer and the user's eardrum).
  • the calculations are further based on this same transfer function, but determined with a feedback controller turned on; this determination can be calculated based on the measured transfer function.
  • a result of this real-time controller calculation is that the EQ and aware mode controllers are customized for the particular user, during use of the headphones, based on a single measured audio transfer function. This provides a more consistent listening experience across large populations of users of the subject headphones.
  • a first audio transfer function is determined by operating the acoustic transducer of the headphones and sensing the sound with the feedback microphone.
  • the first audio transfer function is calculated based on the audio signal provided to the transducer and based on the sensed sound.
  • the second audio transfer function is determined by measuring an audio transfer function between the acoustic transducer and the feedback microphone, but this time with the headphone feedback controller applied.
  • the second transfer function is calculated based on the audio signal, the sensed sound, and the feedback controller.
  • the second audio transfer function is determined by calculating it based on both the first audio transfer function and the feedback controller.
  • the calculation of the audio controller is based on data obtained during design of the controller calculation scheme.
  • data can be measured in a lab or another controlled environment, across multiple different people and multiple fittings of headphone use on both ears of each person. Measuring on different people provides data relative to many different ear geometries.
  • this data includes a measured power spectrum for a microphone located in the ear canal (which approximates the ear drum), and a separate measured power spectrum for a microphone located on the person's head in a location where it does not interfere with the headphones.
  • the dataset can be developed by placing microphones in the ears of human subjects, and placing a microphone on the subjects' heads. Measurements are made with and without headphones.
  • the driver in the headphones is used to measure the transfer functions from it to the feedback and canal microphones while the headset is worn.
  • the other two measurements are made by playing sound from speakers in the measurement room. This is done both with and without the headset worn.
  • For EQ, only the driver measurement and the open room noise measurement are used. For aware mode, all three are used, since the response of the outside (feedforward) microphone when the headset is worn is needed.
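  • A minimal sketch of how such a lab dataset and its ensemble averages might be organized is shown below; the structure and field names are illustrative assumptions, not the actual data format used for the disclosure.

```python
# Hypothetical organization of the lab measurement campaign: each fit of the
# headset on each subject contributes driver-to-microphone transfer functions
# plus room-noise power spectra measured with and without the headset worn.
from dataclasses import dataclass
import numpy as np

@dataclass
class LabFit:
    gsd: np.ndarray        # driver -> feedback mic transfer function, headset worn
    gcd: np.ndarray        # driver -> canal mic transfer function, headset worn
    scc_open: np.ndarray   # canal-mic power spectrum, room noise, headset off
    srr: np.ndarray        # reference (head) mic power spectrum, headset worn
    srr_open: np.ndarray   # reference mic power spectrum, headset off

def ensemble_average(fits, field):
    """Average one spectrum over all subjects and fits (per frequency bin)."""
    return np.mean([getattr(f, field) for f in fits], axis=0)

# Example: average open-ear canal spectrum relative to the reference microphone.
# scc_over_srr_open = ensemble_average(fits, "scc_open") / ensemble_average(fits, "srr_open")
```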
  • lab data is represented by two constant values. These constant values can be derived as further described below.
  • Earbuds generally include an electro-acoustic transducer for producing sound, and are configured to deliver the sound directly into the user's ear canal. Earbuds can be wireless or wired.
  • a headphone refers to a device that typically fits around, on, or in an ear and that radiates acoustic energy directly or indirectly into the ear canal. Headphones are sometimes referred to as earphones, earpieces, headsets, earbuds, or sport headphones, and can be wired or wireless.
  • a headphone includes a driver (acoustic transducer) to transduce electrical audio signals to acoustic energy.
  • the driver may or may not be housed in an earcup or in a housing that is configured to be located on the head or on the ear, or to be inserted directly into the user's ear canal.
  • a headphone may be a single stand-alone unit or one of a pair of headphones (each including at least one acoustic driver), one for each ear.
  • a headphone may be connected mechanically to another headphone, for example by a headband and/or by leads that conduct audio signals to an acoustic driver in the headphone.
  • a headphone may include components for wirelessly receiving audio signals.
  • a headphone may include components of an ANR system, which may include an internal microphone within the headphone housing and an external microphone that picks up sound outside the housing. Headphones may also include other functionality, such as additional microphones for an ANR system, or one or more microphones that are used to pick up the user's voice.
  • One or more of the systems and methods described herein, in various examples and combinations, may be used in a wide variety of headphones in various form factors.
  • One such form factor is an earbud.
  • Another is an on-ear or over-ear headphone.
  • the headphone includes an electro-acoustic transducer that is configured to develop sound for a user, a housing that holds the transducer, and a feedback microphone that is configured to detect sound in the housing before it reaches the eardrum.
  • a processor system of the headphone is programmed to accomplish methods of determining an audio controller, such as an equalization (EQ) controller and an aware mode controller.
  • FIG. 1 is a perspective view of a wireless in-ear earbud 10 .
  • An earbud is a non-limiting example of a headphone device.
  • Earbud 10 includes body or housing 12 that houses the active components of the earbud.
  • Housing 12 encloses electro-acoustic transducer (audio driver) 14 that generates sound via movable diaphragm 16 .
  • Housing 12 comprises front housing portion 22 and rear housing portion 23 .
  • Diaphragm 16 is driven in order to create sound pressure in front housing cavity 18 . Sound is also created in rear housing cavity 20 . Sound pressure is directed from cavity 18 out of front housing portion 22 via sound outlet 24 .
  • Internal microphone 32 is located inside of housing 12 . In an example microphone 32 is in housing portion 22 , as shown in FIG. 1 .
  • External microphone 34 is configured to sense sound external to housing 12 .
  • exterior microphone 34 is located inside of the housing and is acoustically coupled to the external environment via housing openings 36 that let environmental sound reach microphone 34 .
  • interior microphone 32 is used as a feedback microphone for active noise reduction (ANR)
  • exterior microphone 34 is used as a feed-forward microphone for ANR, and/or for transparency mode operation where environmental sound is sensed and then reproduced to the user so the user is more environmentally aware and can hear others speaking and the like.
  • An earbud typically also includes a pliable tip (not shown) that is engaged with neck 25 of housing portion 22 , to help direct the sound into the ear canal.
  • earbud 10 and its operation are well known in the technical field and so are not further described herein. Also, details of earbud 10 are exemplary of aspects of headphones and are not limiting of the scope of this disclosure, as the present audio controller can be used in varied types and designs of earbuds, earphones, and other types of headphones.
  • Earbud 10 also includes processor 30 .
  • processor 30 is configured to process outputs of microphones 32 and 34 .
  • the processor is used to accomplish other processing needed for earbud functionality, such as processing digital sound files that are to be reproduced by the earbud, as would be apparent to one skilled in the technical field.
  • the processor is configured to calculate and then apply the audio controllers disclosed herein. The use of EQ and aware-mode audio controllers is known in the technical field.
  • the processor is programmed to calculate an EQ controller and/or an aware mode controller based on an audio transfer function between transducer 14 and feedback microphone 32 .
  • the transfer function is determined both with and without the ANR feedback controller applied.
  • FIG. 2 is a block diagram of aspects of a headphone device 60 .
  • Device 60 is an earbud, but this is not a limitation of the disclosure as the present disclosure also applies to other types of headphones such as those described herein.
  • Device 60 includes processor 66 that receives audio data from external sources via wireless transceiver 68 .
  • Processor 66 also receives the outputs of the feedback microphone(s) 70 and the feedforward microphone(s) 72 .
  • Processor 66 outputs audio data that is converted into analog signals that are supplied to audio driver 64 .
  • In an example device 60 includes memory comprising instructions that, when executed by the processor, accomplish the calculation and application of the audio controllers, and other processing described herein.
  • device 60 is configured to store a computer program product using a non-transitory computer-readable medium including computer program logic encoded thereon that, when performed on the headphone device (e.g., by the processor) causes the headphone device to determine the audio controllers as described herein.
  • the details of wearable audio device 60 are exemplary of aspects of headphones and are not limiting of the scope of this disclosure, as the present audio controller methodologies can be used in varied types and designs of earbuds and headphones. Also note that aspects of headphone 60 that are not involved in the present audio controller methodologies are not illustrated in FIG. 2 , for the sake of simplicity.
  • Headphones are typically designed with control schemes that are aimed at providing a preset manufacturer-designed audio response both when music is played and during aware or transparency mode use.
  • Equalization audio controllers are designed to help accomplish a desired target-curve equalization (EQ), so that, at least ideally, the reproduced sound has a desired spectral response.
  • Transparency mode controllers are used to help accomplish a desired transparency sound reproduction; the controllers are typically designed to exactly reproduce the sensed external sounds. In some examples one or both of these audio controllers are stored in device memory and applied by a device controller. Audio controllers and their use in headphones are well known in the field of audio engineering.
  • However, when the headphones are actually used, the user's anatomy (such as the ear anatomy) as well as the way the headphones are worn creates a high degree of variability in the sound that is actually delivered from person to person. Thus, few if any people will actually receive the target sound profiles that are intended by the designed and installed EQ and aware (transparency) mode audio controllers.
  • one or both of the EQ and transparency audio controllers are calculated and applied in real time, during use of the headphones.
  • the audio controller calculation is based on an audio transfer function that is measured while the headphone is in use by the user. This transfer function is between the headphone acoustic transducer or driver and one or more headphone microphone(s) that receive the driver output.
  • this microphone can be a feedback microphone that is located between the driver and the user's eardrum.
  • this feedback microphone is typically located in the nozzle through which sound is delivered directly into the ear canal. Audio transfer functions and their calculation are well known in the audio field and so are not described in depth. Also, the application of an audio controller by a processor of the headphones is generally ubiquitous in headphones and so is also not described in depth.
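  • For illustration only, one common way for a headphone processor to apply a frequency-domain audio controller is to convert the controller response into an FIR impulse response and filter the audio stream with it. The sketch below assumes a one-sided complex controller response on a uniform grid up to Nyquist; the function names are hypothetical and not part of the disclosure.

```python
import numpy as np
from scipy.signal import oaconvolve

def apply_controller(audio, controller_spectrum):
    """Filter `audio` with the impulse response implied by the controller."""
    n_fft = 2 * (len(controller_spectrum) - 1)
    h = np.fft.irfft(controller_spectrum, n=n_fft)   # FIR impulse response
    return oaconvolve(audio, h)[: len(audio)]         # overlap-add FIR filtering
```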
  • FIG. 3 is a schematic representation of a user's head 80 that is useful to understanding the headphone audio controller.
  • Right headphone 86 is located on, over, or in right ear 82 with ear canal 83 .
  • Left headphone 88 is located on, over, or in left ear 84 with ear canal 85 .
  • microphone 90 is depicted located on the user's head 80 in a location that does not interfere with a headphone. The arrangement of the headphones and the microphone(s) is useful relative to aspects of the audio controller, as is further explained below.
  • method 100 of determining an audio controller for a headphone device is accomplished using an existing headphone control and sound delivery system, such as processor 66 , feedback (or other) microphone 70 , and driver 64 , FIG. 2 .
  • an audio transfer function between acoustic transducer 64 and microphone 70 is measured.
  • an audio transfer function measurement in some examples is based on a known audio signal being used to drive the transducer, reception of the resulting sound by the microphone, and calculation of the transfer function between the driver and the microphone.
  • a second audio transfer function is determined, this time with a feedback controller for the existing headphone ANR system applied.
  • At least one measurement is needed, and the other can be calculated with knowledge of the feedback controller.
  • Feedback controllers are used in headphones with ANR, and are well known in the technical field and so not further described herein.
  • the relevant audio controller(s) are calculated based on both the first and second transfer functions.
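  • The sketch below illustrates one way the two transfer functions in the steps described above could be obtained in practice: an H1 estimate of the driver-to-feedback-microphone response from a known probe signal, and the corresponding response with the feedback controller engaged computed from it under an assumed single-loop negative-feedback topology. The estimator, loop structure, and names are illustrative assumptions, not the patented method itself.

```python
import numpy as np
from scipy.signal import csd, welch

def measure_gsd(probe, fb_mic, fs, nperseg=1024):
    """H1 estimate of the driver -> feedback-microphone transfer function."""
    f, s_xy = csd(probe, fb_mic, fs=fs, nperseg=nperseg)   # cross-spectrum probe/mic
    _, s_xx = welch(probe, fs=fs, nperseg=nperseg)          # probe auto-spectrum
    return f, s_xy / s_xx

def closed_loop_gsd(gsd, kfb):
    """Same transfer function with the feedback controller applied.

    Assumes a single negative-feedback loop; `kfb` is the controller response
    sampled on the same frequency grid as `gsd`.
    """
    return gsd / (1.0 + kfb * gsd)
```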
  • In an exemplary aware mode controller Kaw, assuming a negligible direct sound path due to passive insertion gain and effective ANR, and where the power spectrum Scc of a microphone located in the ear canal is equal to the power spectrum Scc,open (where “open” means that a headphone is not worn during the measurement), the aware mode controller (termed a “semi-custom” or “sc” controller) may be represented by the following equation (1).
  • In these equations G denotes a transfer function; when G is used with a tilde it denotes a transfer function with the feedback controller applied.
  • the subscripts used herein are defined as follows: d: driver/speaker signal; s: “system”/feedback microphone; c: canal microphone (microphone placed in the ear canal, which is a stand-in for the ear drum); o: “outside”/feedforward microphone; r: reference microphone on the head in a location where the presence or not of the headphone/headset does not affect it acoustically.
  • the angled brackets denote an average of the enclosed quantity.
  • the transfer function subscript convention is from the second subscript to the first. Accordingly, Gsd is the transfer function from the driver to the feedback microphone. The power spectra generally refer to measurements when the headset is worn, unless the subscript contains an “,open”, which means a measurement when the headset is not worn. Srr is a measurement of the reference microphone when the headset is worn. As for the controllers, Kaw is the aware mode controller and Keq is the EQ controller. In some cases the absolute values of the controllers are defined. This is because their phase generally does not matter, as long as unnecessary phase is not added. In other words, they should be minimum phase. For music, this is well known.
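  • Because only the controller magnitudes are defined, the resulting filters can be realized as minimum-phase filters. The sketch below is a generic real-cepstrum construction of a minimum-phase spectrum from a magnitude-only response; it is illustrative and not taken from the disclosure.

```python
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Minimum-phase spectrum matching a one-sided magnitude response.

    `mag` samples the desired magnitude on a uniform grid from DC to Nyquist
    (inclusive). Uses the real-cepstrum (homomorphic) method.
    """
    n_fft = 2 * (len(mag) - 1)
    full_mag = np.concatenate([mag, mag[-2:0:-1]])    # two-sided magnitude
    log_mag = np.log(np.maximum(full_mag, 1e-12))     # avoid log(0)
    cepstrum = np.fft.ifft(log_mag).real
    folded = np.zeros_like(cepstrum)                  # fold to a causal cepstrum
    folded[0] = cepstrum[0]
    folded[1:n_fft // 2] = 2.0 * cepstrum[1:n_fft // 2]
    folded[n_fft // 2] = cepstrum[n_fft // 2]
    return np.exp(np.fft.fft(folded))[: len(mag)]
```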
  • An exemplary semi-custom EQ controller may be represented by equation (2) below:
  • the aware mode and EQ controllers can be calculated and applied on the fly, while the headphone is in use.
  • the aware mode and EQ controllers may operate more uniformly across multiple users if they are revised to take into account data measured from multiple subjects with multiple fits of the headphones, such as described above relative to laboratory data. Since there is no way to obtain open-ear information from a user in the field, appropriate lab data can be used as a substitute for open ear information.
  • Gsd is used to estimate the relationship Sss,open/Srr,open, where Sss,open is the power spectrum that would be expected at the feedback microphone if it were left in the ear canal, in the same location, after the earbud has been removed. Since the feedback microphone is in fact removed with the headset, Sss,open is an estimate of what it would have been in such an imagined situation.
  • Sss,open is estimated from Gsd based on an average transfer matrix of the section of the ear that is blocked by an earbud, where the matrix is estimated from the lab data.
  • constants c1 and c2 (the first and second constant values referred to above) are used to represent frequency-dependent complex quantities derived from the lab data. These constants are determined based on Gsd and a transfer function (Gp1p2) between an outside reference microphone (on the user's head) and a microphone that is in the ear canal at the location of the feedback microphone of an inserted earbud.
  • Gsd, Gcd, and Scc,open are measured, and Gp1p2 is estimated based on these three measurements.
  • constant values can be derived as follows.
  • Gp1p2 = Gsd/(c1·Gsd + c2)   (b)
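  • Under the form of equation (b), the two constants can be obtained per frequency bin by rearranging to Gp1p2·Gsd·c1 + Gp1p2·c2 = Gsd, which is linear in (c1, c2), and solving by least squares over all lab subjects and fits. The sketch below is an illustrative implementation of that idea, not the stated derivation procedure.

```python
import numpy as np

def fit_constants(gsd_fits, gp1p2_fits):
    """Per-frequency least-squares fit of the two complex constants.

    `gsd_fits` and `gp1p2_fits` are complex arrays of shape (n_fits, n_freqs)
    holding the lab measurements/estimates for every subject and fit.
    """
    n_fits, n_freqs = gsd_fits.shape
    c1 = np.zeros(n_freqs, dtype=complex)
    c2 = np.zeros(n_freqs, dtype=complex)
    for k in range(n_freqs):
        a = np.column_stack([gp1p2_fits[:, k] * gsd_fits[:, k],
                             gp1p2_fits[:, k]])
        b = gsd_fits[:, k]
        (c1[k], c2[k]), *_ = np.linalg.lstsq(a, b, rcond=None)
    return c1, c2
```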
  • Theoretical “optimal” or “opt” controllers include a microphone in the ear canal and so cannot be implemented on a headphone user.
  • the optimal controllers are useful for an understanding of the semi-custom and enhanced controllers disclosed herein.
  • Optimal aware mode and EQ controller equations are set forth in equations (5) and (6), respectively.
  • In an example an overall target is used, which may be similar to the target curve for speakers in a room, but with some tweaks given that a headset is being used.
  • FIG. 5 illustrates the standard deviation of total insertion gain (in dB) for an earbud with and without (termed “ensemble mode”) the exemplary semi-custom and enhanced aware mode controllers set forth in equations (1) and (2). This evidences that from about 700 Hz to about 7 kHz the performance is improved using either of these aware mode controllers.
  • FIG. 6 illustrates the standard deviation of total insertion gain (in dB) for an earbud with and without (termed “ensemble mode”) the exemplary semi-custom and enhanced EQ mode controllers set forth in equations (3) and (4). This evidences that from about 300 Hz to about 5 kHz the performance is improved using either of these EQ mode controllers.
  • the subject audio controller determination and application is able to improve both EQ and aware mode uses of headphones.
  • the calculation of the controller(s) is based on real-time measurement of the audio transfer function between an audio transducer of the headphone and one or more microphones of the headphone that are configured to receive the transducer output. Accordingly, the controller(s) are at least in part customized for the particular user of the headphones, and the current use of the headphones.
  • the aware mode and/or EQ performance of the headphones is demonstrably closer to the desired designed target performance.
  • the headphones thus provide performance that is closer to standard across different users as compared to headphones with pre-set aware mode and EQ controllers.
  • the present approaches to determining audio controllers are not limited to the existence of a feedback loop for the EQ mode.
  • removing all direct noise from outside may not be achievable without a feedback loop present, but a feedforward loop alone, together with passive sound attenuation, could potentially suffice.
  • this disclosure could use multiple microphones, including on the outside of the headset (for aware mode), which could involve simply summing them or using them in an array fashion to provide directional aware mode/hearing, as well as multiple feedback microphones.
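  • As one hypothetical illustration of using the outside microphones "in an array fashion" for directional aware mode, a simple delay-and-sum combiner steers the pickup toward a chosen look direction. The geometry, names, and parameters below are assumptions for illustration only.

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions, look_direction, fs, c=343.0):
    """Delay-and-sum combination of external microphone signals.

    mic_signals: (n_mics, n_samples), mic_positions: (n_mics, 3) in metres,
    look_direction: unit vector pointing toward the desired source direction.
    """
    delays = mic_positions @ look_direction / c      # per-mic arrival advance (s)
    delays -= delays.min()                            # keep all delays non-negative
    n = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, d in zip(mic_signals, delays):
        aligned = np.fft.irfft(np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * d), n=n)
        out += aligned
    return out / len(mic_signals)
```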
  • the subject EQ and aware mode controllers can be injected in two places: at the driver (disturbance injection) or before the feedback controller (command injection). Both can be used, with complementary filters in place.
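  • A minimal sketch of complementary filters that could split a controller between the two injection points is shown below, assuming a linear-phase FIR lowpass and its spectral complement; the crossover frequency, filter length, and routing are assumptions for illustration.

```python
from scipy.signal import firwin, unit_impulse

def complementary_pair(numtaps=255, crossover_hz=1000.0, fs=48000.0):
    """Complementary low/high split whose two bands sum to a pure delay."""
    h_low = firwin(numtaps, crossover_hz, fs=fs)              # linear-phase lowpass
    delay = unit_impulse(numtaps, idx=(numtaps - 1) // 2)     # delayed unit impulse
    h_high = delay - h_low                                     # spectral complement
    return h_low, h_high   # route one band to each injection point
```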
  • Elements of figures are shown and described as discrete elements in a block diagram. These may be implemented as one or more of analog circuitry or digital circuitry. Alternatively, or additionally, they may be implemented with one or more microprocessors executing software instructions.
  • the software instructions can include digital signal processing instructions. Operations may be performed by analog circuitry or by a microprocessor executing software that performs the equivalent of the analog operation.
  • Signal lines may be implemented as discrete analog or digital signal lines, as a discrete digital signal line with appropriate signal processing that is able to process separate signals, and/or as elements of a wireless communication system.
  • the steps may be performed by one element or a plurality of elements. The steps may be performed together or at different times.
  • the elements that perform the activities may be physically the same or proximate one another, or may be physically separate.
  • One element may perform the actions of more than one block.
  • Audio signals may be encoded or not, and may be transmitted in either digital or analog form. Conventional audio signal processing equipment and operations are in some cases omitted from the drawing.
  • Examples of the systems and methods described herein comprise computer components and computer-implemented steps that will be apparent to those skilled in the art.
  • the computer-implemented steps may be stored as computer-executable instructions on a computer-readable medium such as, for example, hard disks, optical disks, Flash ROMS, nonvolatile ROM, and RAM.
  • the computer-executable instructions may be executed on a variety of processors such as, for example, microprocessors, digital signal processors, gate arrays, etc.
  • A digital signal processor (DSP), a microprocessor, a logic controller, logic circuits, field programmable gate array(s) (FPGA), application-specific integrated circuit(s) (ASIC), general computing processor(s), micro-controller(s), and the like, or any combination of these, may be suitable, and may include analog or digital circuit components and/or other components with respect to any particular implementation.
  • Functions and components disclosed herein may operate in the digital domain, the analog domain, or a combination of the two, and certain examples include analog-to-digital converter(s) (ADC) and/or digital-to-analog converter(s) (DAC) where appropriate, despite the lack of illustration of ADCs or DACs in the various figures. Further, functions and components disclosed herein may operate in a time domain, a frequency domain, or a combination of the two, and certain examples include various forms of Fourier or similar analysis, synthesis, and/or transforms to accommodate processing in the various domains.
  • Any suitable hardware and/or software may be configured to carry out or implement components of the aspects and examples disclosed herein, and various implementations of aspects and examples may include components and/or functionality in addition to those disclosed.
  • Various implementations may include stored instructions for a digital signal processor and/or other circuitry to enable the circuitry, at least in part, to perform the functions described herein.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)
  • Headphones And Earphones (AREA)

Abstract

A method of determining an audio controller for a headphone that is configured to use an acoustic transducer to develop sound that is delivered to an ear of a user and that includes a feedback microphone that is configured to sense sound developed by the acoustic transducer, and a related computer program product and system. A first audio transfer function between the acoustic transducer and the feedback microphone is measured. A second audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied is determined. The audio controller is calculated based on both the first audio transfer function and the second audio transfer function.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation of application Ser. No. 17/562,142 filed on Dec. 27, 2021. Priority is claimed.
  • BACKGROUND
  • This disclosure relates to controlling an audio headphone.
  • Headphones can be controlled with the aim of providing a particularly equalized sound. Headphones with active noise reduction (ANR) sometimes include a transparency or aware mode where external sounds are sensed by an external microphone and reproduced to the user. Such headphones can also be controlled to provide a desired transparency sound profile.
  • SUMMARY
  • Aspects and examples are directed to determining audio controllers for one or both of headphone equalization (EQ) and headphone aware mode. The controllers are calculated during use (on the fly) based at least in part on an audio transfer function that is measured between an acoustic transducer of the headphones and a microphone that senses the transducer output (e.g., a feedback microphone in ANR headphones), and further based on this same transfer function but determined with a feedback controller turned on. A result is that the EQ and aware mode controllers are customized for the particular user, without any action needing to be taken by the user or others. This provides a more consistent listening experience across large populations of users.
  • All examples and features mentioned below can be combined in any technically possible way.
  • In one aspect a method of determining an audio controller for a headphone that is configured to use an acoustic transducer to develop sound that is delivered to an ear of a user and that includes a feedback microphone that is configured to sense sound developed by the acoustic transducer includes measuring a first audio transfer function between the acoustic transducer and the feedback microphone, determining a second audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied, and calculating the audio controller based on both the first audio transfer function and the second audio transfer function.
  • Some examples include one of the above and/or below features, or any combination thereof. In an example measuring the first audio transfer function comprises providing an audio signal that is configured to operate the acoustic transducer to generate sound, sensing the sound with the feedback microphone, and calculating the first audio transfer function based on the audio signal and the sensed sound. In some examples determining the second audio transfer function comprises measuring an audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied. In an example measuring an audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied comprises providing an audio signal that is configured to operate the acoustic transducer to generate sound, sensing the sound with the feedback microphone, and calculating the second audio transfer function based on the audio signal, the sensed sound, and the feedback controller.
  • Some examples include one of the above and/or below features, or any combination thereof. In some examples determining the second audio transfer function comprises calculating the second audio transfer function based on both the first audio transfer function and the feedback controller. In an example the audio controller comprises an equalization (EQ) controller. In an example the audio controller comprises a controller for a headphone use aware mode wherein sound external to the headphone is reproduced by the acoustic transducer.
  • Some examples include one of the above and/or below features, or any combination thereof. In some examples the method further includes providing a measured power spectrum for a microphone located in an ear canal of a person, and providing a measured power spectrum for a microphone located on the person's head. In some examples the power spectra are measured on multiple different people. A compilation or sort of average of the values from this dataset can then be used in the headphones. In an example the calculation of the audio controller is further based on both the measured power spectrum for a microphone located in an ear canal of a person and the measured power spectrum for a microphone located on the person's head. In an example the calculation of the audio controller is further based on a third audio transfer function between an acoustic transducer and a microphone located in an ear canal of a person.
  • Some examples include one of the above and/or below features, or any combination thereof. In some examples the method further includes providing a third audio transfer function between a first location of a feedback microphone in an ear canal of a person and a second location on the person's head. In an example the method still further includes providing a fourth audio transfer function between the acoustic transducer and the first location of a feedback microphone in an ear canal of a person. In an example the second, third, and fourth audio transfer functions are each calculated by providing an audio signal to an acoustic transducer, sensing transduced sounds with a microphone, and calculating the transfer function based on the audio signal and the sensed sound. In some examples an audio transfer function is measured on the user in real time. As further explained below, in some examples data derived from measurements made on multiple people in a controlled environment are used together with the measured transfer function to calculate one or both of the aware mode and EQ audio controllers.
  • Some examples include one of the above and/or below features, or any combination thereof. In an example the method further includes providing first and second constant values. In an example the first and second constant values are calculated based on both the third and fourth audio transfer functions. In an example the first and second constant values are calculated based on both the third and fourth audio transfer functions at multiple different fits of the headphone on multiple different people. In an example the first and second constant values represent frequency-dependent complex quantities.
  • In another aspect a computer program product having a non-transitory computer-readable medium including computer program logic encoded thereon that, when performed on a headphone that is configured to use an acoustic transducer to develop sound that is delivered to an ear of a user and that includes a feedback microphone that is configured to sense sound developed by the acoustic transducer, causes the headphone to measure a first audio transfer function between the acoustic transducer and the feedback microphone, determine a second audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied, and calculate the audio controller based on both the first audio transfer function and the second audio transfer function.
  • Some examples include one of the above and/or below features, or any combination thereof. In an example the audio controller comprises at least one of an equalization (EQ) controller and a controller for a headphone aware use mode wherein sound external to the headphone is reproduced by the acoustic transducer. In an example the first audio transfer function is measured by providing an audio signal that is configured to operate the acoustic transducer to generate sound, sensing the sound with the feedback microphone, and calculating the first audio transfer function based on the audio signal and the sensed sound, and further wherein the second audio transfer function is calculated based on both the first audio transfer function and the feedback controller. In an example the computer program product further includes providing a measured power spectrum for a microphone located in an ear canal of a person, and providing a measured power spectrum for a microphone located on the person's head, wherein the audio controller calculation is further based on the measured power spectrum for a microphone located in an ear canal of a person, the measured power spectrum for a microphone located on the person's head, and a third audio transfer function between an acoustic transducer and a microphone located in an ear canal of a person.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Various aspects of at least one example are discussed below with reference to the accompanying figures, which are not intended to be drawn to scale. The figures are included to provide illustration and a further understanding of the various aspects and examples, and are incorporated in and constitute a part of this specification, but are not intended as a definition of the limits of the inventions. In the figures, identical or nearly identical components illustrated in various figures may be represented by a like reference character or numeral. For purposes of clarity, not every component may be labeled in every figure. In the figures:
  • FIG. 1 is a partial cross-sectional view of a headphone.
  • FIG. 2 is a block diagram of aspects of a headphone.
  • FIG. 3 is a schematic diagram of a person wearing headphones.
  • FIG. 4 is a flow chart illustrating a method for calculating an audio controller.
  • FIG. 5 is a plot of the logarithmic standard deviation of third octave smoothed aware mode insertion gain in ANR earbuds with and without exemplary audio controllers.
  • FIG. 6 is a plot of the logarithmic standard deviation of third octave smoothed EQ mode insertion gain in ANR earbuds with and without exemplary audio controllers.
  • DETAILED DESCRIPTION
  • Examples of the systems, methods and apparatuses discussed herein are not limited in application to the details of construction and the arrangement of components set forth in the following description or illustrated in the accompanying drawings. The systems, methods and apparatuses are capable of implementation in other examples and of being practiced or of being carried out in various ways. Examples of specific implementations are provided herein for illustrative purposes only and are not intended to be limiting. In particular, functions, components, elements, and features discussed in connection with any one or more examples are not intended to be excluded from a similar role in any other examples.
  • Examples disclosed herein may be combined with other examples in any manner consistent with at least one of the principles disclosed herein, and references to “an example,” “some examples,” “an alternate example,” “various examples,” “one example” or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described may be included in at least one example. The appearances of such terms herein are not necessarily all referring to the same example.
  • Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. Any references to examples, components, elements, acts, or functions of the computer program products, systems and methods herein referred to in the singular may also embrace embodiments including a plurality, and any references in plural to any example, component, element, act, or function herein may also embrace examples including only a singularity. Accordingly, references in the singular or plural form are not intended to limit the presently disclosed systems or methods, their components, acts, or elements. The use herein of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms.
  • This disclosure is in part directed to determining audio controllers for headphones such as on ear, over ear or in-ear headphones. The audio controllers can be one or both of headphone equalization (EQ) and headphone aware mode controllers. The controllers are calculated during use of the headphones, using existing headphone components and processing. The calculations are based at least in part on an audio transfer function that is measured between the acoustic transducer of the headphones and a microphone that senses the transducer output (e.g., a feedback microphone in ANR headphones, where the feedback microphone is typically located between the transducer and the user's eardrum). The calculations are further based on this same transfer function, but determined with a feedback controller turned on; this determination can be calculated based on the measured transfer function. A result of this real-time controller calculation is that the EQ and aware mode controllers are customized for the particular user, during use of the headphones, based on a single measured audio transfer function. This provides a more consistent listening experience across large populations of users of the subject headphones.
  • In an example a first audio transfer function is determined by operating the acoustic transducer of the headphones and sensing the sound with the feedback microphone. The first audio transfer function is calculated based on the audio signal provided to the transducer and based on the sensed sound. The second audio transfer function is determined by measuring an audio transfer function between the acoustic transducer and the feedback microphone, but this time with the headphone feedback controller applied. In an example the second transfer function is calculated based on the audio signal, the sensed sound, and the feedback controller. In an example the second audio transfer function is determined by calculating it based on both the first audio transfer function and the feedback controller.
  • In a more specific example the calculation of the audio controller is based on data obtained during design of the controller calculation scheme. Such data can be measured in a lab or another controlled environment, across multiple different people and multiple fittings of headphone use on both ears of each person. Measuring on different people provides data relative to many different ear geometries. In an example this data includes a measured power spectrum for a microphone located in the ear canal (which approximates the ear drum), and a separate measured power spectrum for a microphone located on the person's head in a location where it does not interfere with the headphones. The dataset can be developed by placing microphones in the ears of human subjects, and placing a microphone on the subjects' heads. Measurements are made with and without headphones.
  • In an example there are three measurements made in the lab, and there are two sources of sound for the measurements. In one measurement, the driver in the headphones is used to measure the transfer functions from it to the feedback and canal microphones, while the headset is worn. The other two measurements are made by playing sound from speakers in the measurement room, both with and without the headset worn. For EQ, only the driver measurement and the open room noise measurement are used. For aware mode, all three are used, since the response of the outside (feedforward) microphone when the headset is worn is needed.
  • In an example the lab data is represented by two constant values. These constant values can be derived as further described below.
  • This disclosure relates to a headphone audio device. Some non-limiting examples of this disclosure describe a type of headphone that is known as an earbud. Earbuds generally include an electro-acoustic transducer for producing sound, and are configured to deliver the sound directly into the user's ear canal. Earbuds can be wireless or wired. In non-limiting examples described herein the earbuds include one or more feedback microphones that sense sound produced by the transducer. Examples also include feedforward (external) microphones that sense external sounds outside of the housing. Feedback and feedforward microphones can be used for functions such as active noise reduction (ANR) where external sounds are canceled so they are not heard, and transparency mode operation where external sounds are reproduced for the user. Aspects of earbuds and other types of headphones that are not involved in this disclosure are not shown or described.
  • A headphone refers to a device that typically fits around, on, or in an ear and that radiates acoustic energy directly or indirectly into the ear canal. Headphones are sometimes referred to as earphones, earpieces, headsets, earbuds, or sport headphones, and can be wired or wireless. A headphone includes a driver (acoustic transducer) to transduce electrical audio signals to acoustic energy. The driver may or may not be housed in an earcup or in a housing that is configured to be located on the head or on the ear, or to be inserted directly into the user's ear canal. A headphone may be a single stand-alone unit or one of a pair of headphones (each including at least one acoustic driver), one for each ear. A headphone may be connected mechanically to another headphone, for example by a headband and/or by leads that conduct audio signals to an acoustic driver in the headphone. A headphone may include components for wirelessly receiving audio signals. A headphone may include components of an ANR system, which may include an internal microphone within the headphone housing and an external microphone that picks up sound outside the housing. Headphones may also include other functionality, such as additional microphones for an ANR system, or one or more microphones that are used to pick up the user's voice.
  • One or more of the systems and methods described herein, in various examples and combinations, may be used in a wide variety of headphones in various form factors. One such form factor is an earbud. Another is an on-ear or over-ear headphone.
  • It should be noted that although specific implementations of headphones primarily serving the purpose of acoustically outputting audio are presented with some degree of detail, such presentations of specific implementations are intended to facilitate understanding through the provision of examples and should not be taken as limiting either the scope of the disclosure or the scope of claim coverage.
  • In some examples the headphone includes an electro-acoustic transducer that is configured to develop sound for a user, a housing that holds the transducer, and a feedback microphone that is configured to detect sound in the housing before it reaches the eardrum. A processor system of the headphone is programmed to accomplish methods of determining an audio controller, such as an equalization (EQ) controller and an aware mode controller.
  • FIG. 1 is a perspective view of a wireless in-ear earbud 10. An earbud is a non-limiting example of a headphone device. Earbud 10 includes body or housing 12 that houses the active components of the earbud. Housing 12 encloses electro-acoustic transducer (audio driver) 14 that generates sound via movable diaphragm 16. Housing 12 comprises front housing portion 22 and rear housing portion 23. Diaphragm 16 is driven in order to create sound pressure in front housing cavity 18. Sound is also created in rear housing cavity 20. Sound pressure is directed from cavity 18 out of front housing portion 22 via sound outlet 24. Internal microphone 32 is located inside of housing 12. In an example microphone 32 is in housing portion 22, as shown in FIG. 1 . External microphone 34 is configured to sense sound external to housing 12. In an example exterior microphone 34 is located inside of the housing and is acoustically coupled to the external environment via housing openings 36 that let environmental sound reach microphone 34. In an example interior microphone 32 is used as a feedback microphone for active noise reduction (ANR), and exterior microphone 34 is used as a feed-forward microphone for ANR, and/or for transparency mode operation where environmental sound is sensed and then reproduced to the user so the user is more environmentally aware and can hear others speaking and the like. An earbud typically also includes a pliable tip (not shown) that is engaged with neck 25 of housing portion 22, to help direct the sound into the ear canal. Note that details of earbud 10 and its operation are well known in the technical field and so are not further described herein. Also, details of earbud 10 are exemplary of aspects of headphones and are not limiting of the scope of this disclosure, as the present audio controller can be used in varied types and designs of earbuds, earphones, and other types of headphones.
  • Earbud 10 also includes processor 30. In some examples processor 30 is configured to process outputs of microphones 32 and 34. In some examples the processor is used to accomplish other processing needed for earbud functionality, such as processing digital sound files that are to be reproduced by the earbud, as would be apparent to one skilled in the technical field. In an example the processor is configured to calculate and then apply the audio controllers disclosed herein. The use of EQ and aware-mode audio controllers is known in the technical field.
  • In some examples the processor is programmed to calculate an EQ controller and/or an aware mode controller based on an audio transfer function between transducer 14 and feedback microphone 32. The transfer function is determined both with and without the ANR feedback controller applied.
  • FIG. 2 is a block diagram of aspects of a headphone device 60. In an example device 60 is an earbud, but this is not a limitation of the disclosure, as the present disclosure also applies to other types of headphones such as those described herein. Device 60 includes processor 66 that receives audio data from external sources via wireless transceiver 68. Processor 66 also receives the outputs of the feedback microphone(s) 70 and the feedforward microphone(s) 72. Processor 66 outputs audio data that is converted into analog signals that are supplied to audio driver 64. In an example device 60 includes memory comprising instructions that, when executed by the processor, accomplish the calculation and application of the audio controllers, and other processing described herein. In some examples device 60 is configured to store a computer program product using a non-transitory computer-readable medium including computer program logic encoded thereon that, when performed on the headphone device (e.g., by the processor), causes the headphone device to determine the audio controllers as described herein. Note that the details of wearable audio device 60 are exemplary of aspects of headphones and are not limiting of the scope of this disclosure, as the present audio controller methodologies can be used in varied types and designs of earbuds and headphones. Also note that aspects of headphone 60 that are not involved in the present audio controller methodologies are not illustrated in FIG. 2, for the sake of simplicity.
  • Headphones are typically designed with control schemes that are aimed at providing a preset manufacturer-designed audio response both when music is played and during aware or transparency mode use. Equalization audio controllers are designed to help accomplish a desired target-curve equalization (EQ), so that, at least ideally, the reproduced sound has a desired spectral response. Transparency mode controllers are used to help accomplish a desired transparency sound reproduction; the controllers are typically designed to exactly reproduce the sensed external sounds. In some examples one or both of these audio controllers are stored in device memory and applied by a device controller. Audio controllers and their use in headphones are well known in the field of audio engineering.
  • However, when the headphones are actually used the user's anatomy (such as the ear anatomy) as well as the way that headphones are worn creates a high degree of variability in the sound that is actually delivered from person to person. Thus, few if any people will actually receive the target sound profiles that are intended by the designed and installed EQ and aware (transparency) mode audio controllers.
  • In the present audio control system and method, one or both of the EQ and transparency audio controllers are calculated and applied in real time, during use of the headphones. A result is that the user experience is closer to what is intended by the headphone manufacturer, even accounting for variability from user to user. In some examples the audio controller calculation is based on an audio transfer function that is measured while the headphone is in use by the user. This transfer function is between the headphone acoustic transducer or driver and one or more headphone microphone(s) that receive the driver output. In ANR headphones this microphone can be a feedback microphone that is located between the driver and the user's eardrum. In an example, in earbuds this feedback microphone is typically located in the nozzle through which sound is delivered directly into the ear canal. Audio transfer functions and their calculation are well known in the audio field and so are not described in depth. Also, the application of an audio controller by a processor of the headphones is generally ubiquitous in headphones and so is also not described in depth.
  • FIG. 3 is a schematic representation of a user's head 80 that is useful to understanding the headphone audio controller. Right headphone 86 is located on, over, or in right ear 82 with ear canal 83. Left headphone 88 is located on, over, or in left ear 84 with ear canal 85. Also, microphone 90 is depicted located on the user's head 80 in a location that does not interfere with a headphone. The arrangement of the headphones and the microphone(s) is useful relative to aspects of the audio controller, as is further explained below.
  • In an example illustrated in FIG. 4, method 100 of determining an audio controller for a headphone device is accomplished using an existing headphone control and sound delivery system, such as processor 66, feedback (or other) microphone 70, and driver 64 of FIG. 2. In step 102 an audio transfer function between acoustic transducer 64 and microphone 70 is measured. As is known in the technical field, an audio transfer function measurement in some examples is based on a known audio signal being used to drive the transducer, reception of the resulting sound by the microphone, and calculation of the transfer function between the driver and the microphone. In method 100, at step 104 a second audio transfer function is determined, this time with a feedback controller for the existing headphone ANR system applied. At least one of the two transfer functions must be measured; the other can be calculated with knowledge of the feedback controller. Feedback controllers are used in headphones with ANR, and are well known in the technical field and so are not further described herein. At step 106 the relevant audio controller(s) (one or both of an EQ controller and a transparency mode controller) are calculated based on both the first and second transfer functions.
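As a hedged illustration of step 104, the sketch below derives the transfer function with the feedback controller applied from the measured open-loop transfer function. It assumes a conventional negative-feedback loop; the actual loop topology and sign convention of a given ANR system are not specified here and may differ.

```python
# Hedged sketch: closed-loop transfer function from the measured open-loop one,
# assuming a standard negative-feedback loop G_tilde = G / (1 + K_fb * G).
import numpy as np

def closed_loop_tf(g_sd, k_fb):
    """g_sd and k_fb are complex arrays over the same frequency grid; the return
    value approximates the driver-to-feedback-mic response with feedback on."""
    return g_sd / (1.0 + k_fb * g_sd)
```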
  • For an exemplary aware mode controller Kaw, assuming a negligible direct sound path (due to passive insertion gain and effective ANR), and requiring that the power spectrum Scc at a microphone located in the ear canal equal the power spectrum Scc,open (where “open” means that a headphone is not worn during the measurement), the aware mode controller (termed a “semi-custom” or “sc” controller) may be represented by the following equation (1).
  • $|K_{AW,sc}|^2 = \dfrac{S_{cc,open}/S_{rr,open}}{S_{oo}/S_{rr}}\,\left|\dfrac{G_{sd}}{G_{cd}}\right|^2\left|\dfrac{1}{\tilde{G}_{sd}}\right|^2$   (1)
  • where G denotes a transfer function, and when G is used with a tilde it denotes a transfer function with the feedback controller applied. The subscripts used herein are defined as follows: d: driver/speaker signal; s: “system”/feedback microphone; c: canal microphone (microphone placed in the ear canal, which is a stand-in for the ear drum); o: “outside”/feedforward microphone; r: reference microphone on the head in a location where the presence or not of the headphone/headset does not affect it acoustically. The angled brackets denote an average of the enclosed quantity.
  • Transfer function subscripts read from the second subscript to the first; accordingly, Gsd is the transfer function from the driver to the feedback microphone. The power spectra generally refer to measurements made while the headset is worn, unless the subscript contains “,open”, which denotes a measurement made with the headset not worn; Srr is thus a measurement of the reference microphone when the headset is worn. As for the controllers, Kaw is the aware mode controller and Keq is the EQ controller. In some cases the absolute values of the controllers are defined. This is because their phase generally does not matter, as long as unnecessary phase is not added; in other words, the controllers should be minimum phase. For music, this is well known. For the aware mode controller, however, this depends on the total noise reduction (passive + feedback + feedforward) being high enough that the direct noise arriving at the ear is so low that, when the aware mode controller is turned on, what is heard at the ear is completely dominated by the signal coming from Kaw. This is generally the case in many ANR headsets.
  • An exemplary semi-custom EQ controller may be represented by equation (2) below:
  • $|K_{EQ,sc}|^2 = \dfrac{S_{cc,open}}{S_{rr,open}}\,\left|\dfrac{G_{sd}}{G_{cd}}\right|^2\left|\dfrac{1}{\tilde{G}_{sd}}\right|^2$   (2)
  • Thus, by measuring transfer functions between the transducer and one or more microphones of the headphones (both with and without the ANR feedback controller applied) the aware mode and EQ controllers can be calculated and applied on the fly, while the headphone is in use.
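A minimal sketch, assuming all quantities are already available as arrays on a common frequency grid, of evaluating the semi-custom controller magnitudes of equations (1) and (2) bin by bin. Variable names are illustrative; which inputs come from lab data and which from in-field measurement follows the description above.

```python
import numpy as np

def semi_custom_controllers(scc_open, srr_open, soo, srr, g_sd, g_cd, g_sd_tilde):
    """Evaluate |K_AW,sc|^2 (eq. 1) and |K_EQ,sc|^2 (eq. 2) per frequency bin.
    scc_open/srr_open and g_cd would come from lab data; the remaining inputs
    come from in-field measurement. All arguments share one frequency grid."""
    lab_ratio = scc_open / srr_open
    driver_ratio = np.abs(g_sd / g_cd) ** 2
    inv_closed = np.abs(1.0 / g_sd_tilde) ** 2
    k_aw_sq = (lab_ratio / (soo / srr)) * driver_ratio * inv_closed   # eq. (1)
    k_eq_sq = lab_ratio * driver_ratio * inv_closed                   # eq. (2)
    return k_aw_sq, k_eq_sq
```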
  • The aware mode and EQ controllers may operate more uniformly across multiple users if they are revised to take into account data measured from multiple subjects with multiple fits of the headphones, such as described above relative to laboratory data. Since there is no way to obtain open-ear information from a user in the field, appropriate lab data can be used as a substitute for open-ear information. In some examples, Gsd is used to estimate the relationship Sss,open/Srr,open, where Sss,open is the power spectrum expected at the feedback microphone if it were still left in the ear canal in the same location when the earbud has been removed (since the feedback microphone is removed with the headset, Sss,open is an estimate of what it would have been in such an imagined situation). In an example Sss,open is estimated from Gsd based on an average transfer matrix of the section of the ear that is blocked by an earbud, where the matrix is estimated from the lab data. In an example, constants α and β are used to represent frequency-dependent complex quantities derived from the lab data. These constants are determined based on Gsd and a transfer function (GP1P2) between an outside reference microphone (on the user's head) and a microphone that is in the ear canal at the location of the feedback microphone of an inserted earbud. In an example, Gsd, Gcd, and Scc,open are measured, and GP1P2 is estimated based on these three measurements.
  • In an example the constant values can be derived as follows.
  • If the reference microphone is termed #1 and the feedback microphone is termed #2, and the transfer function from 1 to 2 for external noise is termed GP1P2, it can be stated that (a):
  • $|G_{P_1P_2}|^2 = \dfrac{S_{ss,open}}{S_{rr,open}}$   (a)
  • From simple modelling, the following (b) can be asserted:
  • $G_{P_1P_2} = \dfrac{G_{sd}}{G_{sd}\,\alpha + \beta}$   (b)
  • Also, from the laboratory data the following (c) can be estimated:
  • $\dfrac{S_{ss,open}}{S_{rr,open}} = \dfrac{S_{cc,open}}{S_{rr,open}}\left|\dfrac{G_{sd}}{G_{cd}}\right|^2$   (c)
  • Then, to solve for the best α and β, (a) and (c) are combined to derive (d):
  • $|G_{P_1P_2}| = \sqrt{\dfrac{S_{cc,open}}{S_{rr,open}}}\,\left|\dfrac{G_{sd}}{G_{cd}}\right|$   (d)
  • To remove the absolute value around GP1P2, some phase is added to the expression on the right of (d). Possible examples are giving it zero phase at all frequencies or calculating a minimum phase that matches the magnitude. The result is a GP1P2 for every fit in the lab data. Even though every fit ideally should have different α's and β's, one α and one β are chosen for use in the headphones, which can be accomplished by developing a best fit to the data on average.
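One common way to calculate a minimum phase that matches a given magnitude is the homomorphic (real-cepstrum folding) construction sketched below; this is offered as an illustrative assumption, not as the specific method used in this disclosure.

```python
import numpy as np

def minimum_phase_from_magnitude(mag, n_fft=None):
    """Given a one-sided magnitude response (length n_fft//2 + 1), return the
    complex minimum-phase frequency response with the same magnitude, via the
    standard homomorphic (real-cepstrum folding) construction."""
    if n_fft is None:
        n_fft = 2 * (len(mag) - 1)
    # Build the full symmetric log-magnitude spectrum (guard against log of zero).
    full_mag = np.concatenate([mag, mag[-2:0:-1]])
    log_mag = np.log(np.maximum(full_mag, 1e-12))
    cep = np.fft.ifft(log_mag).real           # real cepstrum of the magnitude
    # Fold the cepstrum onto the causal part to obtain the minimum-phase cepstrum.
    fold = np.zeros_like(cep)
    fold[0] = cep[0]
    fold[1:n_fft // 2] = 2.0 * cep[1:n_fft // 2]
    fold[n_fft // 2] = cep[n_fft // 2]
    h_min_full = np.exp(np.fft.fft(fold))     # minimum-phase spectrum, same magnitude
    return h_min_full[: n_fft // 2 + 1]       # one-sided response
```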
  • Now, (b) is rearranged to give (e):
  • $G_{P_1P_2} G_{sd}\,\alpha + G_{P_1P_2}\,\beta = G_{sd}$   (e)
  • Using this equation (e) for every fit, a matrix equation of the type $Ax = b$ can be set up. The best least-mean-squares fit is then obtained using the pseudo-inverse of $A$: $x = A^{+}b$.
  • Or in terms of the equation above, using all the fits in the lab data:
  • $\begin{bmatrix} G_{P_1P_2}^{\mathrm{fit}\,1} G_{sd}^{\mathrm{fit}\,1} & G_{P_1P_2}^{\mathrm{fit}\,1} \\ G_{P_1P_2}^{\mathrm{fit}\,2} G_{sd}^{\mathrm{fit}\,2} & G_{P_1P_2}^{\mathrm{fit}\,2} \\ \vdots & \vdots \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} G_{sd}^{\mathrm{fit}\,1} \\ G_{sd}^{\mathrm{fit}\,2} \\ \vdots \end{bmatrix} \;\Longrightarrow\; \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} G_{P_1P_2}^{\mathrm{fit}\,1} G_{sd}^{\mathrm{fit}\,1} & G_{P_1P_2}^{\mathrm{fit}\,1} \\ G_{P_1P_2}^{\mathrm{fit}\,2} G_{sd}^{\mathrm{fit}\,2} & G_{P_1P_2}^{\mathrm{fit}\,2} \\ \vdots & \vdots \end{bmatrix}^{+} \begin{bmatrix} G_{sd}^{\mathrm{fit}\,1} \\ G_{sd}^{\mathrm{fit}\,2} \\ \vdots \end{bmatrix}$
  • This gives an optimized solution that works best across the population on average.
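A minimal sketch of the least-squares fit described above: at each frequency bin, the rows of A are built from GP1P2 and Gsd for every lab fit, and α and β are obtained via the pseudo-inverse (here through numpy's least-squares solver). Array shapes and names are assumptions.

```python
import numpy as np

def fit_alpha_beta(g_p1p2, g_sd):
    """g_p1p2 and g_sd are complex arrays of shape (n_fits, n_freqs) from lab data.
    Returns alpha and beta as complex arrays of length n_freqs, solving
    G_P1P2 * G_sd * alpha + G_P1P2 * beta = G_sd in the least-squares sense,
    one frequency bin at a time."""
    n_fits, n_freqs = g_sd.shape
    alpha = np.zeros(n_freqs, dtype=complex)
    beta = np.zeros(n_freqs, dtype=complex)
    for k in range(n_freqs):
        a = np.column_stack([g_p1p2[:, k] * g_sd[:, k], g_p1p2[:, k]])  # one row per fit
        b = g_sd[:, k]
        x, *_ = np.linalg.lstsq(a, b, rcond=None)   # pseudo-inverse (least-squares) solution
        alpha[k], beta[k] = x
    return alpha, beta
```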
  • Taking the laboratory data into account in this way leads to revised or “enhanced” aware mode and EQ controllers, set forth in equations (3) and (4), respectively. Note that the desired controller shape is calculated frequency by frequency, based on the controller design.
  • $|K_{AW,enh}|^2 = \dfrac{S_{rr}}{S_{oo}}\left|\dfrac{G_{sd}}{G_{sd}\,\alpha + \beta}\right|^2\left|\dfrac{1}{\tilde{G}_{sd}}\right|^2$   (3)
  • $|K_{EQ,enh}|^2 = \left|\dfrac{G_{sd}}{G_{sd}\,\alpha + \beta}\right|^2\left|\dfrac{1}{\tilde{G}_{sd}}\right|^2$   (4)
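For completeness, a short sketch, under the same assumptions as the earlier sketches, of evaluating the enhanced controller magnitudes of equations (3) and (4) once α and β are available.

```python
import numpy as np

def enhanced_controllers(srr, soo, g_sd, g_sd_tilde, alpha, beta):
    """Evaluate |K_AW,enh|^2 (eq. 3) and |K_EQ,enh|^2 (eq. 4) per frequency bin."""
    core = np.abs(g_sd / (g_sd * alpha + beta)) ** 2 * np.abs(1.0 / g_sd_tilde) ** 2
    k_aw_sq = (srr / soo) * core   # eq. (3)
    k_eq_sq = core                 # eq. (4)
    return k_aw_sq, k_eq_sq
```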
  • Theoretical “optimal” or “opt” controllers require a microphone in the ear canal and so cannot be implemented in the field on a headphone user. The optimal controllers are nevertheless useful for an understanding of the semi-custom and enhanced controllers disclosed herein. Optimal aware mode and EQ controller equations are set forth in equations (5) and (6), respectively.
  • $|K_{AW,opt}|^2 = \dfrac{S_{cc,open}/S_{rr,open}}{S_{oo}/S_{rr}}\left|\dfrac{1}{\tilde{G}_{cd}}\right|^2$   (5)
  • This is further multiplied by an overall target shape (e.g., one that filters out low frequencies below the voice band, and high frequencies to avoid instabilities when moving hands near the headset).
  • $|K_{EQ,opt}|^2 = \dfrac{S_{cc,open}}{S_{rr,open}}\left|\dfrac{1}{\tilde{G}_{cd}}\right|^2$   (6)
  • This is further multiplied by an overall target, which may be similar to the target curve for speakers in a room but with some tweaks given that a headset is being used.
  • FIG. 5 illustrates the standard deviation of total insertion gain (in dB) for an earbud with and without (the latter termed “ensemble mode”) the exemplary semi-custom and enhanced aware mode controllers set forth in equations (1) and (3). This evidences that from about 700 Hz to about 7 kHz the performance is improved using either of these aware mode controllers.
  • FIG. 6 illustrates the standard deviation of total insertion gain (in dB) for an earbud with and without (the latter termed “ensemble mode”) the exemplary semi-custom and enhanced EQ mode controllers set forth in equations (2) and (4). This evidences that from about 300 Hz to about 5 kHz the performance is improved using either of these EQ mode controllers.
  • The subject audio controller determination and application is able to improve both EQ and aware mode uses of headphones. The calculation of the controller(s) is based on real-time measurement of the audio transfer function between an audio transducer of the headphone and one or more microphones of the headphone that are configured to receive the transducer output. Accordingly, the controller(s) are at least in part customized for the particular user of the headphones, and the current use of the headphones. A result is that the aware mode and/or EQ performance of the headphones is demonstrably closer to the desired designed target performance. The headphones thus provide performance that is closer to standard across different users as compared to headphones with pre-set aware mode and EQ controllers.
  • Note that as long as there is a suitable microphone in the headset, the present approaches to determining audio controllers are not limited to the existence of a feedback loop for the EQ mode. For aware mode, removing enough of the direct outside noise may not be feasible without a feedback loop present, but a feedforward loop alone, combined with passive sound attenuation, could potentially suffice.
  • Also note that this disclosure could use multiple microphones, including microphones on the outside of the headset (for aware mode), which could involve simply summing them or using them in an array fashion to provide directional aware mode/hearing, as well as multiple feedback microphones.
  • Further, the subject EQ and aware mode controllers can be injected in two places: at the driver (disturbance injection) or before the feedback controller (command injection). Both can be used, with complementary filters in place.
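The disclosure does not detail how the complementary filters are chosen. Purely as an illustrative assumption, the sketch below splits a controller between the two injection points with a first-order complementary low-pass/high-pass pair, so that the two paths sum back to the original controller; the crossover frequency and band assignment are arbitrary.

```python
import numpy as np

def complementary_split(k_ctrl, freqs, crossover_hz=1000.0):
    """Illustrative split of a controller K between command injection (low band)
    and disturbance injection (high band) using first-order complementary
    filters H_lp + H_hp = 1, so the two paths sum back to K."""
    s = 1j * freqs / crossover_hz            # normalized j*omega / omega_c
    h_lp = 1.0 / (1.0 + s)
    h_hp = s / (1.0 + s)                     # complementary: h_lp + h_hp == 1
    return k_ctrl * h_lp, k_ctrl * h_hp      # (command-path part, disturbance-path part)
```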
  • Elements of figures are shown and described as discrete elements in a block diagram. These may be implemented as one or more of analog circuitry or digital circuitry. Alternatively, or additionally, they may be implemented with one or more microprocessors executing software instructions. The software instructions can include digital signal processing instructions. Operations may be performed by analog circuitry or by a microprocessor executing software that performs the equivalent of the analog operation. Signal lines may be implemented as discrete analog or digital signal lines, as a discrete digital signal line with appropriate signal processing that is able to process separate signals, and/or as elements of a wireless communication system.
  • When processes are represented or implied in the block diagram, the steps may be performed by one element or a plurality of elements. The steps may be performed together or at different times. The elements that perform the activities may be physically the same or proximate one another, or may be physically separate. One element may perform the actions of more than one block. Audio signals may be encoded or not, and may be transmitted in either digital or analog form. Conventional audio signal processing equipment and operations are in some cases omitted from the drawing.
  • Examples of the systems and methods described herein comprise computer components and computer-implemented steps that will be apparent to those skilled in the art. For example, it should be understood by one of skill in the art that the computer-implemented steps may be stored as computer-executable instructions on a computer-readable medium such as, for example, hard disks, optical disks, flash ROMs, nonvolatile ROM, and RAM. Furthermore, it should be understood by one of skill in the art that the computer-executable instructions may be executed on a variety of processors such as, for example, microprocessors, digital signal processors, gate arrays, etc. For ease of exposition, not every step or element of the systems and methods described above is described herein as part of a computer system, but those skilled in the art will recognize that each step or element may have a corresponding computer system or software component. Such computer system and/or software components are therefore enabled by describing their corresponding steps or elements (that is, their functionality), and are within the scope of the disclosure.
  • Functions, methods, and/or components of the methods and systems disclosed herein according to various aspects and examples may be implemented or carried out in a digital signal processor (DSP) and/or other circuitry, analog or digital, suitable for performing signal processing and other functions in accord with the aspects and examples disclosed herein. Additionally or alternatively, a microprocessor, a logic controller, logic circuits, field programmable gate array(s) (FPGA), application-specific integrated circuit(s) (ASIC), general computing processor(s), micro-controller(s), and the like, or any combination of these, may be suitable, and may include analog or digital circuit components and/or other components with respect to any particular implementation.
  • Functions and components disclosed herein may operate in the digital domain, the analog domain, or a combination of the two, and certain examples include analog-to-digital converter(s) (ADC) and/or digital-to-analog converter(s) (DAC) where appropriate, despite the lack of illustration of ADCs or DACs in the various figures. Further, functions and components disclosed herein may operate in a time domain, a frequency domain, or a combination of the two, and certain examples include various forms of Fourier or similar analysis, synthesis, and/or transforms to accommodate processing in the various domains.
  • Any suitable hardware and/or software, including firmware and the like, may be configured to carry out or implement components of the aspects and examples disclosed herein, and various implementations of aspects and examples may include components and/or functionality in addition to those disclosed. Various implementations may include stored instructions for a digital signal processor and/or other circuitry to enable the circuitry, at least in part, to perform the functions described herein.
  • Having described above several aspects of at least one example, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure and are intended to be within the scope of the invention. Accordingly, the foregoing description and drawings are by way of example only, and the scope of the invention should be determined from proper construction of the appended claims, and their equivalents.

Claims (20)

What is claimed is:
1. A headphone comprising:
an acoustic transducer that is configured to develop sound for delivery to an ear of a user;
a feedback microphone that is configured to sense sound from the acoustic transducer and develop an output; and
a processor, responsive to the feedback microphone output, and that is configured to:
measure a first audio transfer function between the acoustic transducer and the feedback microphone;
determine a second audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied; and
calculate an audio controller based on both the first audio transfer function and the second audio transfer function.
2. The headphone of claim 1 wherein measuring the first audio transfer function comprises providing an audio signal that is configured to operate the acoustic transducer to generate sound, sensing the sound with the feedback microphone, and calculating the first audio transfer function based on the audio signal and the sensed sound.
3. The headphone of claim 1 wherein determining the second audio transfer function comprises measuring an audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied.
4. The headphone of claim 3 wherein measuring an audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied comprises providing an audio signal that is configured to operate the acoustic transducer to generate sound, sensing the sound with the feedback microphone, and calculating the second audio transfer function based on the audio signal, the sensed sound, and the feedback controller.
5. The headphone of claim 1 wherein determining the second audio transfer function comprises calculating the second audio transfer function based on both the first audio transfer function and the feedback controller.
6. The headphone of claim 5 wherein the audio controller comprises an equalization (EQ) controller.
7. The headphone of claim 5 wherein the audio controller comprises a controller for a headphone use mode wherein sound external to the headphone is reproduced by the acoustic transducer.
8. The headphone of claim 1 wherein the processor is further configured to provide a measured power spectrum for a microphone located in an ear canal of a person, and provide a measured power spectrum for a microphone located on the person's head.
9. The headphone of claim 8 wherein the calculation of the audio controller is further based on both the measured power spectrum for a microphone located in an ear canal of a person and the measured power spectrum for a microphone located on the person's head.
10. The headphone of claim 9 wherein the calculation of the audio controller is further based on a third audio transfer function between an acoustic transducer, and a microphone located in an ear canal of a person.
11. The headphone of claim 1 wherein the processor is further configured to provide a third audio transfer function between a first location of a feedback microphone in an ear canal of a person and a second location on the person's head.
12. The headphone of claim 11 wherein the processor is further configured to provide a fourth audio transfer function between the acoustic transducer and the first location of a feedback microphone in an ear canal of a person.
13. The headphone of claim 12 wherein the processor is further configured to provide first and second constant values.
14. The headphone of claim 13 wherein the first and second constant values are calculated based on both the third and fourth audio transfer functions.
15. The headphone of claim 14 wherein the first and second constant values are calculated based on both the third and fourth audio transfer functions at multiple different fits of the headphone on multiple different people.
16. The headphone of claim 13 wherein the first and second constant values represent frequency-dependent complex quantities.
17. An audio device, comprising:
an acoustic transducer that is configured to develop sound for delivery to an ear of a user;
a feedback microphone that is configured to sense sound from the acoustic transducer and develop an output; and
a processor, responsive to the feedback microphone output, and that is configured to:
measure a first audio transfer function between the acoustic transducer and the feedback microphone;
determine a second audio transfer function between the acoustic transducer and the feedback microphone with a feedback controller applied; and
calculate an audio controller based on both the first audio transfer function and the second audio transfer function.
18. The audio device of claim 17 wherein the audio controller comprises at least one of an equalization (EQ) controller and a controller for a use mode wherein sound external to the audio device is reproduced by the acoustic transducer.
19. The audio device of claim 18 wherein the first audio transfer function is measured by providing an audio signal that is configured to operate the acoustic transducer to generate sound, sensing the sound with the feedback microphone, and calculating the first audio transfer function based on the audio signal and the sensed sound, and further wherein the second audio transfer function is calculated based on both the first audio transfer function and the feedback controller.
20. The audio device of claim 19 wherein the processor is further configured to provide a measured power spectrum for a microphone located in an ear canal of a person, and provide a measured power spectrum for a microphone located on the person's head, wherein the audio controller calculation is further based on the measured power spectrum for a microphone located in an ear canal of a person, the measured power spectrum for a microphone located on the person's head, and a third audio transfer function between an acoustic transducer and a microphone located in an ear canal of a person.
US17/889,561 2021-12-27 2022-08-17 Headphone audio controller Active US11689844B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/889,561 US11689844B1 (en) 2021-12-27 2022-08-17 Headphone audio controller
US18/195,645 US12028675B2 (en) 2021-12-27 2023-05-10 Headphone audio controller
US18/675,480 US20240314489A1 (en) 2021-12-27 2024-05-28 Headphone Audio Controller

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/562,142 US11457304B1 (en) 2021-12-27 2021-12-27 Headphone audio controller
US17/889,561 US11689844B1 (en) 2021-12-27 2022-08-17 Headphone audio controller

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/562,142 Continuation US11457304B1 (en) 2021-12-27 2021-12-27 Headphone audio controller

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/195,645 Continuation US12028675B2 (en) 2021-12-27 2023-05-10 Headphone audio controller

Publications (2)

Publication Number Publication Date
US11689844B1 US11689844B1 (en) 2023-06-27
US20230209244A1 true US20230209244A1 (en) 2023-06-29

Family

ID=83365771

Family Applications (4)

Application Number Title Priority Date Filing Date
US17/562,142 Active US11457304B1 (en) 2021-12-27 2021-12-27 Headphone audio controller
US17/889,561 Active US11689844B1 (en) 2021-12-27 2022-08-17 Headphone audio controller
US18/195,645 Active US12028675B2 (en) 2021-12-27 2023-05-10 Headphone audio controller
US18/675,480 Pending US20240314489A1 (en) 2021-12-27 2024-05-28 Headphone Audio Controller

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/562,142 Active US11457304B1 (en) 2021-12-27 2021-12-27 Headphone audio controller

Family Applications After (2)

Application Number Title Priority Date Filing Date
US18/195,645 Active US12028675B2 (en) 2021-12-27 2023-05-10 Headphone audio controller
US18/675,480 Pending US20240314489A1 (en) 2021-12-27 2024-05-28 Headphone Audio Controller

Country Status (3)

Country Link
US (4) US11457304B1 (en)
CN (1) CN118451726A (en)
WO (1) WO2023129228A1 (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080095389A1 (en) * 2006-10-23 2008-04-24 Starkey Laboratories, Inc. Entrainment avoidance with pole stabilization
US20080298615A1 (en) * 2006-03-03 2008-12-04 Widex A/S Hearing aid and method of utilizing gain limitation in a hearing aid
US20100202641A1 (en) * 2009-02-06 2010-08-12 Oticon A/S Hearing device with adaptive feedback suppression
US20110150257A1 (en) * 2009-04-02 2011-06-23 Oticon A/S Adaptive feedback cancellation based on inserted and/or intrinsic characteristics and matched retrieval
US20110211715A1 (en) * 2010-02-26 2011-09-01 Siemens Medical Instruments Pte. Ltd. Hearing device with feedback-reduction filters operated in parallel, and method
US20120288107A1 (en) * 2011-05-09 2012-11-15 Bernafon Ag Test system for evaluating feedback performance of a listening device
US20160300563A1 (en) * 2015-04-13 2016-10-13 Qualcomm Incorporated Active noise cancellation featuring secondary path estimation
US10462551B1 (en) * 2018-12-06 2019-10-29 Bose Corporation Wearable audio device with head on/off state detection
US20220210581A1 (en) * 2020-12-28 2022-06-30 Oticon A/S Hearing aid comprising a feedback control system
US20220383848A1 (en) * 2019-11-04 2022-12-01 Soundchip Sa Active noise cancelling system
US20220392427A1 (en) * 2019-09-30 2022-12-08 Ams Ag Audio system and signal processing method for an ear mountable playback device
US20220415300A1 (en) * 2019-11-28 2022-12-29 Ams Ag Noise cancellation system and signal processing method for an ear-mountable playback device
US20230011909A1 (en) * 2020-08-29 2023-01-12 Shenzhen Shokz Co., Ltd. Systems and methods for detecting state of bone conduction hearing device
US20230028004A1 (en) * 2020-08-29 2023-01-26 Shenzhen Shokz Co., Ltd. Systems and methods for obtaining vibration transfer functions

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9837066B2 (en) 2013-07-28 2017-12-05 Light Speed Aviation, Inc. System and method for adaptive active noise reduction
GB2600355B (en) * 2015-05-22 2022-07-27 Cirrus Logic Int Semiconductor Ltd Adaptive receiver
US9747887B2 (en) * 2016-01-12 2017-08-29 Bose Corporation Systems and methods of active noise reduction in headphones
US9928823B2 (en) * 2016-08-12 2018-03-27 Bose Corporation Adaptive transducer calibration for fixed feedforward noise attenuation systems
US10229698B1 (en) * 2017-06-21 2019-03-12 Amazon Technologies, Inc. Playback reference signal-assisted multi-microphone interference canceler
US10096313B1 (en) * 2017-09-20 2018-10-09 Bose Corporation Parallel active noise reduction (ANR) and hear-through signal flow paths in acoustic devices
EP3644307A1 (en) * 2018-10-23 2020-04-29 AMS Sensors UK Limited Tuning method, manufacturing method, computer-readable storage medium and tuning system
US11062687B2 (en) * 2019-01-04 2021-07-13 Bose Corporation Compensation for microphone roll-off variation in acoustic devices
EP3687188B1 (en) * 2019-01-25 2022-04-27 ams AG A noise cancellation enabled audio system and method for adjusting a target transfer function of a noise cancellation enabled audio system
US10595151B1 (en) * 2019-03-18 2020-03-17 Cirrus Logic, Inc. Compensation of own voice occlusion
EP3712884B1 (en) * 2019-03-22 2024-03-06 ams AG Audio system and signal processing method for an ear mountable playback device
US10839786B1 (en) * 2019-06-17 2020-11-17 Bose Corporation Systems and methods for canceling road noise in a microphone signal
US11017792B2 (en) * 2019-06-17 2021-05-25 Bose Corporation Modular echo cancellation unit
US10748521B1 (en) * 2019-06-19 2020-08-18 Bose Corporation Real-time detection of conditions in acoustic devices
US11189297B1 (en) * 2020-01-10 2021-11-30 Amazon Technologies, Inc. Tunable residual echo suppressor
US10937410B1 (en) 2020-04-24 2021-03-02 Bose Corporation Managing characteristics of active noise reduction


Also Published As

Publication number Publication date
CN118451726A (en) 2024-08-06
US12028675B2 (en) 2024-07-02
US20230283943A1 (en) 2023-09-07
US11689844B1 (en) 2023-06-27
US20240314489A1 (en) 2024-09-19
US11457304B1 (en) 2022-09-27
WO2023129228A1 (en) 2023-07-06

Similar Documents

Publication Publication Date Title
EP3516883B1 (en) In-ear active noise reduction earphone
US11503409B1 (en) Spatial headphone transparency
CN110089129B (en) On/off-head detection of personal sound devices using earpiece microphones
CN111133505B (en) Parallel Active Noise Reduction (ANR) and traversing listening signal flow paths in acoustic devices
AU2021259164B2 (en) Managing characteristics of active noise reduction
US9837066B2 (en) System and method for adaptive active noise reduction
EP3799031B1 (en) Audio system and signal processing method for an ear mountable playback device
CN113450754A (en) Active noise cancellation system and method
US20240064454A1 (en) Active Noise Reduction Earbud
US11889257B2 (en) Wearable audio device zero-crossing based parasitic oscillation detection
US11689844B1 (en) Headphone audio controller
US11736861B2 (en) Auto-calibrating in-ear headphone
US11887577B2 (en) System and method for evaluating an acoustic characteristic of an electronic device
EP4445619A2 (en) Audio device having aware mode auto-leveler
WO2023107426A2 (en) Audio device having aware mode auto-leveler

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE