WO2009023784A1 - Method and device for linking matrix control of an earpiece II - Google Patents


Info

Publication number
WO2009023784A1
WO2009023784A1 (PCT/US2008/073189, US2008073189W)
Authority
WO
WIPO (PCT)
Prior art keywords
audio
sound signals
earpiece
sound
filter
Application number
PCT/US2008/073189
Other languages
French (fr)
Inventor
John Usher
Marc Boillot
Original Assignee
Personics Holdings Inc.
Application filed by Personics Holdings Inc. filed Critical Personics Holdings Inc.
Publication of WO2009023784A1 publication Critical patent/WO2009023784A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00: Details of transducers, loudspeakers or microphones
    • H04R1/10: Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1016: Earpieces of the intra-aural type
    • H04R2201/00: Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/10: Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups
    • H04R2201/107: Monophonic and stereophonic headphones with microphone for two-way hands free communication
    • H04R2420/00: Details of connection covered by H04R, not provided for in its groups
    • H04R2420/01: Input selection or mixing for amplifiers or loudspeakers
    • H04R2430/00: Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01: Aspects of volume control, not necessarily automatic, in sound systems
    • H04R2499/00: Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10: General applications
    • H04R2499/11: Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
    • H04R5/00: Stereophonic arrangements
    • H04R5/04: Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments

Definitions

  • the present invention relates to a method of controlling an earpiece device, and more particularly, though not exclusively, a method of configuring and managing audio input and output on an earpiece.
  • Headphones or earpieces can be used for music enjoyment or voice communication. Use of these devices has steadily been increasing, and more products are expanding functionality to support audio delivery to headphones and earpieces. The earpieces and associated products are becoming more intelligent as more communication features become available. However, the earpieces themselves have limited resources and must make efficient use of the many audio input and output configurations required to manage and support audio delivery.
  • an earpiece can include a peripheral interface configured to receive a plurality of sound signals and direct the plurality of sound signals to a plurality of audio processing modules that produce audio control information responsive to an analysis of the sound signals, a logic control unit operatively coupled to the plurality of audio processing modules to receive the audio control information from the plurality of audio processing modules and generate configuration data, a filter set operatively coupled to the peripheral interface to process the plurality of sound signals in accordance with the configuration data to produce filtered sound signals, and a mixing matrix operatively coupled to the control logic unit and filter set to mix the plurality of filtered sound signals in accordance with the audio control information to produce output sound signals and route the output sound signals to at least one peripheral component.
  • the peripheral component can be an Ear Canal Receiver (ECR), a phone, a portable communication device, or a data storage.
  • the peripheral interface can include at least one Ambient Sound Microphone (ASM) configured to convert an ambient sound to an ambient sound signal, and at least one Ear Canal Microphone (ECM) configured to convert an internal sound from an ear canal of a user to an internal sound signal.
  • the earpiece can further include an audio content interface configured to receive a plurality of audio streams and direct the plurality of audio streams to the plurality of audio processing modules.
  • the audio content interface can receive an audio stream from a phone, a media player, or a portable communication device.
  • the audio content interface can mix the plurality of audio streams based on a user context that is one among an incoming call, a music session, or a voice mail.
  • the audio control information can include filter data for processing the plurality of sound signals, audio control data for assigning a priority to the plurality of sound signals, and router data for mixing the plurality of filtered signals according to the priority.
  • the priority can be event driven responsive to detecting a sound signature, a background noise condition, a battery life indication, a manual interaction, or a voice recognition command.
  • an earpiece can include a peripheral interface configured to receive a plurality of sound signals and direct the plurality of sound signals to a plurality of audio processing modules, an audio content interface configured to receive a plurality of audio streams and also direct the plurality of audio streams to the plurality of audio processing modules, at least one signal analysis module operatively coupled to the peripheral interface and audio content interface to provide a shared analysis of the sound signals and the audio streams for the plurality of audio processing modules, a logic control unit operatively coupled to the plurality of audio processing modules and the at least one signal analysis module to receive the shared analysis and audio control information to generate configuration data, a filter set operatively coupled to the peripheral interface to process the plurality of sound signals in accordance with the configuration data to produce filtered sound signals, and a mixing matrix operatively coupled to the control logic unit and filter set to mix the plurality of filtered sound signals in accordance with the audio control information to produce output sound signals and route the output sound signals to at least one peripheral component.
  • the shared analysis can include spectral analysis, spectral band energy level analysis, spectral envelope analysis, voice activity detection analysis, and cross-correlation analysis.
  • a separate signal analysis module can be provided for components of the peripheral interface and the audio content interface that is shared among the plurality of audio processing modules.
  • the peripheral interface can include at least one Ambient Sound Microphone (ASM) coupled to an ASM signal analysis module configured to analyze an ambient sound signal, and at least one Ear Canal Microphone (ECM) coupled to an ECM signal analysis module configured to analyze an internal sound from an ear canal of a user.
  • the audio content interface can be coupled to an audio content (AC) signal analysis module and configured to analyze an audio stream from a phone, a media player, or a portable communication device.
  • a method for configuring audio delivery on an earpiece can include the steps of receiving at least one sound signal and at least one audio stream, performing an analysis of the at least one sound signal and the at least one audio stream, presenting the analysis, the at least one sound signal, and the at least one audio stream to a plurality of audio processing modules that generate configuration data responsive to the receiving.
  • the method can include filtering the at least one sound signal and at least one audio stream according to the configuration data to produce filtered sound signals, mixing the filtered signals according to the configuration data to produce output sound signals, and routing the output sound signals to at least one peripheral component.
  • the at least one sound signal can be an ambient sound signal or an ear canal sound signal.
  • the at least one audio stream can be received from a phone, a media player, or a portable communication device.
  • a linkage matrix of mixing gains can be generated based on the configuration data that are applied to the plurality of filtered sound signals for producing the plurality of output sound signals.
  • FIG. 1 is a pictorial diagram of an earpiece in accordance with an exemplary embodiment
  • FIG. 2 is a block diagram of the earpiece in accordance with an exemplary embodiment
  • FIG. 3 is an exemplary schematic of a software system for the earpiece in accordance with an exemplary embodiment
  • FIG. 4 is a more detailed exemplary schematic of the software system of FIG. 3 for the earpiece in accordance with an exemplary embodiment;
  • FIG. 5 is a flowchart of a method for generating filter configuration data from a plurality of processing modules in accordance with an exemplary embodiment;
  • FIG. 6 is a flowchart of a method for generating and applying filter configuration data from a plurality of processing modules in accordance with an exemplary embodiment
  • FIG. 7 is an exemplary schematic for configuring audio input and output via a mixing matrix in accordance with an exemplary embodiment
  • FIG. 8 is another exemplary schematic of a software system for the earpiece providing separate analysis and re-synthesis modules for peripheral inputs in accordance with an exemplary embodiment
  • FIG. 9 is a more detailed schematic for analysis and re-synthesis for a particular peripheral input in accordance with an exemplary embodiment.
  • At least one exemplary embodiment of the invention is directed to an earpiece for background noise mitigation.
  • as illustrated in FIG. 1, an earpiece device, generally indicated as earpiece 100, includes an electro-acoustical assembly 113 for an in-the-ear acoustic assembly, shown as it would typically be placed in the ear canal 131 of ear 117 of user 135.
  • the earpiece 100 can be an in-the-ear earpiece, a behind-the-ear earpiece, a receiver-in-the-ear device, an open-fit device, or any other suitable earpiece type.
  • the earpiece 100 can partially or fully occlude the ear canal, and is suitable for use with users having healthy or abnormal auditory functioning.
  • Earpiece 100 includes an Ambient Sound Microphone (ASM) 111 to capture ambient sound, an Ear Canal Receiver (ECR) 125 to deliver audio to an ear canal 131, and an Ear Canal Microphone (ECM) 123 to capture internal sounds within the ear canal and also assess a sound exposure level within the ear canal.
  • the earpiece 100 can partially or fully occlude the ear canal 131 to provide various degrees of acoustic isolation.
  • the assembly is designed to be inserted into the user's ear canal 131, and to form an acoustic seal with the ear canal walls 129 at a location 127 between the entrance to the ear canal and the tympanic membrane (or eardrum) 133.
  • a seal is typically achieved by means of a soft and compliant housing of assembly 113.
  • Such a seal is pertinent to the performance of the system in that it creates a closed cavity 131 of approximately 5cc between the in-ear assembly 113 and the tympanic membrane 133.
  • the ECR (speaker) 125 is able to generate a full range bass response when reproducing sounds for the user.
  • This seal also serves to significantly reduce the sound pressure level at the user's eardrum resulting from the sound field at the entrance to the ear canal. This seal is also the basis for the sound isolating performance of the electro-acoustic assembly.
  • Located adjacent to the ECR 125 is the ECM 123, which is acoustically coupled to the (closed) ear canal cavity 131.
  • one of its functions is to measure the sound pressure level in the ear canal cavity 131, both for testing the hearing acuity of the user and for confirming the integrity of the acoustic seal and the working condition of the ECM 123 itself and the ECR 125.
  • the ECM 123 can also be used for capturing voice that is resonant within the ear canal when the user is speaking to permit voice communication.
  • the ASM 111 is housed in an ear seal 113 and monitors sound pressure at the entrance to the occluded or partially occluded ear canal.
  • the ASM 111 can also be used to capture the user's voice externally for permitting voice communication. All transducers shown can receive or transmit audio signals to a processor 121 that undertakes audio signal processing and provides a transceiver for audio or voice via the wired or wireless communication path 119.
  • the earpiece 100 can actively monitor a sound pressure level both inside and outside an ear canal and enhance spatial and timbral sound quality while maintaining supervision to ensure safe sound reproduction levels.
  • the earpiece 100 in various embodiments can conduct listening tests, filter sounds in the environment, monitor warning sounds in the environment, present notification based on identified warning sounds, maintain constant audio content to ambient sound levels, and filter sound in accordance with a Personalized Hearing Level (PHL).
  • the earpiece 100 can include a processor 206 operatively coupled to the ASM 111, ECR 125, and ECM 123 via one or more Analog to Digital Converters (ADC) 202 and Digital to Analog Converters (DAC) 203.
  • the processor 206 can measure ambient sounds in the environment received at the ASM 111 and internal sounds captured at the ECM 123. Ambient sounds correspond to sounds within the environment such as traffic noise, street noise, conversation babble, or any other acoustic sound.
  • Ambient sounds measured by the ASM 111 can also correspond to industrial sounds present in an industrial setting, such as, factory noise, lifting vehicles, automobiles, and robots.
  • the processor 206 can monitor the ambient sound captured by the ASM 111 for sounds in the environment, such as an abrupt high energy sound corresponding to the on-set of a warning sound (e.g., bell, emergency vehicle, security system, etc.), siren (e.g., police car, ambulance, etc.), voice (e.g., "help", "stop", "police", etc.), or specific noise type (e.g., breaking glass, gunshot, etc.).
  • Internal sounds measured by the ECM 123 can correspond to sounds contained within the ear canal 131 such as spoken voice or audio content delivered by way of the ECR 125.
  • the internal sounds can include residual background noise related to ambient sounds in the environment; for example, high level sounds that leak around the ear seal 127 and enter the ear canal 131.
  • the processor 206 can monitor internal sounds captured by the ECM 123 and analyze the internal sounds.
  • the processor 206 can also adjust a mixing between the ambient sound signals measured at the ASM 111 and the internal sound signals measured at the ECM 123, for example, responsive to assessing ambient background noise conditions.
  • the processor 206 can utilize computing technologies such as a microprocessor, Application Specific Integrated Circuit (ASIC), and/or digital signal processor (DSP) with associated storage memory 208 such as Flash, ROM, RAM, SRAM, DRAM or other like technologies for controlling operations of the earpiece device 100.
  • the memory 208 can store program instructions for execution on the processor 206 as well as captured audio processing data.
  • the memory 208 can be off-chip and external to the processor 206, and can include a data buffer to temporarily capture the ambient sound and the internal sound, and a storage memory to save a recent portion of the history from the data buffer in a compressed format responsive to a directive by the processor.
  • the data buffer can be a circular buffer that temporarily stores audio from a current time point back to a previous time point. It should also be noted that the data buffer can in one configuration reside on the processor 206 to provide high speed data access.
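  • as an illustration only, a minimal sketch of such a circular data buffer follows; the class name, frame handling, and capacity are assumptions made for the example and are not taken from the patent:

        import numpy as np

        class CircularAudioBuffer:
            """Fixed-length ring buffer holding the most recent audio samples."""

            def __init__(self, capacity_samples: int):
                self.buf = np.zeros(capacity_samples, dtype=np.float32)
                self.write_pos = 0
                self.filled = False

            def push(self, frame: np.ndarray) -> None:
                """Overwrite the oldest samples with a new frame (frame <= capacity)."""
                n = len(frame)
                end = self.write_pos + n
                if end <= len(self.buf):
                    self.buf[self.write_pos:end] = frame
                else:  # wrap around the end of the buffer
                    k = len(self.buf) - self.write_pos
                    self.buf[self.write_pos:] = frame[:k]
                    self.buf[:end - len(self.buf)] = frame[k:]
                if end >= len(self.buf):
                    self.filled = True
                self.write_pos = end % len(self.buf)

            def history(self) -> np.ndarray:
                """Return stored audio ordered from previous time point to current."""
                if not self.filled:
                    return self.buf[:self.write_pos].copy()
                return np.concatenate((self.buf[self.write_pos:], self.buf[:self.write_pos]))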
  • the storage memory can be non-volatile memory, such as flash memory, to store captured or compressed audio data.
  • the memory 208 can be a machine-readable medium.
  • machine-readable medium should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and external memory) that store the one or more sets of instructions.
  • machine-readable medium shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
  • machine-readable medium shall accordingly be taken to include, but not be limited to: solid-state memories such as a memory card or other package that houses one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories; and/or magneto-optical or optical medium; and carrier wave signals such as a signal embodying computer instructions in a transmission medium. Accordingly, the disclosure is considered to include any one or more of a machine-readable medium or a distribution medium, as listed herein and including art-recognized equivalents and successor media, in which the software implementations herein are stored.
  • the earpiece 100 can include an audio interface 212 operatively coupled to the processor 206 to receive audio content, for example from a media player or cell phone, and deliver the audio content to the processor 206.
  • the processor 206 responsive to detecting ambient sounds can adjust the audio content and pass the ambient sounds directly to the ear canal. For instance, the processor 206 can lower a volume of the audio content played out the ECR 125 responsive to detecting an acute sound for transmitting the ambient sound to the ear canal.
  • the processor 206 can also actively monitor the sound exposure level inside the ear canal via the ECM 123 and adjust the audio content to within a safe and subjectively optimized listening level range.
  • the earpiece 100 can further include a transceiver 204 that can support singly or in combination any number of wireless access technologies including without limitation Bluetooth™, Wireless Fidelity (WiFi), Worldwide Interoperability for Microwave Access (WiMAX), and/or other short or long range communication protocols.
  • the transceiver 204 can also provide support for dynamic downloading over-the-air to the earpiece 100. It should be noted also that next generation access technologies can also be applied to the present disclosure.
  • the power supply 210 can utilize common power management technologies such as replaceable batteries, supply regulation technologies, and charging system technologies for supplying energy to the components of the earpiece 100 and to facilitate portable applications.
  • a motor (not shown) can be a single supply motor driver coupled to the power supply 210 to improve sensory input via haptic vibration.
  • the processor 206 can direct the motor to vibrate responsive to an action, such as a detection of a warning sound or an incoming voice call.
  • the earpiece 100 can further represent a single operational device or a family of devices configured in a master-slave arrangement, for example, a mobile device and an earpiece. In the latter embodiment, the components of the earpiece 100 can be reused in different form factors for the master and slave devices.
  • FIG. 3 is an exemplary schematic of a software system 300 for operating the earpiece 100.
  • the software system 300 can reside at least in part or whole on the processor 206, the memory 208, and/or any associated machine readable storage medium operated on by the processor 206 (see FIG. 2).
  • the software system 300 by way of the processor 206 can manage a configuration of audio input and output (paths) to the earpiece 100 to support audio delivery.
  • the software system 300 includes a peripheral interface 310 configured to manage a plurality of sound signals 315 and direct the plurality of sound signals 315 to a plurality of audio processing modules 330.
  • the sound signals can be an ambient sound signal measured by the ASM 111 or an internal sound signal measured by the ECM 123.
  • the software system 300 can include an audio content interface 320 configured to manage a plurality of audio streams 325 and direct the plurality of audio streams 325 to the plurality of audio processing modules 330.
  • the audio stream can be a voice signal from a Phone, a music signal from a personal media player (PMP), or an audio signal provided by the earpiece (e.g., loopback signal).
  • the plurality of sound signals 315 and the plurality of audio streams 325 are also passed to a filter set 350 as shown by the wide arrows.
  • the plurality of audio processing modules 330 can produce audio control information 331 responsive to an analysis of the sound signals.
  • the audio control information 331 is provided to the logic control unit 340 for configuring filtering and mixing operations on the plurality of sound signals 315 and the plurality of audio streams 325 at a filter set 350.
  • the logic control unit 340 is operatively coupled to the plurality of audio processing modules 330 to receive the audio control information 331 from the plurality of audio processing modules 330 and generate configuration data.
  • the configuration data can include filter data 342 for processing the plurality of sound signals, audio control data 343 for assigning a priority to the plurality of sound signals, and router data 344 for mixing a plurality of filtered signals 355 according to the priority.
  • the software system 300 includes the filter set 350 operatively coupled to the peripheral interface to process the plurality of sound signals 315 in accordance with the configuration data 342 for generating the filtered sound signals 355.
  • the filter set 350 also processes the plurality of audio streams 325 in accordance with the filter data 342 to produce filtered sound signals 355.
  • the filtered sound signals 355 are passed to a mixing matrix 360.
  • the mixing matrix 360 is operatively coupled to the control logic unit 340 and filter set 350 to mix the plurality of filtered sound signals 355 in accordance with the configuration data 344 and generate output sound signals 365 that are routed to at least one output peripheral component 370.
  • the peripheral component 370 can be the ECR 125, a phone, or a data storage.
  • the mixing matrix 360 can include individual gains that are applied to the plurality of filtered sound signals 355 for producing the output sound signals.
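  • for illustration, this routing reduces to a matrix-vector product per frame. The sketch below assumes the 3x3 layout detailed later in the schematic of FIG. 4 (rows: Phone, ECR, Storage outputs; columns: filtered ECM, ASM, AC inputs, with g1-g9 filled row by row); the gain values themselves are invented for the example:

        import numpy as np

        # Rows: output peripherals (Phone, ECR, Storage);
        # columns: filtered inputs (ECM, ASM, AC).  g1..g9 fill the matrix
        # row by row, so g1, g4, g7 form the first (ECM) column.
        G = np.array([[0.0, 1.0, 0.0],   # Phone out: transmit the microphone pickup
                      [0.2, 0.1, 1.0],   # ECR: mostly audio content plus pass-through
                      [0.5, 0.5, 0.0]])  # Storage: record both microphones

        def route(filtered: np.ndarray, gains: np.ndarray) -> np.ndarray:
            """Mix one frame of the three filtered signals into three outputs.

            filtered: shape (3, n_samples), stacked as [ECM, ASM, AC].
            Returns shape (3, n_samples), stacked as [Phone, ECR, Storage].
            """
            return gains @ filtered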
  • FIG. 4 is a more detailed schematic of the software system 300 of FIG. 3. System components of FIG. 3 will be referred to when describing components of the detailed schematic.
  • as shown in FIG. 4, the peripheral interface 310 can include the Ambient Sound Microphone (ASM) 111 configured to convert an ambient sound to an ambient sound signal, and the Ear Canal Microphone (ECM) 123 configured to convert an internal sound from an ear canal of a user to an internal sound signal.
  • the peripheral interface 310 receives the ASM sound signals from the external environment (including background noise and warning sounds) and the ECM sound signal (including the voice of the wearer of the earpiece 100 and any audio content playing out the ECR 125).
  • the peripheral interface 310 can contain more components than those shown (e.g., ASM-R, ECM-R), for instance, a secondary ASM for sound localization or noise suppression.
  • the peripheral interface 310 can also include components (ASM-L, ECM-L) from a second earpiece, such as a left (L) or right (R) earpiece.
  • the software system 300 can reside on one of the earpieces, although resources can be shared if the software system 300 is enabled on both earpieces.
  • the audio content interface 320 can receive audio streams, for example, from a phone 421, a personal media player (PMP) 422, a portable communication device (e.g., VOP) 423, or a local component 411 of the earpiece (e.g., a loopback of the ECR 125 signal).
  • the audio content interface (AC) 420 can mix the plurality of audio streams to produce a single audio stream delivered to the processing modules (AP1-AP5). In one arrangement, the AC 420 can perform audio content mixing based on a user context.
  • the user context identifies an operation or mode of the earpiece, such as an incoming call, a music session, or a voice mail. For example, if the user is listening to music and an incoming call is detected, the earpiece 100, by way of the software system 300 (see FIG. 3) further discussed herein, can lower the volume of the music relative to the incoming call ring tone so that the ring tone can be heard. As another example, if the user is using the earpiece in a transparent "safe" mode while listening to music, then any harmful ambient sounds (e.g., loud jackhammer, bus noises) can be attenuated along with the music, for instance, by reducing the ASM to ECM pass-through levels.
  • important warning sounds captured at the ASM 111 can be elevated in the mix with respect to other sounds by increasing the ratio of ASM 111 to ECM 123 sound levels.
  • the AC 420 can also mix the sounds in accordance with manual intervention, for example, if the user adjusts a volume level manually by way of a user interface or directly mixes the audio streams.
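  • a rough sketch of the context-driven mixing just described appears below; the context names and gain values are invented for illustration and would in practice come from the logic control unit or manual input:

        import numpy as np

        # Hypothetical per-context gains for each AC input stream.
        CONTEXT_MIX = {
            "music_session": {"phone": 0.0, "pmp": 1.0, "earcon": 0.6},
            "incoming_call": {"phone": 1.0, "pmp": 0.2, "earcon": 0.8},  # duck the music
            "voice_mail":    {"phone": 1.0, "pmp": 0.0, "earcon": 0.4},
        }

        def mix_audio_streams(streams: dict, context: str) -> np.ndarray:
            """Mix equal-length AC input streams into a single AC stream."""
            gains = CONTEXT_MIX[context]
            return sum(gains.get(name, 0.0) * sig for name, sig in streams.items())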
  • the sound signals from peripheral interface 310 and the mixed sound signals from the audio content interface 320 can be passed through a signal analysis module 410 operatively coupled to the peripheral interface 310 and the audio content interface 320 to provide a shared analysis of the sound signals and the audio streams.
  • the sound signals and the shared analysis can then be sent to the plurality of audio processing modules (AP1 - AP5).
  • the shared analysis can include spectral analysis, spectral band energy level analysis, spectral envelope analysis, voice activity detection analysis, and cross-correlation analysis.
  • the shared analysis module 410 permits a sharing of processing resources among the processing modules (AP1-AP5) to avoid each processing module repeating a common analysis. For instance, instead of each processing module performing an FFT analysis, the shared module 410 can perform an FFT and share the output results with the modules. In another arrangement, output results from the shared analysis can be conveyed to the processing modules based on frequency band requirements or trigger events and thresholds. For instance, AP1 may register as an event listener for ASM signals that exceed a certain level in a frequency band; AP2 may register as an event listener to receive ECM signals that match a particular masking profile, and so on.
  • each processing module can interpret the output results individually and make its own determination as to how to process the sound signals.
  • each processing module (AP1-AP5) can selectively process the plurality of sound signals in accordance with its own requirements.
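  • a compact sketch of a shared analysis module with event-listener registration follows; the API names, window choice, and decibel thresholding are assumptions made for the example:

        import numpy as np

        class SharedAnalysis:
            """One FFT per frame, shared by all AP modules; modules may also
            register band-level triggers instead of polling the spectrum."""

            def __init__(self, fs: float, frame_len: int):
                self.freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
                self.window = np.hanning(frame_len)
                self.listeners = []  # (lo_hz, hi_hz, threshold_db, callback)

            def register_band_listener(self, lo_hz, hi_hz, threshold_db, callback):
                """E.g., AP1 listens for ASM energy exceeding a level in one band."""
                self.listeners.append((lo_hz, hi_hz, threshold_db, callback))

            def analyze(self, frame: np.ndarray) -> np.ndarray:
                """Run the common FFT once and fire any registered triggers."""
                spectrum = np.fft.rfft(frame * self.window)
                power_db = 10.0 * np.log10(np.abs(spectrum) ** 2 + 1e-12)
                for lo, hi, thresh, cb in self.listeners:
                    band = (self.freqs >= lo) & (self.freqs < hi)
                    if band.any() and power_db[band].max() > thresh:
                        cb(power_db)  # event for the registered module
                return spectrum       # shared result for every module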
  • AP1 can be a sound detection module to identify one or more sounds in the user's environment, for example, in accordance with the teachings presented in U.S. Patent Application No. 11/966,457 filed on December 28, 2007 entitled "Method and Device for Sound Signature Detection", herein incorporated by reference in its entirety.
  • AP 2 can be a sound exposure monitoring module to assess safe listening levels, for instance, in accordance with the teachings presented in U.S. provisional patent application No.
  • AP 3 can be a sound enhancement module or voice control module, for instance, in accordance with the teachings presented in U.S. Provisional Application No. 60/911,691 filed on April 13, 2007 entitled "Method and Device for Voice Operated Control", and U.S. Provisional Application No. 60/885,917 filed on January 22, 2007 entitled "Method and Device for Acute Sound Detection and Reproduction", each herein incorporated by reference in its entirety.
  • AP 4 can be a sound correction module to modify a sound signal based on a sound level exposure within safe listening levels, for instance, in accordance with the teachings presented in U.S. Provisional Application No. 60/866,420 filed on November 18, 2006, entitled "Method and Device for Personalized Hearing", the entire disclosure of which is incorporated herein by reference.
  • Each of the processing modules (AP1 - AP5) generates separate audio control information 331 in view of the audio signals that can be used by the logic control unit 340 to make an informed decision related to filtering and mixing the sound signals and audio streams.
  • the audio control information 331 includes filter control data 342, audio control data 343, and router control data 344.
  • the filter control data 342 can include filter coefficients that each processing module (AP) identifies as being significant to their intended function.
  • the audio control data 343 can control a mixing of the plurality of audio streams at the AC 420 based on an established priority in view of the processing module decisions (e.g., warning signal detected, sound exposure level exceeded, echo feedback condition).
  • the router control data 344 can include mixing gains (g1-g9) to amplify or attenuate the filtered sound streams.
  • the logic control unit 340, responsive to receiving the audio control information 331, generates a priority that is used for generating the configuration data (342, 343 and 344) used to filter the sound signals and mix the filtered sound signals.
  • the priority controls how the signals are filtered at the filter set 350 (e.g., F1, F2, F3; see also FIG. 3).
  • F1 is a first filter for the internal sound signal from the ear canal (the ECM signal).
  • F2 is a second filter for the ambient sound signal (the ASM signal).
  • the logic control unit 340 varies the level of filtering at both F1 and F2 in accordance with the filter configuration data 342 as established by the determined priority.
  • the filter coefficients for the F1 filter and F2 filter are provided on a frame-by-frame basis in real-time with the filter configuration data 342 from the logic control unit 340.
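  • a sketch of such frame-by-frame operation is shown below, using SciPy's lfilter with carried-over state so the audio stays continuous across frames; the FIR-only assumption and function names are illustrative (in practice, coefficient changes are often cross-faded to avoid audible clicks):

        import numpy as np
        from scipy.signal import lfilter

        def process_stream(frames, coeff_stream):
            """Filter a stream of frames, accepting new FIR coefficients (b)
            with each frame's configuration data."""
            state = None
            for frame, b in zip(frames, coeff_stream):
                if state is None or len(state) != len(b) - 1:
                    state = np.zeros(len(b) - 1)  # (re)size if filter length changed
                out, state = lfilter(b, [1.0], frame, zi=state)
                yield out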
  • the priority also controls how the audio streams are mixed in the audio content interface 320 (e.g., VOP, Phone, PMP; see also FIG. 3) based on the audio control data, and how the filtered signals are mixed in the mixing matrix 360 (e.g., g1-g9; see also FIG. 3).
  • an AP1 module configured for warning sound detection, upon identifying a warning sound in an ASM sound signal, can generate filter coefficients to amplify one or a group of frequency bands of the warning sound (e.g., horn or siren).
  • the AP1 module can treat the other frequency bands as "don't care" states since they are not part of the warning sound. AP1 can also raise or flag a priority level indicating that a warning sound has been detected.
  • an AP3 module for speech enhancement may determine from a simultaneous ECM audio stream that the wearer is speaking and generate filter coefficients that accentuate certain portions of the sound spectrum. AP3 can raise a flag or priority level based on an energy level of the spoken voice.
  • AP4 may determine an echo feedback condition that could potentially damage the user's hearing and generate filter coefficients to null out the feedback. AP4 can raise a flag or priority level indicating an immediate danger condition.
  • AP2 may determine that a sound level exposure is being exceeded and generate audio control data to turn volume down on a media player.
  • the logic control unit 340, upon receiving the audio control information and corresponding priorities from the respective processing modules (AP1-AP5), can then determine the appropriate filter configuration given the filter coefficients and the priority.
  • the logic control unit 340 evaluates the audio control information from all the processing modules (AP1-APN) individually, and then collectively as a whole, for prioritizing the configuration data. For instance, warning sounds that are given a higher priority than voice from the ECM or music from the AC will be mixed according to that priority. Thus, the output sound signal generated by the mixing matrix 360 emphasizes the warning sound, followed by the voice, followed by the audio content. If voice commands are given a higher priority than music, then the mixing matrix 360 can reduce the music levels from the AC module when an AP module detects voice.
  • the priority can be established manually (e.g., via user interface) or automatically (e.g., user context, or presence).
  • the mixing matrix 360 can increase the ASM to ECM pass-through upon detection of a warning sound, while simultaneously lowering a phone volume if the user is checking voice mail messages. As yet another example, the mixing matrix 360 can reduce the ASM to ECM pass-through and simultaneously lower a music level if an incoming phone call is detected.
  • the priority can also be event driven responsive to detecting a sound signature, a background noise condition, a battery life indication, a manual interaction, or a voice recognition command.
  • the AC control unit 420 mixes audio content (e.g., phone, VOP, PMP) and outputs an AC signal (or a pair of stereo channels if two earphone devices are used). The mixing is controlled in accordance with configuration data 343 received from the control logic unit 340 and also automatically by the AC control unit 420.
  • the AC signal generated from the AC 420 is then filtered by audio content filter F3.
  • the internal ear-canal audio signal from the ECM 123 is filtered by ECM filter F1, and the ambient audio signal from the ASM 111 is filtered by ASM filter F2. All three filters (F1, F2 and F3) are updated and controlled by the logic control unit 340 via filter configuration data 342.
  • the filtered audio signals 355 are then passed to the mixing matrix 360.
  • the 9 mixing coefficients (g1-g9) of the mixing matrix 360 are controlled and updated by the control logic unit 340 via configuration data 344.
  • Each mixing coefficient (e.g., g1 -g9) can be a time-varying positive number, negative number, or zero.
  • the output sound signals are routed to the three peripheral output components (Phone 421 , ECR 125, and Storage 430).
  • the mixing gains can be normalized so as to permit balanced audio delivery.
  • the peripheral output components comprise the ear canal receiver ECR 125 for delivering audio to the ear canal, the output signal to the phone 421 (i.e. the signal that is transmitted to another remote individual), and the audio storage device 430 (e.g. hard drive on a PMP).
  • the filter configuration data 342 for the 3 filters, the AC configuration data 343 for the AC control unit, and the routing configuration data 344 for the mixing matrix 360 are generated responsive to receiving the audio control information from the plurality of processing modules (AP1-AP5).
  • each module analyzes the input audio signal and generates 3 output types: coefficients for the router (e.g., g1-g9), filter coefficients for the 3 signal filters (e.g., F1, F2 and F3), and audio control signals for the AC control unit.
  • each of these output types can have "don't care" states, which can be represented by a reserved number value (e.g., -1).
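  • one way such outputs might be represented in software is sketched below; NaN is used as the "don't care" marker here (rather than -1) so that legitimate negative gain values are not misread, and all field names are invented for the example:

        import numpy as np

        DONT_CARE = np.nan  # reserved marker; the text suggests a value such as -1

        # Hypothetical per-frame output of one processing module:
        module_output = {
            "priority": 3,
            "router": np.full(9, DONT_CARE),   # no opinion on g1-g9
            "filters": {
                "F1": np.full(32, DONT_CARE),  # no opinion on this filter
                "F2": np.zeros(32),            # placeholder 'care' coefficients
                "F3": np.full(32, DONT_CARE),
            },
            "ac_control": None,                # no AC mixing request
        }

        def is_dont_care(values) -> bool:
            """True when a module expressed no opinion for this output type."""
            if values is None:
                return True
            return bool(np.all(np.isnan(values)))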
  • the control logic unit 340 prioritizes the sound signals and audio streams for each signal type; for instance, it can prioritize the router coefficient values (e.g., mixing gains g1-g9) higher for one AP module than for another.
  • FIG. 5 is a flowchart of a method 500 for generating filter configuration data in accordance with an exemplary embodiment. The method 500 can be practiced with more or fewer than the number of steps shown and is not limited to the order shown.
  • the method 500 can be implemented in a single earpiece, a pair of earpieces, headphones, or other suitable headset audio delivery device.
  • method 500 describes one exemplary arrangement in which the logic control unit 340 generates filter configuration data from the audio content received by the individual AP modules to produce a combined filter output.
  • the filter output for method 500 includes all 3 filters (F1, F2 and F3), corresponding to the ECM sound signal, the ASM sound signal, and the AC sound signal.
  • the method 500 can start at step 502.
  • the logic control unit 340 can receive N filter vectors from the N processing modules 506 (AP1-APN). Each processing module generates a filter vector that is a set of filter coefficients (time or frequency domain). Each of the N processing modules (AP1-APN) also outputs an associated module priority.
  • the logic control unit 340 assesses the priorities raised by each processing module and evaluates the priorities against a corresponding reference module priority at step 510. If a processing module does not assign a particular priority value, then a default priority value is retrieved from the reference module priority list.
  • the logic control unit 340 identifies the processing module with the highest priority. If a first processing module returns the same module priority value as a second processing module, then the logic control unit 340 assigns a unique module priority value to each of the first and second modules, for example, based on a user context.
  • upon selecting the processing module with the highest priority, the logic control unit 340 proceeds to determine whether any configuration data is a "don't care", as shown in step 514.
  • a "don't care” value is a value that does not affect the filter state of the respective processing module.
  • the filter configuration data 342 generated by each processing module can include filter values of a time domain filter vector (e.g., for an FIR type filter) or a frequency domain filter (e.g., bi-quads) with "don't care" values; the logic control unit 340 can assign a reserved numerical value or symbol for "don't care" filter states.
  • a "don't care" flag can be associated with each filter output from each module.
  • the logic control unit 340 can copy the configuration data (e.g., filter data, router data, audio content data) for "care" states to internal memory for output to the filter set 350, mixing matrix 360, and audio control 420 (see FIG. 4).
  • the logic control unit 340 can continue to update the configuration data for "care" states as it is retrieved from each of the respective processing modules according to the priorities. Starting with the next highest priority module, as shown in step 520, the filter values for this module are then checked at step 514 to see if they correspond to a "don't care" state in the method loop. The method loop continues until the end of the module priority list, as shown in step 518.
  • if the logic control unit 340 identifies that a "don't care" state is present, the filter values for that state are not updated; otherwise, the filter values for this module are copied to the output and no further update of the filters is conducted until new filter vectors are received from all modules, as shown in step 522.
  • the logic control unit 340 thus outputs filter states using filter values taken directly from the processing modules based strictly on priority. If all module filter values are "don't care" states, then the filter output is the same as the previous filter output.
  • the method 500 continues back at step 504 to evaluate the priorities of each module in real-time as results are received.
  • the logic control unit 340 thus continually updates the filter values in view of priority for the respective filters (F1 , F2 and F3) to process audio signals and audio streams in real time.
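  • the selection loop of method 500 might be sketched as follows, reusing the is_dont_care helper and module-output layout from the earlier sketch; this is an illustration of the priority rule, not the patented implementation:

        def update_filters(module_outputs: list, previous: dict) -> dict:
            """For each filter (e.g., F1, F2, F3), adopt the coefficients of the
            highest-priority module that expressed a 'care' value; if every
            module says 'don't care', keep the previous filter state."""
            new = dict(previous)
            ranked = sorted(module_outputs, key=lambda m: m["priority"], reverse=True)
            for fname in previous:
                for out in ranked:
                    coeffs = out["filters"].get(fname)
                    if not is_dont_care(coeffs):
                        new[fname] = coeffs  # highest-priority 'care' value wins
                        break                # no further update for this filter
            return new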
  • FIG. 6 is a flowchart of a method 600 for generating filter configuration data where filter data is collectively combined based on priority.
  • the method 600 is an extension of method 500 of FIG. 5 and can be practiced with more or fewer than the number of steps shown and is not limited to the order shown.
  • FIG. 6 presents an embodiment wherein processing modules can have the same priority value. In such a case, when two modules have the same priority value and neither contains "don't care" filter states, the filter values from each of these two modules with the same priority value are combined.
  • the combination can be a simple summation of filter weights (e.g. for frequency domain filters) or a convolution of the two sequences (e.g. for time domain weights).
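  • the two combination rules stated above can be written directly; the sketch assumes equal-length frequency-domain weight vectors and treats time-domain combination as a cascade by convolution:

        import numpy as np

        def combine_equal_priority(vectors: list, domain: str) -> np.ndarray:
            """Merge filter vectors from modules that share one priority value."""
            if domain == "frequency":
                return np.sum(vectors, axis=0)       # simple summation of weights
            combined = np.asarray(vectors[0])
            for v in vectors[1:]:
                combined = np.convolve(combined, v)  # cascade of time-domain filters
            return combined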
  • the method 600 can start at step 602.
  • the logic control unit 340 can receive N filter vectors from the N processing modules 606 (AP1-APN). Each processing module generates a filter vector that is a set of filter coefficients (time or frequency domain). Each of the N processing modules (AP1-APN) also outputs an associated module priority.
  • the logic control unit 340 assesses the priorities raised by each processing module. If a processing module does not assign a particular priority value, then a default priority value can be retrieved from a reference module priority list.
  • the logic control unit 340 identifies the processing module with the highest priority.
  • the logic control unit 340 at step 618 then examines the audio control information from each of the N processing modules to identify "don't care" states. Filter values marked "don't care" are not considered in generating output filter configuration data 342 or router configuration data 344 (see FIG. 4). Accordingly, the method 600 continues to the next highest priority module at step 620 until the end of the module priority list is reached at step 622.
  • filter values other than "don't care" are, however, used in generating a combined filter output; that is, a filter output representative of the collective priority of all the processing modules (AP1-APN) based on their processing results. The combination also depends on whether the module is uniquely defined, for instance, whether it has precedence over combination with other filter values.
  • the logic control unit 340 determines whether a processing module with a "care" filter value is uniquely defined. If yes, then the filter values for that unique module are copied to the output (e.g., router configuration data, filter configuration data, audio configuration data) at step 616. If, however, the module is not unique, then the logic control unit 340 at step 612 combines the filter values with filter data from modules of the same priority.
  • FIG. 7 is an exemplary schematic 700 for configuring audio input and output via a mixing matrix in accordance with an exemplary embodiment.
  • the filtered ECM signal 702 is the output of the ECM filter (F1) of FIG. 4.
  • the mixing gains g1 (704), g4 (704) and g7 (707) correspond to the first column of the mixing matrix 360 as shown in FIG. 4.
  • the phone out 710, ECR 712, and audio storage 714 correspond to the output peripheral components (421 , 125, and 430) of the mixing matrix 360.
  • the mixing gains (g1 -g9) of the mixing matrix 360 can be applied to the filtered audio signals 355 (see FIG. 3) before they are routed to the corresponding peripheral output components (phone out 710, ECR 712, and audio storage 714).
  • FIG. 7 shows a signal path diagram for the filtered ECM input signal 702 to the 3 peripheral output components of the matrix 360.
  • the coefficient values can be time variant for each new input audio sample into the mixing matrix 360, or can remain constant for a block of input samples.
  • the 9 coefficients (g1-g9) for each of the mixing gains in mixing matrix 360 can be generated using a similar logic process as described in FIG. 5 and FIG. 6, except the inputs to the logic system are not filter vectors but rather 9 coefficient values from each module.
  • the priority value assigned to each module, or the priority value automatically generated by that module, can be a different value for the filter vectors than for the 9 coefficient values.
  • 9 coefficients are shown since there are 3 audio paths (ASM, ECM, and AC) and 3 output peripheral components (Phone, ECR, data storage).
  • the number of mixing gains and structure of the mixing matrix 360 can thus be a function of the number of audio input and audio output paths.
  • FIG. 8 is another exemplary schematic 800 of the software system 300 providing separate analysis and re-synthesis modules for peripheral inputs in accordance with an exemplary embodiment.
  • the schematic 800 combines the three audio filters F1, F2 and F3 with the shared analysis module 410 of the software system shown in FIG. 4.
  • Analysis/Re-synthesis unit 810 performs shared analysis of the ECM sound signal for processing modules (AP1-AP3).
  • Analysis/Re-synthesis unit 820 performs shared analysis of the ASM sound signal for processing modules (AP1-AP3).
  • Analysis/Re-synthesis unit 830 performs shared analysis of the Audio Content (AC) sound signal for processing modules (AP1-AP3).
  • the combined analysis/re-synthesis units 810, 820 and 830 filter the three input audio signals: the ECM signal, the ASM signal, and the Audio Content (AC) signal.
  • the AC signal comprises at least one of: an earcon signal (e.g., a "low battery" auditory warning cue), a signal from a mobile telephone, and a stereo signal from a PMP (e.g., portable DVD player, music audio player, etc.).
  • the output signals of the analysis/re-synthesis units 810, 820 and 830 are passed to the mixing columns (g1, g4, g7; g2, g5, g8; and g3, g6, g9) of the mixing matrix 360.
  • the mixing gains g1-g9 are controlled via the router configuration data from the logic control unit 340, which in turn is determined from the audio control information generated by the processing modules (AP1-AP3).
  • the mixing matrix 360 can associate a phone with a first row of a linkage matrix, associate an ECR with a second row of the linkage matrix, associate an audio storage with a third row of the linkage matrix, associate an ECM with a first column of the linkage matrix, associate an ASM with a second column of the linkage matrix, and associate an AC controller with a third column of the linkage matrix.
  • the mixing matrix can modify the output sound signal of the phone using values in the first row of the linkage matrix, modify the output sound signal of the ECR using values in the second row of the linkage matrix, and modify the output sound signal sent to the audio storage using values in the third row of the linkage matrix.
  • each processing module has three output types: filter gain coefficients which control the re-synthesis of the signals in the analysis/re-synthesis units, 9 coefficients for the router, and audio configuration data for the audio control.
  • the processing modules create AC Control signals to control the routing of the input AC signals using AC control unit (420, see FIG. 4).
  • the processing modules are configured and their internal state can be queried by the user interface 850 (e.g. a computer connected to the DSP unit via USB).
  • FIG. 9 is a more detailed schematic 900 of an analysis and re-synthesis unit for a particular peripheral input in accordance with an exemplary embodiment.
  • the analysis and re-synthesis unit 910 corresponds to the ASM sound signal path of the software system 800 shown in FIG. 8. That is, the analysis and re-synthesis unit 910 can receive audio input 901 from the ASM 111.
  • the schematic 900 can also be applied to produce filtered output for the ECM 123 sound signals and AC 420 sound signals (see FIG. 4) or other signals.
  • the analysis and re-synthesis unit 910 incorporates a filter-bank 912 (e.g., a cascade of band-pass filters (BPF)) structure for decomposing the ASM sound signal into a plurality of frequency bands.
  • the filter-bank can be based on a linear frequency scale or a non-linear frequency scale, such as an octave, 1/3 octave, critical band, equivalent rectangular band (ERB), or melody (mel)-frequency scale.
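  • for illustration, band edges on one such non-linear scale can be computed with the standard mel conversion (mel = 2595 * log10(1 + f/700)); the helper below is an invented example, not part of the patent:

        import numpy as np

        def mel_band_edges(f_lo_hz: float, f_hi_hz: float, n_bands: int) -> np.ndarray:
            """Return n_bands + 1 band edges (Hz), evenly spaced on the mel scale."""
            hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
            mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
            mels = np.linspace(hz_to_mel(f_lo_hz), hz_to_mel(f_hi_hz), n_bands + 1)
            return mel_to_hz(mels)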
  • the output of each filter-bank branch can then be presented to each of the N processing modules (AP1 920 and AP2 930).
  • the processing modules 920 and 930 generate audio control information 331 (see FIG. 3).
  • the filter configuration data 342 is used by the analysis and re-synthesis unit 910 to scale the filter-bank output signals. For instance, the filter-bank outputs are gain scaled to amplify or attenuate the individual filter-banks with the filter coefficients in the filter configuration data 342. The filter-bank output signals can then be summed to generate a composite filtered sound signal. The filtered sound signal is then presented to the mixing matrix (see FIG. 8) for routing and mixing to the one or more output peripheral components (e.g., Phone, ECR, data storage).
  • the input signal 901 is called "audio input 1", and can be any one of the exemplary three signals described previously (i.e., the ECM, ASM or AC signal).
  • the audio input signal is split and processed by the different band-pass-filters 912 (i.e. with different centre frequencies).
  • the response of each respective band pass filter 912 is such that if the filter outputs are summed, then the power spectral density is the same as the original audio input signal.
  • the band pass filters 912 can be time-domain FIR type filters or IIR type filters, or the analysis could be performed with a frequency domain FFT.
  • the output of each band-pass filter is fed to at least one module (for the sake of clarity, 2 modules are shown in the exemplary embodiment of FIG. 9). Each module then creates an output signal to control the re-synthesis of the band-pass filtered input signals.
  • this output signal is a vector of weights to control the gains of the filtered signals (i.e., the number of gains is equal to the number of filters used in the analysis).
  • the logic control unit 340 determines a single set of gain coefficients for the gains (i.e., G1, G2, G3 and G4 in the exemplary embodiment of FIG. 9).
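  • pulling the pieces together, a minimal analysis/re-synthesis unit could look like the sketch below: a bank of band-pass filters decomposes the input, per-band gains from the logic control unit scale each branch, and the branches are summed back into one filtered signal. Butterworth band-passes, the class name, and per-call (stateless) filtering are assumptions made for the example; a streaming version would carry sosfilt state between frames:

        import numpy as np
        from scipy.signal import butter, sosfilt

        class AnalysisResynthesis:
            def __init__(self, fs: float, band_edges_hz, order: int = 4):
                self.sos = [butter(order, (lo, hi), btype="bandpass",
                                   fs=fs, output="sos")
                            for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:])]

            def process(self, frame: np.ndarray, band_gains: np.ndarray) -> np.ndarray:
                """band_gains: one gain per band (length = number of filters)."""
                bands = np.stack([sosfilt(s, frame) for s in self.sos])  # analysis
                return (band_gains[:, None] * bands).sum(axis=0)         # re-synthesis

        # e.g., eight mel-spaced bands using the helper sketched earlier:
        # unit = AnalysisResynthesis(fs=16000, band_edges_hz=mel_band_edges(100, 7000, 8))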

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

A device (100) and method (500) for configuring audio delivery on an earpiece is provided. The earpiece can include a peripheral interface (310) configured to receive a plurality of sound signals and direct the plurality of sound signals to a plurality of audio processing modules (330) that produce audio control information (331) responsive to an analysis of the sound signals, a logic control unit (340) to receive the audio control information from the plurality of audio processing modules and generate configuration data (342), a filter set (350) to process the plurality of sound signals in accordance with the configuration data to produce filtered sound signals (355), and a mixing matrix (360) to mix the plurality of filtered sound signals in accordance with the configuration data to produce output sound signals (365) and route the output sound signals to at least one peripheral component (370). Other embodiments are disclosed.

Description

METHOD AND DEVICE FOR LINKING MATRIX CONTROL OF AN EARPIECE II
FIELD
[0001] The present invention relates to a method of controlling an earpiece device, and more particularly, though not exclusively, a method of configuring and managing audio input and output on an earpiece.
BACKGROUND
[0002] Headphones or earpieces can be used for music enjoyment or voice communication. Use of these devices has steadily been increasing, and more products are expanding functionality to support audio delivery to headphones and earpieces. [0003] The earpieces and associated products are becoming more intelligent as more communication features become available. However, the earpieces themselves have limited resources and must make efficient use of the many audio input and output configurations required to manage and support audio delivery.
[0004] A need therefore exists for improving audio configuration paths of headphones or earpieces.
SUMMARY
[0005] In a first embodiment, an earpiece can include a peripheral interface configured to receive a plurality of sound signals and direct the plurality of sound signals to a plurality of audio processing modules that produce audio control information responsive to an analysis of the sound signals, a logic control unit operatively coupled to the plurality of audio processing modules to receive the audio control information from the plurality of audio processing modules and generate configuration data, a filter set operatively coupled to the peripheral interface to process the plurality of sound signals in accordance with the configuration data to produce filtered sound signals, and a mixing matrix operatively coupled to the control logic unit and filter set to mix the plurality of filtered sound signals in accordance with the audio control information to produce output sound signals and route the output sound signals to at least one peripheral component. [0006] The peripheral component can be an Ear Canal Receiver (ECR), a phone, a portable communication device, or a data storage. The peripheral interface can include at least one Ambient Sound Microphone (ASM) configured to convert an ambient sound to an ambient sound signal, and at least one Ear Canal Microphone (ECM) configured to convert an internal sound from an ear canal of a user to an internal sound signal. The earpiece can further include an audio content interface configured to receive a plurality of audio streams and direct the plurality of audio streams to the plurality of audio processing modules. The audio content interface can receive an audio stream from a phone, a media player, or a portable communication device. The audio content interface can mix the plurality of audio streams based on a user context that is one among an incoming call, a music session, or a voice mail.
[0007] The audio control information can include filter data for processing the plurality of sound signals, audio control data for assigning a priority to the plurality of sound signals, and router data for mixing the plurality of filtered signals according to the priority. The priority can be event driven responsive to detecting a sound signature, a background noise condition, a battery life indication, a manual interaction, or a voice recognition command. [0008] In a second embodiment, an earpiece can include a peripheral interface configured to receive a plurality of sound signals and direct the plurality of sound signals to a plurality of audio processing modules, an audio content interface configured to receive a plurality of audio streams and also direct the plurality of audio streams to the plurality of audio processing modules, at least one signal analysis module operatively coupled to the peripheral interface and audio content interface to provide a shared analysis of the sound signals and the audio streams for the plurality of audio processing modules, a logic control unit operatively coupled to the plurality of audio processing modules and the at least one signal analysis module to receive the shared analysis and audio control information to generate configuration data, a filter set operatively coupled to the peripheral interface to process the plurality of sound signals in accordance with the configuration data to produce filtered sound signals, and a mixing matrix operatively coupled to the control logic unit and filter set to mix the plurality of filtered sound signals in accordance with the audio control information to produce output sound signals and route the output sound signals to at least one peripheral component.
[0009] The shared analysis can include spectral analysis, spectral band energy level analysis, spectral envelope analysis, voice activity detection analysis, and cross-correlation analysis. A separate signal analysis module can be provided for components of the peripheral interface and the audio content interface that is shared among the plurality of audio processing modules. The peripheral interface can include at least one Ambient Sound Microphone (ASM) coupled to an ASM signal analysis module configured to analyze an ambient sound signal, and at least one Ear Canal Microphone (ECM) coupled to an ECM signal analysis module configured to analyze an internal sound from an ear canal of a user. The audio content interface can be coupled to an audio content (AC) signal analysis module and configured to analyze an audio stream from a phone, a media player, or a portable communication device. The logic control unit can produce a linkage matrix of mixing gains that are applied to the plurality of filtered sound signals to produce the plurality of output sound signals to each of a respective peripheral output device. [0010] In a third embodiment, a method for configuring audio delivery on an earpiece can include the steps of receiving at least one sound signal and at least one audio stream, performing an analysis of the at least one sound signal and the at least one audio stream, presenting the analysis, the at least one sound signal, and the at least one audio stream to a plurality of audio processing modules that generate configuration data responsive to the receiving. The method can include filtering the at least one sound signal and at least one audio stream according to the configuration data to produce filtered sound signals, mixing the filtered signals according to the configuration data to produce output sound signals, and routing the output sound signals to at least one peripheral component. The at least one sound signal can be an ambient sound signal or an ear canal sound signal. The at least one audio stream can be received from a phone, a media player, or a portable communication device. A linkage matrix of mixing gains can be generated based on the configuration data that are applied to the plurality of filtered sound signals for producing the plurality of output sound signals.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] FIG. 1 is a pictorial diagram of an earpiece in accordance with an exemplary embodiment;
[0012] FIG. 2 is a block diagram of the earpiece in accordance with an exemplary embodiment;
[0013] FIG. 3 is an exemplary schematic of a software system for the earpiece in accordance with an exemplary embodiment;
[0014] FIG. 4 is a more detailed exemplary schematic of the software system of FIG. 3 for the earpiece in accordance with an exemplary embodiment; [0015] FIG. 5 is a flowchart of a method for generating filter configuration data from a plurality of processing modules in accordance with an exemplary embodiment;
[0016] FIG. 6 is a flowchart of a method for generating and applying filter configuration data from a plurality of processing modules in accordance with an exemplary embodiment;
[0017] FIG. 7 is an exemplary schematic for configuring audio input and output via a mixing matrix in accordance with an exemplary embodiment;
[0018] FIG. 8 is another exemplary schematic of a software system for the earpiece providing separate analysis and re-synthesis modules for peripheral inputs in accordance with an exemplary embodiment; and
[0019] FIG. 9 is a more detailed schematic for analysis and re-synthesis for a particular peripheral input in accordance with an exemplary embodiment.
DETAILED DESCRIPTION
[0020] The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. [0021] Processes, techniques, apparatus, and materials as known by one of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the enabling description where appropriate, for example the fabrication and use of transducers. Additionally in at least one exemplary embodiment the sampling rate of the transducers can be varied to pick up pulses of sound, for example less than 50 milliseconds. [0022] In all of the examples illustrated and discussed herein, any specific values, for example the sound pressure level change, should be interpreted to be illustrative only and non-limiting. Thus, other examples of the exemplary embodiments could have different values.
[0023] Note that similar reference numerals and letters refer to similar items in the following figures, and thus once an item is defined in one figure, it may not be discussed for following figures.
[0024] Note that herein when referring to correcting or preventing an error or damage (e.g., hearing damage), a reduction of the damage or error and/or a correction of the damage or error are intended.
[0025] At least one exemplary embodiment of the invention is directed to an earpiece for background noise mitigation. Reference is made to FIG. 1 in which an earpiece device, generally indicated as earpiece 100, is constructed in accordance with at least one exemplary embodiment. As illustrated, earpiece 100 depicts an electro-acoustical assembly 113 for an in-the-ear acoustic assembly, as it would typically be placed in the ear canal 131 of ear 117 of user 135. The earpiece 100 can be an in-the-ear earpiece, a behind-the-ear earpiece, a receiver-in-the-ear earpiece, an open-fit device, or any other suitable earpiece type. The earpiece 100 can be partially or fully occluded in the ear canal, and is suitable for use with users having healthy or abnormal auditory functioning.
[0026] Earpiece 100 includes an Ambient Sound Microphone (ASM) 111 to capture ambient sound, an Ear Canal Receiver (ECR) 125 to deliver audio to an ear canal 131, and an Ear Canal Microphone (ECM) 123 to capture internal sounds within the ear canal and also assess a sound exposure level within the ear canal. The earpiece 100 can partially or fully occlude the ear canal 131 to provide various degrees of acoustic isolation. The assembly is designed to be inserted into the user's ear canal 131, and to form an acoustic seal with the ear canal walls 129 at a location 127 between the entrance to the ear canal and the tympanic membrane (or ear drum) 133. Such a seal is typically achieved by means of a soft and compliant housing of assembly 113. Such a seal is pertinent to the performance of the system in that it creates a closed cavity 131 of approximately 5cc between the in-ear assembly 113 and the tympanic membrane 133. As a result of this seal, the ECR (speaker) 125 is able to generate a full range bass response when reproducing sounds for the user. This seal also serves to significantly reduce the sound pressure level at the user's eardrum resulting from the sound field at the entrance to the ear canal. This seal is also the basis for the sound isolating performance of the electro-acoustic assembly. [0027] Located adjacent to the ECR 125 is the ECM 123, which is acoustically coupled to the (closed) ear canal cavity 131. One of its functions is that of measuring the sound pressure level in the ear canal cavity 131 as a part of testing the hearing acuity of the user, as well as confirming the integrity of the acoustic seal and the working condition of itself and the ECR. The ECM 123 can also be used for capturing voice that is resonant within the ear canal when the user is speaking to permit voice communication.
[0028] The ASM 111 is housed in an ear seal 113 and monitors sound pressure at the entrance to the occluded or partially occluded ear canal. The ASM 111 can also be used to capture the user's voice externally for permitting voice communication. All transducers shown can receive or transmit audio signals to a processor 121 that undertakes audio signal processing and provides a transceiver for audio or voice via the wired or wireless communication path 119.
[0029] The earpiece 100 can actively monitor a sound pressure level both inside and outside an ear canal and enhance spatial and timbral sound quality while maintaining supervision to ensure safe sound reproduction levels. The earpiece 100 in various embodiments can conduct listening tests, filter sounds in the environment, monitor warning sounds in the environment, present notification based on identified warning sounds, maintain constant audio content to ambient sound levels, and filter sound in accordance with a Personalized Hearing Level (PHL).
[0030] Referring to FIG. 2, a block diagram of the earpiece 100 in accordance with an exemplary embodiment is shown. As illustrated, the earpiece 100 can include a processor 206 operatively coupled to the ASM 111, ECR 125, and ECM 123 via one or more Analog to Digital Converters (ADC) 202 and Digital to Analog Converters (DAC) 203. The processor 206 can measure ambient sounds in the environment received at the ASM 111 and internal sounds captured at the ECM 123. Ambient sounds correspond to sounds within the environment such as the sound of traffic noise, street noise, conversation babble, or any other acoustic sound.
[0031] Ambient sounds measured by the ASM 111 can also correspond to industrial sounds present in an industrial setting, such as, factory noise, lifting vehicles, automobiles, and robots. The processor 206 can monitor the ambient sound captured by the ASM 111 for sounds in the environment, such as an abrupt high energy sound corresponding to an on-set of a warning sound (e.g., bell, emergency vehicle, security system, etc.), siren (e.g., police car, ambulance, etc.), voice (e.g., "help", "stop", "police", etc.), or specific noise type (e.g., breaking glass, gunshot, etc.).
[0032] Internal sounds measured by the ECM 123 can correspond to sounds contained within the ear canal 131 such as spoken voice or audio content delivered by way of the ECR 125. The internal sounds can include residual background noise related to ambient sounds in the environment; for example, high level sounds that leak around the ear seal 127 and enter the ear canal 131. The processor 206 can monitor internal sounds captured by the ECM 123 and analyze the internal sounds. The processor 206 can also adjust a mixing between the ambient sound signals measured at the ASM 111 and the internal sound signals measured at the ECM 123, for example, responsive to assessing ambient background noise conditions. [0033] The processor 206 can utilize computing technologies such as a microprocessor, Application Specific Integrated Circuit (ASIC), and/or digital signal processor (DSP) with associated storage memory 208 such as Flash, ROM, RAM, SRAM, DRAM, or other like technologies for controlling operations of the earpiece device 100. The memory 208 can store program instructions for execution on the processor 206 as well as captured audio processing data.
[0034] The memory 208 can also store program instructions for execution on the processor 206. The memory 208 can be off-chip and external to the processor 206, and include a data buffer to temporarily capture the ambient sound and the internal sound, and a storage memory to save from the data buffer the recent portion of the history in a compressed format responsive to a directive by the processor. The data buffer can be a circular buffer that temporarily stores audio sound from a current time point back to a previous time point. It should also be noted that the data buffer can in one configuration reside on the processor 206 to provide high speed data access. The storage memory can be non-volatile memory such as SRAM to store captured or compressed audio data. [0035] The memory 208 can be a machine-readable medium. The term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and external memory) that store the one or more sets of instructions. The term "machine-readable medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
[0036] The term "machine-readable medium" shall accordingly be taken to include, but not be limited to: solid-state memories such as a memory card or other package that houses one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories; and/or magneto-optical or optical medium; and carrier wave signals such as a signal embodying computer instructions in a transmission medium. Accordingly, the disclosure is considered to include any one or more of a machine-readable medium or a distribution medium, as listed herein and including art-recognized equivalents and successor media, in which the software implementations herein are stored. [0037] The earpiece 100 can include an audio interface 212 operatively coupled to the processor 206 to receive audio content, for example from a media player or cell phone, and deliver the audio content to the processor 206. The processor 206 responsive to detecting ambient sounds can adjust the audio content and pass the ambient sounds directly to the ear canal. For instance, the processor 206 can lower a volume of the audio content played out the ECR 125 responsive to detecting an acute sound for transmitting the ambient sound to the ear canal. The processor 206 can also actively monitor the sound exposure level inside the ear canal via the ECM 123 and adjust the audio content to within a safe and subjectively optimized listening level range.
[0038] The earpiece 100 can further include a transceiver 204 that can support singly or in combination any number of wireless access technologies including without limitation Bluetooth™, Wireless Fidelity (WiFi), Worldwide Interoperability for Microwave Access (WiMAX), and/or other short or long range communication protocols. The transceiver 204 can also provide support for dynamic downloading over-the-air to the earpiece 100. It should be noted also that next generation access technologies can also be applied to the present disclosure.
[0039] The power supply 210 can utilize common power management technologies such as replaceable batteries, supply regulation technologies, and charging system technologies for supplying energy to the components of the earpiece 100 and to facilitate portable applications. A motor (not shown) driven by a single-supply motor driver can be coupled to the power supply 210 to improve sensory input via haptic vibration. As an example, the processor 206 can direct the motor to vibrate responsive to an action, such as a detection of a warning sound or an incoming voice call.
[0040] The earpiece 100 can further represent a single operational device or a family of devices configured in a master-slave arrangement, for example, a mobile device and an earpiece. In the latter embodiment, the components of the earpiece 100 can be reused in different form factors for the master and slave devices.
[0041] FIG. 3 is an exemplary schematic of a software system 300 for operating the earpiece 100. The software system 300 can reside at least in part or whole on the processor 206, the memory 208, and/or any associated machine readable storage medium operated on by the processor 206 (see FIG. 2). Generally stated, the software system 300 by way of the processor 206 can manage a configuration of audio input and output (paths) to the earpiece 100 to support audio delivery.
[0042] As illustrated, the software system 300 includes a peripheral interface 310 configured to manage a plurality of sound signals 315 and direct the plurality of sound signals 315 to a plurality of audio processing modules 330. The sound signals can be an ambient sound signal measured by the ASM 111 or an internal sound signal measured by the ECM 123. The software system 300 can include an audio content interface 320 configured to manage a plurality of audio streams 325 and direct the plurality of audio streams 325 to the plurality of audio processing modules 330. The audio stream can be a voice signal from a Phone, a music signal from a personal media player (PMP), or an audio signal provided by the earpiece (e.g., loopback signal). The plurality of sound signals 315 and the plurality of audio streams 325 are also passed to a filter set 350 as shown by the wide arrows.
[0043] The plurality of audio processing modules 330 can produce audio control information 331 responsive to an analysis of the sound signals. The audio control information 331 is provided to the logic control unit 340 for configuring filtering and mixing operations on the plurality of sound signals 315 and the plurality of audio streams 325 at the filter set 350. The logic control unit 340 is operatively coupled to the plurality of audio processing modules 330 to receive the audio control information 331 from the plurality of audio processing modules 330 and generate configuration data. The configuration data can include filter data 342 for processing the plurality of sound signals, audio control data 343 for assigning a priority to the plurality of sound signals, and router data 344 for mixing a plurality of filtered signals 355 according to the priority.
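Viewed as plain data, the three configuration-data streams described above can be pictured with a short sketch. The following Python is illustrative only; the field names are hypothetical and are keyed to the reference numerals, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical container for the configuration data produced by the logic
# control unit (340); field names are illustrative, not from the patent.
@dataclass
class ConfigurationData:
    filter_coeffs: List[List[float]]  # filter data (342): one coefficient vector per filter (e.g., F1-F3)
    stream_priority: List[int]        # audio control data (343): priority assigned to each signal/stream
    router_gains: List[float]         # router data (344): mixing gains (e.g., g1-g9) for the mixing matrix
```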
[0044] The software system 300 includes the filter set 350 operatively coupled to the peripheral interface to process the plurality of sound signals 315 in accordance with the configuration data 342 for generating the filtered sound signals 355. The filter set 350 also processes the plurality of audio streams 325 in accordance with the configuration data 342 to produce filtered sound signals 355. The filtered sound signals 355 are passed to a mixing matrix 360.
[0045] The mixing matrix 360 is operatively coupled to the control logic unit 340 and filter set 350 to mix the plurality of filtered sound signals 355 in accordance with the configuration data 344 and generate output sound signals 365 that are routed to at least one output peripheral component 370. The peripheral component 370 can be the ECR 125, a phone, or a data storage. The mixing matrix 360 can include individual gains that are applied to the plurality of filtered sound signals 355 for producing the output sound signals. [0046] FIG. 4 is a more detailed schematic of the software system 300 of FIG. 3. System components of FIG. 3 will be referred to when describing components of the detailed schematic. [0047] The peripheral interface 310 (see FIG. 3) can include the Ambient Sound Microphone (ASM) 111 configured to convert an ambient sound to an ambient sound signal, and the Ear Canal Microphone (ECM) 123 configured to convert an internal sound from an ear canal of a user to an internal sound signal. The peripheral interface 310 receives the ASM sound signals from the external environment, including background noise and warning sounds, and the ECM sound signal, including the voice of a wearer of the earpiece 100 and any audio content playing out the ECR 125. The components (e.g., ASM-R, ECM-R) of the peripheral interface 310 can contain more than the components shown, for instance, a secondary ASM for sound localization or noise suppression. The peripheral interface 310 can also include components (ASM-L, ECM-L) from a second earpiece such as a left (L) or right (R) earpiece. In this case, the software system 300 can reside on one of the earpieces, although resources can be shared if the software system 300 is enabled on both earpieces.
[0048] The audio content interface 320 (see FIG. 3) can receive audio streams, for example, from a phone 421, a personal media player (PMP) 422, a portable communication device (e.g., VOP) 423, or a component 411 of the earpiece. The local component 411 (e.g., ECR 125) can permit audio feedback for allowing the user to hear audio, for example, loop back when the user is speaking on the phone, or playing comfort noise during non-speech intervals. The audio content interface (AC) 420 can mix the plurality of audio streams to produce a single audio stream delivered to the processing modules (AP1-AP5). In one arrangement, the AC 420 can perform audio content mixing based on a user context. [0049] The user context identifies an operation or mode of the earpiece, such as, an incoming call, a music session, or a voice mail. For example, if the user is listening to music and an incoming call is detected, the earpiece 100 by way of the software system 300 (see FIG. 3) further discussed herein can lower the volume of the music relative to the incoming call ring tone to permit a hearing of the incoming call (e.g., ring tone). As another example, if the user is using the earpiece in a transparent "safe" mode while listening to music, then any harmful ambient sounds (e.g., loud jackhammer, bus noises) can be attenuated with the music, for instance, by reducing the ASM to ECM pass through levels. Similarly, important warning sounds captured at the ASM 111 can be elevated in the mix with respect to other sounds by increasing the ratio of ASM 111 to ECM 123 sound levels. The AC 420 can also mix the sounds in accordance with manual intervention, for example, if the user adjusts a volume level manually by way of a user interface or directly mixes the audio streams.
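The context-driven mixing described above can be sketched as a table of relative gains per user context. The context names, gain values, and function below are assumptions for illustration, not values prescribed by the patent.

```python
# Hypothetical gain table: each user context maps stream names to relative gains.
CONTEXT_GAINS = {
    "incoming_call": {"ring_tone": 1.0, "music": 0.2},  # duck music under the ring tone
    "music_session": {"ring_tone": 0.0, "music": 1.0},
    "voice_mail":    {"ring_tone": 0.3, "music": 0.1},
}

def mix_audio_streams(streams, context):
    """Weighted sum of equal-length audio streams for the active user context."""
    gains = CONTEXT_GAINS[context]
    length = len(next(iter(streams.values())))
    return [sum(gains.get(name, 0.0) * s[i] for name, s in streams.items())
            for i in range(length)]

# Example: an incoming call while music plays attenuates the music in the mix.
mixed = mix_audio_streams({"music": [0.5, 0.5], "ring_tone": [0.1, 0.2]}, "incoming_call")
```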
[0050] Referring to both FIGS. 3 and 4, the sound signals from peripheral interface 310 and the mixed sound signals from the audio content interface 320 can be passed through a signal analysis module 410 operatively coupled to the peripheral interface 310 and the audio content interface 320 to provide a shared analysis of the sound signals and the audio streams. The sound signals and the shared analysis can then be sent to the plurality of audio processing modules (AP1-AP5). The shared analysis can include spectral analysis, spectral band energy level analysis, spectral envelope analysis, voice activity detection analysis, and cross-correlation analysis.
[0051] The shared analysis module 410 permits a sharing of processing resources among the processing modules (AP1-AP5) to spare the individual processing modules from performing a common analysis. For instance, instead of each processing module performing an FFT analysis, the shared module 410 can perform an FFT and share output results with the modules. In another arrangement, output results from the shared analysis can be conveyed to the processing modules based on frequency band requirements or trigger events and thresholds. For instance, AP1 may register as an event listener for ASM signals that exceed a certain level in a frequency band; AP2 may register as an event listener to receive ECM signals that match a particular masking profile, and so on.
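One way to picture this sharing is a single FFT computed per frame and fanned out to modules that registered interest in a band and a trigger threshold. The listener API below is hypothetical; it is a minimal sketch of the idea, not the patent's implementation.

```python
import numpy as np

class SharedAnalysis:
    """Compute one FFT per frame and notify registered listeners (sketch)."""
    def __init__(self):
        self.listeners = []  # (callback, band_slice, threshold) tuples

    def register(self, callback, band_slice, threshold):
        # A processing module registers for a frequency band and energy threshold.
        self.listeners.append((callback, band_slice, threshold))

    def process_frame(self, frame):
        spectrum = np.abs(np.fft.rfft(frame))        # one FFT, shared by all modules
        for callback, band_slice, threshold in self.listeners:
            band_energy = float(np.sum(spectrum[band_slice] ** 2))
            if band_energy > threshold:              # fire the event only when exceeded
                callback(spectrum, band_energy)
```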
[0052] Alternatively, each processing module can interpret the output results individually and make its own respective determination as to how to process the sound signals. For instance, each processing module (AP1-AP5) can selectively process the plurality of sound signals in accordance with its own requirements. For example, AP1 can be a sound detection module to identify one or more sounds in the user's environment, for example, in accordance with the teachings presented in U.S. Patent Application No. 11/966,457 filed on December 28, 2007 entitled "Method and Device for Sound Signature Detection" herein incorporated by reference in its entirety. AP2 can be a sound exposure monitoring module to assess safe listening levels, for instance, in accordance with the teachings presented in U.S. provisional patent application No. 60/887,165 filed on January 30, 2007 entitled "Sound Pressure Level Monitoring and Notification System" herein incorporated by reference in its entirety. AP3 can be a sound enhancement module or voice control module, for instance, in accordance with the teachings presented in U.S. Provisional Application No. 60/911,691 filed on April 13, 2007 entitled "Method and Device for Voice Operated Control", and U.S. Provisional Application No. 60/885,917 filed on January 22, 2007 entitled "Method and Device for Acute Sound Detection and Reproduction" herein incorporated by reference in its entirety. AP4 can be a sound correction module to modify a sound signal based on a sound level exposure within safe listening levels, for instance, in accordance with the teachings presented in U.S. Provisional Application No. 60/866,420 filed on November 18, 2006, entitled "Method and Device for Personalized Hearing" the entire disclosure of which is incorporated herein by reference.
[0053] Each of the processing modules (AP1-AP5) generates separate audio control information 331 in view of the audio signals that can be used by the logic control unit 340 to make an informed decision related to filtering and mixing the sound signals and audio streams. The audio control information 331 includes filter control data 342, audio control data 343, and router control data 344. The filter control data 342 can include filter coefficients that each processing module (AP) identifies as being significant to its intended function. The audio control data 343 can control a mixing of the plurality of audio streams at the AC 420 based on an established priority in view of the processing module decisions (e.g., warning signal detected, sound exposure level exceeded, echo feedback condition). The router control data 344 can include mixing gains (g1-g9) to amplify or attenuate the filtered sound streams.
[0054] The logic control unit 340 responsive to receiving the audio control information 331 generates a priority that is used for generating the configuration data (342, 343 and 344) used to filter the sound signals and mix the filtered sound signals. The priority controls how the sound signals are filtered at the filter set 350 (e.g., F1, F2, F3, see also FIG. 3). As illustrated, F1 is a first filter for the ambient sound signal, and F2 is a second filter for the internal sound signal in the ear canal. The logic control unit 340 varies the level of filtering at both F1 and F2 in accordance with the filter configuration data 342 as established by the determined priority. The filter coefficients for the F1 filter and F2 filter are provided on a frame-by-frame basis in real-time with the filter configuration data 342 from the logic control unit 340. The priority also controls how the audio streams are mixed in the audio control unit 320 (e.g., see VOP, Phone, PMP, see also FIG. 3) based on the audio control data, and how the filtered signals are mixed in the mixing matrix 360 (e.g., g1-g9, see also FIG. 3). [0055] For example, an AP1 module configured for warning sound detection, upon identifying a warning sound in an ASM sound signal, can generate filter coefficients to amplify one or a group of frequency bands of the warning sound (e.g., horn or siren). The AP1 module can consider the other frequency bands as don't cares since they are not part of the warning sound. AP1 can also raise or flag a priority level that a warning sound has been detected. Similarly, the AP3 module for speech enhancement may determine from a simultaneous ECM audio stream that the wearer is speaking and generate filter coefficients that accentuate certain portions of the sound spectrum. AP3 can raise a flag or priority level based on an energy level of the spoken voice. AP4 may determine an echo feedback condition that could potentially damage the user's hearing and generate filter coefficients to null out the feedback. AP4 can raise a flag or priority level indicating an immediate danger condition. AP2 may determine that a sound level exposure is being exceeded and generate audio control data to turn volume down on a media player. The logic control unit 340 upon receiving the audio control information and corresponding priorities from the respective processing modules (AP1-AP5) can then determine the appropriate filter configuration given the filter coefficients and the priority.
[0056] Notably, the logic control unit 340 evaluates the audio control information from all the processing modules (AP1-APN) individually, and then collectively as a whole, for prioritizing the configuration data. For instance, warning sounds that are given a higher priority over voice from the ECM or music from the AC will be mixed according to the priority. Thus, the output sound signal generated by the mixing matrix 360 emphasizes the warning sound, followed by the voice, followed by the audio content. If voice commands are given a higher priority over music, then the mixing matrix 360 can reduce the music levels from the AC module when an AP module detects voice. Again, the priority can be established manually (e.g., via user interface) or automatically (e.g., user context, or presence). As another example, the mixing matrix 360 can increase ASM to ECM pass through upon detection of a warning sound, while simultaneously lowering a phone volume if the user is checking voice mail messages. As yet another example, the mixing matrix 360 can reduce ASM to ECM pass through and simultaneously lower a music level if an incoming phone call is detected. The priority can also be event driven responsive to detecting a sound signature, a background noise condition, a battery life indication, a manual interaction, or a voice recognition command.
[0057] In the following, a detailed description of one exemplary operation of audio input/output configuration of the software system 400 shown in FIG. 4 is provided. It should be noted that other configurations apply, for instance, using more or less than the number of mixing coefficients (g1-g9) or filters (F1-F3) shown.
[0058] As illustrated, the AC control unit 420 mixes audio content (e.g., phone, VOP, PMP) and outputs an AC signal (or pair of stereo channels if two earphone devices are used). The mixing is controlled in accordance with configuration data 343 received from the control logic unit 340 and also automatically by the AC control unit 420. The AC signal generated from the AC 420 is then filtered by audio content filter F3. The internal ear-canal audio signal from the ECM 123 is filtered by ECM filter F1, and the ambient audio signal from the ASM is filtered by ASM filter F2. All three filters (F1, F2 and F3) are updated and controlled by the logic control unit 340 via filter configuration data 342. [0059] The filtered audio signals 355 are then passed to the mixing matrix 360. The 9 mixing coefficients (g1-g9) of the mixing matrix 360 are controlled and updated by the control logic unit 340 via configuration data 344. Each mixing coefficient (e.g., g1-g9) can be a time-varying positive number, negative number, or zero. As part of the mixing, the output sound signals are routed to the three peripheral output components (Phone 421, ECR 125, and Storage 430). The mixing gains can be normalized so as to permit balanced audio delivery. The peripheral output components comprise the ear canal receiver ECR 125 for delivering audio to the ear canal, the output signal to the phone 421 (i.e. the signal that is transmitted to another remote individual), and the audio storage device 430 (e.g. hard drive on a PMP).
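Arranging the nine coefficients as a 3x3 matrix makes the routing easy to see: each row produces one peripheral output from the three filtered inputs. The gain values in this sketch are illustrative, not prescribed by the patent.

```python
import numpy as np

# Illustrative 3x3 arrangement of the nine mixing gains g1-g9: rows are the
# peripheral outputs (phone, ECR, storage), columns the filtered inputs.
G = np.array([[0.0, 0.7, 0.3],   # phone out   <- g1*ECM + g2*ASM + g3*AC
              [0.2, 0.3, 0.5],   # ECR out     <- g4*ECM + g5*ASM + g6*AC
              [0.5, 0.5, 0.0]])  # storage out <- g7*ECM + g8*ASM + g9*AC

def route(filtered_ecm, filtered_asm, filtered_ac, gains=G):
    """Mix the filtered signals (1-D arrays of equal length) into output signals."""
    inputs = np.vstack([filtered_ecm, filtered_asm, filtered_ac])  # shape (3, N)
    return gains @ inputs  # row order: phone, ECR, storage
```

Normalizing each column of G to sum to unity corresponds to the balanced-delivery constraint mentioned above.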
[0060] The filter configuration data 342 for the 3 filters, the AC configuration data 343 for the AC control unit, and the routing configuration data 344 for the mixing matrix 360 are generated responsive to receiving the audio control information from the plurality of processing modules (AP1-AP5). Recall, each module (e.g., modules 1-5, or more) receives as its inputs at least one of the unfiltered audio input signals (e.g. the ECM and ASM signals) or audio sound streams (e.g., music signal from PMP). As illustrated, each module analyzes the input audio signal and generates 3 output types: coefficients for the router (e.g., g1-g9), filter coefficients for the 3 signal filters (e.g., F1, F2 and F3), and audio control signals for the AC control unit. Each of these output types can have "don't care" states, which can be represented by a reserved number value (e.g., -1). [0061] The control logic unit 340 prioritizes the sound signals and audio streams for each signal type; for instance, it can prioritize the router coefficient values (e.g., mixing gains g1-g9) higher for one AP module than the other. In some cases, the control logic can maximally prioritize the signal type from one module so that the output router coefficients, for example, from one module are copied directly to the output of the control logic unit. In other cases, the output signals of the same type from different modules can be combined by a weighted addition. Similarly, the filter coefficients for a particular filter (F1-F3) can be generated by a weighted addition (frequency domain) or convolution (time domain). [0062] FIG. 5 is a flowchart of a method 500 for generating filter configuration data in accordance with an exemplary embodiment. The method 500 can be practiced with more or less than the number of steps shown and is not limited to the order shown. To describe the method 500, reference will be made to components of FIG. 2, although it is understood that the method 500 can be implemented in any other manner using other suitable components. The method 500 can be implemented in a single earpiece, a pair of earpieces, headphones, or other suitable headset audio delivery device.
[0063] In particular, method 500 describes one exemplary arrangement in which the logic circuit 340 generates filter configuration data from the audio content received by individual AP modules to produce a combination filter output. In the exemplary embodiment, only one filter of the three filters shown in FIG. 4 is explained (e.g. generation of F1 coefficients for the ECM audio path). In other exemplary embodiments, the filter output for method 500 includes all 3 filters (F1, F2 and F3), for instance, for the ECM sound signal, the ASM sound signal and the AC sound signal.
[0064] The method 500 can start at step 502. At step 504, the logic control unit 340 can receive N filter vectors from the N processing modules 506 (AP1-APN). Each processing module generates a filter vector that is a set of filter coefficients (time or frequency domain). Each of the N processing modules (AP1-APN) also outputs an associated module priority. [0065] At step 508, the logic control unit 340 assesses the priorities raised by each processing module and evaluates the priorities against a corresponding reference module priority at step 510. If a processing module does not assign a particular priority value, then a default priority value is retrieved from the reference module priority list. At step 512, the logic control unit 340 identifies the processing module with the highest priority. If a first processing module returns the same module priority value as a second processing module, then the logic control unit 340 assigns a unique module priority value to both the second and first modules, for example, based on a user context.
[0066] Upon selecting the processing module with the highest priority, the logic control unit 340 proceeds to determine if any configuration data is a "don't care" as shown in step 514. A "don't care" value is a value that does not affect the filter state of the respective processing module. The filter configuration data 342 generated by each processing module can include filter values of a time domain filter vector (e.g. for an FIR type filter) or frequency domain filter (e.g., bi-quads) with a "don't care" value; the logic control unit 340 can assign a reserved numerical value or symbol for "don't care" filter states. A "don't care" flag can be associated with each filter output from each module.
[0067] At step 516, the logic control unit 340 can copy the configuration data (e.g., filter data, router data, audio content data) for "care" states to internal memory for output to the filter set 350, mixing matrix 360, and audio control 420 (see FIG. 4). The logic control unit 340 can continue to update the configuration data for "care" states as it is retrieved from each of the respective processing modules according to the priorities. Starting with the next highest priority module, as shown in step 520, the filter values for this high priority module are then checked at step 514 to see if they correspond to a "don't care" state in the method loop. The method loop continues until the end of the module priority list as shown in step 518.
[0068] If the logic control unit 340 identifies that a "don't care" state is present, the filter values for that state are not updated; else, the filter values for this module are copied to the output and no further update of the filters is conducted until new filter vectors are received from all modules as shown in step 522. In such regard, the logic control unit 340 outputs filter states using filter values directly from the processing modules based strictly on priority. If all module filter values are "don't care" states, then the filter output is the same as a previous filter output. The method 500 continues back at step 504 to evaluate the priorities of each module in real-time as results are received. The logic control unit 340 thus continually updates the filter values in view of priority for the respective filters (F1, F2 and F3) to process audio signals and audio streams in real time.
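A minimal sketch of this selection loop, assuming each module reports a (priority, filter vector) pair and the reserved value -1 marks a "don't care" state; the data layout is hypothetical.

```python
DONT_CARE = -1  # reserved value marking a "don't care" filter state (cf. paragraph [0060])

def select_filter_vector(module_outputs, previous_output):
    """Sketch of method 500's loop: module_outputs is a list of
    (priority, filter_vector) pairs from AP1-APN; the highest-priority
    "care" vector wins, else the previous output is kept."""
    for priority, vector in sorted(module_outputs, key=lambda m: m[0], reverse=True):
        if vector != DONT_CARE:    # first module with a "care" state is copied to the output
            return vector
    return previous_output         # all modules reported "don't care"
```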
[0069] FIG. 6 is a flowchart of a method 600 for generating filter configuration data where filter data is collectively combined based on priority. The method 600 is an extension of method 500 of FIG. 5 and can be practiced with more or less than the number of steps shown and is not limited to the order shown. Briefly, FIG. 6 presents an embodiment wherein processing modules can have a same priority value. In such a case, when two modules have the same priority value and neither contains "don't care" filter states, the filter values from each of these two modules with the same priority value are combined. The combination can be a simple summation of filter weights (e.g. for frequency domain filters) or a convolution of the two sequences (e.g. for time domain weights). [0070] The method 600 can start at step 602. At step 604, the logic control unit 340 can receive N filter vectors from the N processing modules 606 (AP1-APN). Each processing module generates a filter vector that is a set of filter coefficients (time or frequency domain). Each of the N processing modules (AP1-APN) also outputs an associated module priority. At step 608, the logic control unit 340 assesses the priorities raised by each processing module. If a processing module does not assign a particular priority value, then a default priority value can be retrieved from a reference module priority list. [0071] At step 610, the logic control unit 340 identifies the processing module with the highest priority. The logic control unit 340 at step 618 then examines the audio control information from each of the N processing modules to identify "don't care" states. Filter values with "don't care" are not considered in generating output filter configuration data 342 or router configuration data 344 (see FIG. 4). Accordingly, the method 600 continues to get the next most important priority modules at step 620 until the end of the module priority list is reached at step 622.
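The combination step for modules of equal priority might look like the following sketch, which sums frequency-domain weights and convolves (cascades) time-domain impulse responses; the function name and data layout are assumptions.

```python
import numpy as np

def combine_equal_priority(vectors, domain="frequency"):
    """Merge filter vectors from modules sharing one priority value:
    summation for frequency-domain weights, convolution (a filter
    cascade) for time-domain impulse responses."""
    if domain == "frequency":
        return np.sum(vectors, axis=0)      # simple summation of filter weights
    combined = np.asarray(vectors[0])
    for v in vectors[1:]:
        combined = np.convolve(combined, v) # cascade the time-domain filters
    return combined
```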
[0072] Filter values other than "don't care" are however used in generating a combined filter output; that is, a filter output representative of the collective priority of all the processing modules (AP1-APN) based on their processing results. The combination also depends on whether the module is uniquely defined; for instance, it has precedence over combination with other filter values. Thus at step 614, the logic control unit 340 determines whether a processing module with a "care" filter value is uniquely defined. If yes, then the filter values for that unique module are copied to the output (e.g., router configuration, filter configuration data, audio configuration data) at step 616. If however, the module is not unique, then the logic control unit 340 at step 612 combines the filter value with filter data from modules of the same priority. These filter values are then copied to the output at step 616 if no other modules eclipse their collective priority, or until a module with higher priority is identified. Upon evaluating all new filter vectors from all the modules at step 624, the method 600 can return back to step 604 to continue to retrieve the filter vectors in real-time. [0073] FIG. 7 is an exemplary schematic 700 for configuring audio input and output via a mixing matrix in accordance with an exemplary embodiment. Reference will be made to FIG. 4 when describing the schematic 700. For instance, the filtered ECM signal 702 is the output of the ECM filter (F1) of FIG. 4. The mixing gains g1 (704), g4 (704) and g7 (707) correspond to the first column of the mixing matrix 360 as shown in FIG. 4. The phone out 710, ECR 712, and audio storage 714 correspond to the output peripheral components (421, 125, and 430) of the mixing matrix 360.
[0074] The mixing gains (g1-g9) of the mixing matrix 360 can be applied to the filtered audio signals 355 (see FIG. 3) before they are routed to the corresponding peripheral output components (phone out 710, ECR 712, and audio storage 714). FIG. 7 shows a signal path diagram for the filtered ECM input signal 702 to the 3 peripheral output components of the matrix 360. The mixing gains can be normalized coefficients whose total contribution across a column of the mixing matrix is unity. For example, mixing gains of g1=0.2, g4=0.3, and g7=0.5 are applied to the filtered ECM signal 702 to produce phone out 710, ECR out 712, and audio storage out 714. The coefficient values can be time variant for each new input audio sample into the mixing matrix 360, or can remain constant for a block of input samples.
[0075] In one exemplary embodiment, the 9 coefficients (g1-g9) for each of the mixing gains in mixing matrix 360 can be generated using a similar logic process as described in FIG. 5 and FIG. 6, except the inputs to the logic system are not filter vectors but rather 9 coefficient values from each module. The priority value assigned to each module, or the priority value automatically generated by that module, can be a different value for the filter vectors than for the 9 coefficient values. Moreover, 9 coefficients are shown since there are 3 audio paths (ASM, ECM, and AC) and 3 output peripheral components (Phone, ECR, data storage). The number of mixing gains and structure of the mixing matrix 360 can thus be a function of the number of audio input and audio output paths.
[0076] FIG. 8 is another exemplary schematic 800 of the software system 300 providing separate analysis and re-synthesis modules for peripheral inputs in accordance with an exemplary embodiment. As illustrated, the schematic 800 combines the three audio filters F1, F2 and F3 with the shared analysis module 410 of the software system shown in FIG. 4. Analysis/Re-synthesis unit 810 performs shared analysis of the ECM sound signal for processing modules (AP1-AP3). Analysis/Re-synthesis unit 820 performs shared analysis of the ASM sound signal for processing modules (AP1-AP3). Analysis/Re-synthesis unit 830 performs shared analysis of the Audio Content (AC) sound signal for processing modules (AP1-AP3).
[0077] The combined analysis/re-synthesis units 810, 820 and 830 filter the 3 input audio signals: the ECM signal, the ASM signal, and the Audio Content (AC) signal. The AC signal comprises at least one of: an earcon signal (e.g. a "low battery" auditory warning cue); a signal from a mobile telephone; and a stereo signal from a PMP (e.g. portable DVD player, music audio player, etc.). The output signals of the analysis/re-synthesis units 810, 820 and 830 are passed to the mixing columns (g1, g4, g7; g2, g5, g8; and g3, g6, g9) of the mixing matrix 360. The mixing gains g1-g9 are controlled via the router configuration data from the logic control unit 340, which in turn was determined from the audio control information generated by the processing modules (AP1-AP3).
[0078] As illustrated, the mixing matrix 360 can associate a phone with a first row of a linkage matrix, associate an ECR with a second row of the linkage matrix, associate an audio storage with a third row of the linkage matrix, associate an ECM with a first column of the linkage matrix, associate an ASM with a second column of the linkage matrix, and associate an AC controller with a third column of the linkage matrix. The mixing matrix can modify the output sound signal of the phone using values in the first row of the linkage matrix, modify the output sound signal of the ECR using values in the second row of the linkage matrix, and modify the output sound signal sent to the audio storage using values in the third row of the linkage matrix.
[0079] Recall each processing module has three output types: filter gain coefficients, which control the re-synthesis of the signals in the analysis/re-synthesis units; 9 coefficients for the router; and audio configuration data for the audio control. The processing modules (AP1-AP3) create AC Control signals to control the routing of the input AC signals using the AC control unit (420, see FIG. 4). The processing modules are configured, and their internal state can be queried, by the user interface 850 (e.g. a computer connected to the DSP unit via USB).
[0080] FIG. 9 is a more detailed schematic 900 of an analysis and re-synthesis unit for a particular peripheral input in accordance with an exemplary embodiment. For instance, the analysis and re-synthesis unit 910 corresponds to the ASM sound signal path of the software system 800 shown in FIG. 8. That is, the analysis and re-synthesis unit 910 can receive audio input 901 from the ASM 111. Notably, the schematic 900 can also be applied for producing filtered output for the ECM 123 sound signals and AC 420 sound signals (see FIG. 4) or other signals.
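Before the detailed walk-through below, the filter-bank analysis and gain-scaled re-synthesis can be sketched in a few lines, here assuming Butterworth band-pass filters; the band edges, filter order, and gains are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy.signal import butter, lfilter

def analysis_resynthesis(x, fs, band_edges, gains):
    """Decompose x into band-pass bands, gain-scale each band, and sum the
    scaled bands back into a single filtered output signal (sketch)."""
    out = np.zeros(len(x))
    for (low, high), g in zip(band_edges, gains):
        b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
        out += g * lfilter(b, a, x)   # one gain (e.g., G1-G4) per filter-bank output
    return out

# Example: four illustrative bands at a 16 kHz sampling rate.
x = np.random.randn(1600)
y = analysis_resynthesis(x, 16000,
                         [(100, 400), (400, 1200), (1200, 3000), (3000, 7000)],
                         [1.0, 0.5, 2.0, 0.8])
```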
[0081] As illustrated in FIG. 9, the analysis and re-synthesis unit 910 incorporates a filter-bank 912 (e.g., cascade of band-pass filters (BPF)) structure for decomposing the ASM sound signal into a plurality of filter-banks. The filter-bank can be based on a linear frequency scale or a non-linear frequency scale, such as, an octave, 1/3 octave, critical band, equivalent rectangular band (ERB), or melody (mel)-frequency scale. Each output of each filter-bank can then be presented to each of the N processing modules (AP1 920 and AP2 930). As previously indicated, the processing modules 920 and 930 generate audio control information 331 (see FIG. 3) that the logic control unit 340 uses in generating filter configuration data 342 (see FIG. 4). In the embodiment shown, the filter configuration data 342 is used by the analysis and re-synthesis unit 910 to scale the filter-bank output signals. For instance, the filter-bank outputs are gain scaled to amplify or attenuate the individual filter-banks with the filter coefficients in the filter configuration data 342. The filter-bank output signals can then be summed to generate a composite filtered sound signal. The filtered sound signal is then presented to the mixing matrix (see FIG. 8) for routing and mixing to the one or more output peripheral components (e.g., Phone, ECR, data storage). [0082] In the following, a detailed description for operation of the analysis and re-synthesis unit 910 for use with an audio input signal is provided. The input signal 901 is called "audio input 1", and can be any one of the exemplary three signals described previously (i.e. ECM, ASM or AC signal). The audio input signal is split and processed by the different band-pass filters 912 (i.e. with different centre frequencies). In one arrangement, the response of each respective band-pass filter 912 is such that if the filter outputs are summed, then the power spectral density is the same as the original audio input signal. The band-pass filters 912 can be time-domain FIR type filters or IIR type filters, or the analysis could be with a frequency domain FFT. The output signal of each band-pass filter is fed to at least one module (for the sake of clarity, 2 modules are shown in the exemplary embodiment of figure 9). Each module then creates an output signal to control the re-synthesis of the band-pass filtered input signals. This output signal is a vector of weights to control the gains of the filtered signals (i.e. the number of gains is equal to the number of filters used in the analysis). The logic control unit 340 determines a single set of gain coefficients for the gains (i.e. G1, G2, G3 and G4 in the exemplary embodiment of figure 9). After the first set of band-pass filtered signals have been multiplied with the gains (G1-G4) to create a second set of amplified (or attenuated) band-pass filtered signals, the second set of signals are summed with the summing unit to create a single filtered ASM output sound signal, which forms one of the three exemplary inputs to the previously described mixing matrix 360. [0083] While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications, equivalent structures and functions of the relevant exemplary embodiments.
Thus, the description of the invention is merely exemplary in nature, and variations that do not depart from the gist of the invention are intended to be within the scope of the exemplary embodiments of the present invention. Such variations are not to be regarded as a departure from the spirit and scope of the present invention.

Claims

What is claimed is:
1. An earpiece, comprising:
a peripheral interface configured to manage a plurality of sound signals and direct the plurality of sound signals to a plurality of audio processing modules that produce audio control information responsive to an analysis of the plurality of sound signals;
a logic control unit operatively coupled to the plurality of audio processing modules to receive the audio control information from the plurality of audio processing modules and generate configuration data;
a filter set operatively coupled to the peripheral interface to process the plurality of sound signals in accordance with the configuration data to produce filtered sound signals; and
a mixing matrix operatively coupled to the control logic unit and filter set to mix the plurality of filtered sound signals in accordance with the configuration data to produce output sound signals and route the output sound signals to at least one peripheral component.
2. The earpiece of claim 1, wherein the peripheral component is an Ear Canal Receiver (ECR), a phone, a portable communication device, or a storage.
3. The earpiece of claim 1, wherein the peripheral interface comprises:
at least one Ambient Sound Microphone (ASM) configured to convert an ambient sound to an ambient sound signal; and
at least one Ear Canal Microphone (ECM) configured to convert an internal sound from an ear canal of a user to an internal sound signal.
4. The earpiece of claim 1, further comprising:
an audio content interface configured to manage a plurality of audio streams and direct the plurality of audio streams to the plurality of audio processing modules.
5. The earpiece of claim 4, wherein the audio content interface receives an audio stream from a phone, a media player, or a portable communication device.
6. The earpiece of claim 4, wherein the audio content interface mixes the plurality of audio streams based on a user context that is one among an incoming call, a music session, or a voice mail.
7. The earpiece of claim 1, wherein the configuration data comprises filter data for processing the plurality of sound signals, audio control data for assigning a priority to the plurality of sound signals, and router data for mixing the plurality of filtered signals according to the priority.
8. The earpiece of claim 7, wherein the priority is event driven responsive to detecting a sound signature, a background noise condition, a battery life indication, a manual interaction, or a voice recognition command.
9. The earpiece of claim 3, wherein the filter set comprises:
a first filter for the ambient sound signal; and
a second filter for the internal sound signal,
where the filter coefficients for the first filter and second filter are provided on a frame-by-frame basis in real-time with the configuration data from the logic control unit.
10. An earpiece, comprising:
a peripheral interface configured to manage a plurality of sound signals and direct the plurality of sound signals to a plurality of audio processing modules;
an audio content interface configured to manage a plurality of audio streams and direct the plurality of audio streams to the plurality of audio processing modules;
at least one signal analysis module operatively coupled to the peripheral interface and audio content interface to provide a shared analysis of the sound signals and the audio streams for the plurality of audio processing modules;
a logic control unit operatively coupled to the plurality of audio processing modules and the at least one signal analysis module to receive the shared analysis and audio control information to generate configuration data;
a filter set operatively coupled to the peripheral interface to process the plurality of sound signals in accordance with the configuration data to produce filtered sound signals; and
a mixing matrix operatively coupled to the control logic unit and filter set to mix the plurality of filtered sound signals in accordance with the audio control information to produce output sound signals and route the output sound signals to at least one peripheral component.
11. The earpiece of claim 10, wherein the shared analysis comprises spectral analysis, spectral band energy level analysis, spectral envelope analysis, voice activity detection analysis, and cross-correlation analysis.
12. The earpiece of claim 10, wherein a separate signal analysis module is provided for components of the peripheral interface and the audio content interface and is shared among the plurality of audio processing modules.
13. The earpiece of claim 10, wherein the peripheral interface comprises at least one Ambient Sound Microphone (ASM) coupled to an ASM signal analysis module configured to analyze an ambient sound signal; and at least one Ear Canal Microphone (ECM) coupled to an ECM signal analysis module configured to analyze an internal sound from an ear canal of a user.
14. The earpiece of claim 10, wherein the audio content interface is coupled to an audio content (AC) signal analysis module and configured to analyze an audio stream from a phone, a media player, or a portable communication device.
15. The earpiece of claim 10, wherein the logic control unit produces a linkage matrix of mixing gains that the mixing matrix applies to the plurality of filtered sound signals for producing the plurality of output sound signals.
16. The earpiece of claim 10, wherein the mixing matrix: associates a phone with a first row of a linkage matrix; associates an ECR with a second row of the linkage matrix; associates an audio storage with a third row of the linkage matrix; associates an ECM with a first column of the linkage matrix; associates an ASM with a second column of the linkage matrix; and associates an AC controller with a third column of the linkage matrix.
17. The earpiece of claim 16, wherein the mixing matrix: modifies the output sound signal of the phone using values in the first row of the linkage matrix; modifies the output sound signal of the ECR using values in the second row of the linkage matrix; and modifies the output sound signal sent to the audio storage using values in the third row of the linkage matrix.
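
The row and column assignments of claims 16 and 17 describe, in effect, a 3-by-3 gain matrix multiplying a vector of filtered inputs: columns for the ECM, ASM, and AC signals, rows for the phone, ECR, and audio storage outputs. A minimal sketch follows, with invented gain values; only the row/column layout is taken from the claims.

    import numpy as np

    # Columns per claim 16: ECM, ASM, AC (audio content).
    # Rows per claim 16: phone, ECR (ear canal receiver), audio storage.
    linkage = np.array([
        [0.8, 0.2, 0.0],  # phone uplink: mostly ear-canal voice, a little ambient
        [0.0, 0.1, 0.9],  # ECR: media playback with slight ambient pass-through
        [0.5, 0.5, 0.0],  # storage: record both microphones, skip the media
    ])

    # One frame of each filtered signal (synthetic data for illustration).
    rng = np.random.default_rng(0)
    ecm, asm, ac = rng.standard_normal((3, 160))

    # Claim 17: each output is modified by the values in its row of the matrix.
    phone_out, ecr_out, storage_out = linkage @ np.vstack([ecm, asm, ac])

Because each output is a weighted sum over one row, the logic control unit can re-route or re-balance signals simply by updating matrix entries, without touching the signal path itself.
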
18. A method for configuring audio delivery on an earpiece, the method comprising the steps of: receiving at least one sound signal and at least one audio stream; performing an analysis of the at least one sound signal and the at least one audio stream; presenting the analysis, the at least one sound signal, and the at least one audio stream to a plurality of audio processing modules that generate configuration data responsive to the receiving; filtering the at least one sound signal and the at least one audio stream according to the configuration data to produce filtered sound signals; mixing the filtered sound signals according to the configuration data to produce output sound signals; and routing the output sound signals to at least one peripheral component.
19. The method of claim 18, wherein the at least one sound signal is an ambient sound signal or an ear canal sound signal, and the at least one audio stream is received from a phone, a media player, or a portable communication device.
20. The method of claim 18, further comprising generating, based on the configuration data, a linkage matrix of mixing gains that are applied to the plurality of filtered sound signals to produce the plurality of output sound signals.
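
Taken end to end, the method of claim 18 is a per-frame pipeline: receive, analyze, configure, filter, mix, route. The self-contained sketch below strings those steps together; the RMS analysis, the ducking rule, and every name in it are simplified stand-ins rather than details drawn from the patent.

    import numpy as np

    def analyze(frame):
        # Stand-in for the shared analysis of claim 11 (spectral, VAD,
        # cross-correlation, etc.): a single RMS energy estimate.
        return float(np.sqrt(np.mean(frame ** 2)))

    def configure(mic_energy, threshold=0.1):
        # Toy logic control: duck the media stream when microphone energy is
        # high (e.g. speech present), otherwise favor media playback.
        if mic_energy > threshold:
            gains = [[0.8, 0.2, 0.0], [0.3, 0.2, 0.5], [0.5, 0.5, 0.0]]
        else:
            gains = [[0.8, 0.2, 0.0], [0.0, 0.1, 0.9], [0.5, 0.5, 0.0]]
        return {"coeffs": np.ones(8) / 8.0, "gains": np.array(gains)}

    def process_frame(ecm, asm, ac):
        # Claim 18's steps on one frame: receive -> analyze -> configure ->
        # filter -> mix -> route to the peripherals (phone, ECR, storage).
        cfg = configure(analyze(np.concatenate([ecm, asm])))
        filtered = [np.convolve(x, cfg["coeffs"], mode="same") for x in (ecm, asm, ac)]
        outputs = cfg["gains"] @ np.vstack(filtered)
        return dict(zip(("phone", "ecr", "storage"), outputs))

    # Usage with synthetic 160-sample frames:
    rng = np.random.default_rng(1)
    out = process_frame(0.2 * rng.standard_normal(160),
                        0.05 * rng.standard_normal(160),
                        np.sin(np.linspace(0.0, 20.0, 160)))
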
PCT/US2008/073189 2007-08-14 2008-08-14 Method and device for linking matrix control of an earpiece ii WO2009023784A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US95584807P 2007-08-14 2007-08-14
US60/955,848 2007-08-14
US96823707P 2007-08-27 2007-08-27
US60/968,237 2007-08-27

Publications (1)

Publication Number Publication Date
WO2009023784A1 (en) 2009-02-19

Family ID=40351160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/073189 WO2009023784A1 (en) 2007-08-14 2008-08-14 Method and device for linking matrix control of an earpiece ii

Country Status (1)

Country Link
WO (1) WO2009023784A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6754359B1 (en) * 2000-09-01 2004-06-22 Nacre As Ear terminal with microphone for voice pickup
US20060262944A1 (en) * 2003-02-25 2006-11-23 Oticon A/S Method for detection of own voice activity in a communication device
US20050058313A1 (en) * 2003-09-11 2005-03-17 Victorian Thomas A. External ear canal voice detection

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11818552B2 (en) 2006-06-14 2023-11-14 Staton Techiya Llc Earguard monitoring system
US11848022B2 (en) 2006-07-08 2023-12-19 Staton Techiya Llc Personal audio assistant device and method
US11710473B2 (en) 2007-01-22 2023-07-25 Staton Techiya Llc Method and device for acute sound detection and reproduction
US11750965B2 (en) 2007-03-07 2023-09-05 Staton Techiya, Llc Acoustic dampening compensation system
US11550535B2 (en) 2007-04-09 2023-01-10 Staton Techiya, Llc Always on headwear recording system
US11856375B2 (en) 2007-05-04 2023-12-26 Staton Techiya Llc Method and device for in-ear echo suppression
US11683643B2 (en) 2007-05-04 2023-06-20 Staton Techiya Llc Method and device for in ear canal echo suppression
US11489966B2 (en) 2007-05-04 2022-11-01 Staton Techiya, Llc Method and apparatus for in-ear canal sound suppression
US11889275B2 (en) 2008-09-19 2024-01-30 Staton Techiya Llc Acoustic sealing analysis system
US11610587B2 (en) 2008-09-22 2023-03-21 Staton Techiya Llc Personalized sound management and method
US11589329B1 (en) 2010-12-30 2023-02-21 Staton Techiya Llc Information processing using a population of data acquisition devices
US20220191608A1 (en) 2011-06-01 2022-06-16 Staton Techiya Llc Methods and devices for radio frequency (rf) mitigation proximate the ear
US11832044B2 (en) 2011-06-01 2023-11-28 Staton Techiya Llc Methods and devices for radio frequency (RF) mitigation proximate the ear
US11736849B2 (en) 2011-06-01 2023-08-22 Staton Techiya Llc Methods and devices for radio frequency (RF) mitigation proximate the ear
US11659315B2 (en) 2012-12-17 2023-05-23 Staton Techiya Llc Methods and mechanisms for inflation
US11917100B2 (en) 2013-09-22 2024-02-27 Staton Techiya Llc Real-time voice paging voice augmented caller ID/ring tone alias
US11741985B2 (en) 2013-12-23 2023-08-29 Staton Techiya Llc Method and device for spectral expansion for an audio signal
US11693617B2 (en) 2014-10-24 2023-07-04 Staton Techiya Llc Method and device for acute sound detection and reproduction
US9401158B1 (en) 2015-09-14 2016-07-26 Knowles Electronics, Llc Microphone signal fusion
US9961443B2 (en) 2015-09-14 2018-05-01 Knowles Electronics, Llc Microphone signal fusion
US9830930B2 (en) 2015-12-30 2017-11-28 Knowles Electronics, Llc Voice-enhanced awareness mode
US9779716B2 (en) 2015-12-30 2017-10-03 Knowles Electronics, Llc Occlusion reduction and active noise reduction based on seal quality
US11917367B2 (en) 2016-01-22 2024-02-27 Staton Techiya Llc System and method for efficiency among devices
US9812149B2 (en) 2016-01-28 2017-11-07 Knowles Electronics, Llc Methods and systems for providing consistency in noise reduction during speech and non-speech periods
CN106714064A (en) * 2017-02-28 2017-05-24 浙江诺尔康神经电子科技股份有限公司 Artificial cochlea audio real-time processing system and method
CN106714064B (en) * 2017-02-28 2022-06-17 浙江诺尔康神经电子科技股份有限公司 Real-time processing method for cochlear prosthesis audio
US11818545B2 (en) 2018-04-04 2023-11-14 Staton Techiya Llc Method to acquire preferred dynamic range function for speech enhancement

Similar Documents

Publication Publication Date Title
WO2009023784A1 (en) Method and device for linking matrix control of an earpiece ii
US11710473B2 (en) Method and device for acute sound detection and reproduction
US11057701B2 (en) Method and device for in ear canal echo suppression
CN110089129B (en) On/off-head detection of personal sound devices using earpiece microphones
US9456268B2 (en) Method and device for background mitigation
US9066167B2 (en) Method and device for personalized voice operated control
US9191740B2 (en) Method and apparatus for in-ear canal sound suppression
US9706280B2 (en) Method and device for voice operated control
WO2009097009A1 (en) Method and device for linking matrix control of an earpiece
CN102484461A (en) A system and a method for providing sound signals
US20230011879A1 (en) Method and apparatus for in-ear canal sound suppression
WO2008128173A1 (en) Method and device for voice operated control
US20220122605A1 (en) Method and device for voice operated control

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 08797904

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 08797904

Country of ref document: EP

Kind code of ref document: A1