WO2021126981A1 - System, method, and apparatus for generating and digitally processing a head related audio transfer function - Google Patents


Info

Publication number
WO2021126981A1
Authority
WO
WIPO (PCT)
Prior art keywords
microphone
ear
user
disposed
wearable apparatus
Prior art date
Application number
PCT/US2020/065315
Other languages
French (fr)
Inventor
Ryan J. COPT
Joseph G. BUTERA III
Robert J. SUMMERS III
Mark Harpster
David LOPEZ JR
Original Assignee
Bongiovi Acoustics Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/917,001 external-priority patent/US11202161B2/en
Application filed by Bongiovi Acoustics Llc filed Critical Bongiovi Acoustics Llc
Priority to CN202080096632.6A priority Critical patent/CN115104323A/en
Publication of WO2021126981A1 publication Critical patent/WO2021126981A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1008 Earpieces of the supra-aural or circum-aural type
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1083 Reduction of ambient noise
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50 Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505 Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/305 Electronic adaptation of stereophonic audio signals to reverberation of the listening space
    • H04S7/306 For headphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • Serial No. 11/947,301 is a continuation-in-part of Serial No. 11/703,216, filed February 7,
  • the present invention provides for a system and apparatus for generating a real time head related audio transfer function. Specifically, unique structural components are utilized in connection with a microphone to reproduce certain acoustic characteristics of the human pinna in order to facilitate the communication of the location of a sound in three dimensional space to a user.
  • the invention may further utilize an audio processor to digitally process the head related audio transfer function.
  • Binaural cues relate to the differences of arrival and intensity of the sound between the two ears, which assist with the relative localization of a sound source.
  • Monaural cues relate to the interaction between the sound source and the human anatomy, in which the original sound is modified by the external ear before it enters the ear canal for processing by the auditory system.
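The binaural cues described above can be made concrete with a simple spherical-head model. The sketch below uses Woodworth's classic interaural time difference (ITD) approximation; the formula and the head radius are textbook assumptions for illustration, not values taken from this application.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, dry air at about 20 degrees C
HEAD_RADIUS = 0.0875     # m, an assumed average head radius

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation of the ITD in seconds.

    `azimuth_deg` is the source angle measured from straight ahead;
    90 degrees is directly to one side. Illustrative model only.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to the side arrives at the far ear well under a
# millisecond later than at the near ear.
print(f"{interaural_time_difference(90.0) * 1e3:.2f} ms")
```

A source straight ahead (0 degrees) yields zero ITD, which is why monaural pinna cues are needed to resolve front/back and elevation.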
  • the modifications encode the source location relative to the ear location and are known as head-related transfer functions (HRTF).
  • HRTFs describe the filtering of a sound source before it is perceived at the left and right ear drums, in order to characterize how a particular ear receives sound from a particular point in space.
  • These modifications may include the shape of the listener’s ear, the shape of the listener’s head and body, the acoustical characteristics of the space in which the sound is played, and so forth. All these characteristics together influence how a listener can accurately tell what direction a sound is coming from. Thus, a pair of HRTFs accounting for all these characteristics, generated by the two ears, can be used to synthesize a binaural sound and accurately recognize it as originating from a particular point in space.
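As an illustration of how a pair of HRTFs synthesizes a binaural sound, the sketch below convolves a mono signal with a left/right head-related impulse response (HRIR) pair, the time-domain form of the two HRTFs. The toy impulse responses are invented for the example; real HRIRs are measured per listener.

```python
def convolve(signal, kernel):
    """Direct discrete convolution; output length is
    len(signal) + len(kernel) - 1."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono source at one fixed position by filtering it
    through a left/right HRIR pair."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy HRIRs invented for illustration: the right ear hears this source
# one sample later and quieter than the left, encoding its direction.
left, right = binauralize([1.0, 0.0, -1.0], [1.0, 0.3], [0.0, 0.6, 0.2])
```

Played over headphones, the `left`/`right` pair carries the interaural time and level differences that let the listener place the source in space.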
  • HRTFs have wide-ranging applications, from virtual surround sound in media and gaming, to hearing protection in loud noise environments, and hearing assistance for the hearing impaired. Particularly, in the fields of hearing protection and hearing assistance, the ability to record and reconstruct a particular user’s HRTF presents several challenges, as it must occur in real time. In the case of an application for hearing protection in high noise environments, heavy hearing protection hardware must be worn over the ears in the form of bulky headphones. Thus, if microphones are placed on the outside of the headphones, the user will hear the outside world but will not receive accurate positional data, because the HRTF is not being reconstructed. Similarly, in the case of hearing assistance for the hearing impaired, a microphone is mounted external to the hearing aid, and any hearing aid device that fully blocks a user’s ear canal will not accurately reproduce that user’s HRTF.
  • the present invention meets the existing needs described above by providing for an apparatus, system, and method for generating a head related audio transfer function.
  • the present invention also provides for the ability to enhance audio in real-time and tailors the enhancement to the physical characteristics of a user and the acoustic characteristics of the external environment.
  • an apparatus directed to the present invention also known as an HRTF generator, comprises an external manifold and internal manifold.
  • the external manifold is exposed at least partially to an external environment, while the internal manifold is disposed substantially within an interior of the apparatus and/or a larger device or system housing said apparatus.
  • the external manifold comprises an antihelix structure, a tragus structure, and an opening.
  • the opening is in direct air flow communication with the outside environment, and is structured to receive acoustic waves.
  • the tragus structure is disposed to partially enclose the opening, such that the tragus structure will partially impede and/or affect the characteristics of the incoming acoustic waves going into the opening.
  • the antihelix structure is disposed to further partially enclose the tragus structure as well as the opening, such that the antihelix structure will partially impede and/or affect the characteristics of the incoming acoustic waves flowing onto the tragus structure and into the opening.
  • the antihelix and tragus structures may comprise partial domes or any variation of partial-domes comprising a closed side and an open side.
  • the open side of the antihelix structure and the open side of the tragus structure are disposed in confronting relation to one another.
  • the opening of the external manifold is connected to and in air flow communication with an opening canal inside the external manifold.
  • the opening canal may be disposed in a substantially perpendicular orientation relative to the desired listening direction of the user.
  • the opening canal is in further air flow communication with an auditory canal, which is formed within the internal manifold but may also be formed partially in the external manifold.
  • the internal manifold comprises the auditory canal and a microphone housing.
  • the microphone housing is attached or connected to an end of the auditory canal on the opposite end to its connection with the opening canal.
  • the auditory canal, or at least a portion of the auditory canal, may be disposed in a substantially parallel orientation relative to the desired listening direction of the user.
  • the microphone housing may further comprise a microphone mounted against the end of the auditory canal.
  • the microphone housing may further comprise an air cavity behind the microphone on an end opposite its connection to the auditory canal, which may be sealed with a cap.
  • the apparatus or HRTF generator may form a part of a larger system. Accordingly, the system may comprise a left HRTF generator, a right HRTF generator, a left preamplifier, a right preamplifier, an audio processor, a left playback module, and a right playback module.
  • the left HRTF generator may be structured to pick up and filter sounds to the left of a user.
  • the right HRTF generator may be structured to pick up and filter sounds to the right of the user.
  • a left preamplifier may be structured and configured to increase the gain of the filtered sound of the left HRTF generator.
  • a right preamplifier may be structured and configured to increase the gain of the filtered sound of the right HRTF generator.
  • the audio processor may be structured and configured to process and enhance the audio signal received from the left and right preamplifiers, and then transmit the respective processed signals to each of the left and right playback modules.
  • the left and right playback modules or transducers are structured and configured to convert the electrical signals into sound for the user, such that the user can then perceive the filtered and enhanced sound from the user’s environment, which includes audio data that allows the user to localize the source of the originating sound.
  • the system of the present invention may comprise a wearable device such as a headset or headphones having the HRTF generator embedded therein.
  • the wearable device may further comprise the preamplifiers, audio processor, and playback modules, as well as other appropriate circuitry and components.
  • a method for generating a head related audio transfer function may be used in accordance with the present invention.
  • external sound is first filtered through an exterior of an HRTF generator which may comprise a tragus structure and an antihelix structure.
  • the filtered sound is then passed to the interior of the HRTF generator, such as through the opening canal and auditory canal described above to create an input sound.
  • the input sound is received at a microphone embedded within the HRTF generator adjacent to and connected to the auditory canal in order to create an input signal.
  • the input signal is amplified with a preamplifier in order to create an amplified signal.
  • the amplified signal is then processed with an audio processor, in order to create a processed signal.
  • the processed signal is transmitted to the playback module in order to relay audio and/or locational audio data to a user.
  • the audio processor may receive the amplified signal and first filter the amplified signal with a high pass filter.
  • the high pass filter in at least one embodiment, is configured to remove ultra-low frequency content from the amplified signal resulting in the generation of a high pass signal.
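The ultra-low-frequency removal step can be sketched with a one-pole DC-blocking high-pass filter. The cutoff frequency and sample rate below are assumptions for illustration; the application does not specify the filter's design or corner frequency.

```python
import math

def high_pass(samples, sample_rate=48_000.0, cutoff_hz=20.0):
    """One-pole high-pass sketch: removes ultra-low-frequency content
    (DC offset, handling rumble) below an assumed `cutoff_hz`."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out

# A constant (0 Hz) input decays toward zero: DC is rejected.
filtered = high_pass([1.0] * 1000)
```

Frequencies well above the cutoff pass through nearly unchanged, so audible content survives while subsonic energy is stripped before the later gain stages.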
  • the high pass signal from the high pass filter is then filtered through a first filter module to create a first filtered signal.
  • the first filter module is configured to selectively boost and/or attenuate the gain of select frequency ranges in an audio signal, such as the high pass signal.
  • the first filter module boosts frequencies above a first frequency, and attenuates frequencies below the first frequency.
  • the first filtered signal from the first filter module is then modulated with a first compressor to create a modulated signal.
  • the first compressor is configured for the dynamic range compression of a signal, such as the first filtered signal. Because the first filter module boosted higher frequencies and attenuated lower frequencies, the first compressor may, in at least one embodiment, be configured to trigger on and adjust the higher frequency material, while remaining relatively insensitive to lower frequency material.
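Dynamic range compression as described here can be sketched with a minimal static gain computer. The threshold and ratio are invented illustrative values, and a real compressor would add attack/release smoothing; this is not the patent's specific compressor design.

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Static dynamic-range compression sketch: above `threshold`,
    the level grows `ratio` times more slowly, reducing peaks while
    leaving quiet material untouched. Values are illustrative."""
    out = []
    for x in samples:
        level = abs(x)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(level if x >= 0 else -level)
    return out

# With threshold 0.5 and ratio 4:1, a full-scale 1.0 peak is
# squeezed down to 0.625, while 0.25 passes through unchanged.
print(compress([0.25, 1.0]))
```

Because the preceding filter stage lifted the high band above the threshold, a compressor like this would react mainly to high-frequency material, as the text describes.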
  • the modulated signal from the first compressor is then filtered through a second filter module to create a second filtered signal.
  • the second filter module is configured to selectively boost and/or attenuate the gain of select frequency ranges in an audio signal, such as the modulated signal.
  • the second filter module is configured to be of at least partially inverse relation relative to the first filter module. For example, if the first filter module boosted content above a first frequency by +X dB and attenuated content below the first frequency by -Y dB, the second filter module may then attenuate the content above the first frequency by -X dB, and boost the content below the first frequency by +Y dB.
  • the purpose of the second filter module in one embodiment may be to “undo” the gain adjustment that was applied by the first filter module.
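The inverse relationship between the two filter modules can be checked numerically, since cascaded filters sum their gains in dB. The brick-wall shelf below is a deliberate idealization (real shelving filters transition smoothly), and the corner frequency and gain values are assumptions for illustration.

```python
def shelf_gain_db(freq_hz, corner_hz, above_db, below_db):
    """Idealized brick-wall shelf: gain in dB at a given frequency.
    A hard split at `corner_hz` stands in for a smooth shelf."""
    return above_db if freq_hz >= corner_hz else below_db

X, Y, corner = 6.0, 3.0, 1000.0  # illustrative +X/-Y dB and corner
for f in (100.0, 5000.0):
    first = shelf_gain_db(f, corner, +X, -Y)    # first filter module
    second = shelf_gain_db(f, corner, -X, +Y)   # inverse second module
    # Cascading the two modules sums their dB gains: net 0 dB, i.e.
    # the second module "undoes" the first, as the text describes.
    assert first + second == 0.0
```

The net spectral balance is restored, but the compressor that ran between the two modules has already acted on the tilted signal, which is the point of the arrangement.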
  • the second filtered signal from the second filter module is then processed with a first processing module to create a processed signal.
  • the first processing module may comprise a peak/dip module.
  • the first processing module may comprise both a peak/dip module and a first gain element.
  • the first gain element may be configured to adjust the gain of the signal, such as the second filtered signal.
  • the peak/dip module may be configured to shape the signal, such as to increase or decrease overshoots or undershoots in the signal.
  • each band may comprise the output of a fourth order section, which may be realized as the cascade of second order biquad filters.
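A fourth-order section realized as a cascade of second-order (biquad) filters can be sketched as follows. The RBJ-cookbook band-pass coefficients, centre frequency, and Q are assumptions for illustration; the application does not specify the filter design.

```python
import math

def bandpass_biquad(fs, f0, q):
    """RBJ-cookbook band-pass biquad coefficients (0 dB peak gain).
    `fs`, `f0`, and `q` values used below are illustrative."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b = (alpha / a0, 0.0, -alpha / a0)
    a = (1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0)
    return b, a

def biquad(samples, b, a):
    """Direct Form I second-order section."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0]*x + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Cascading two second-order sections yields a fourth-order band.
b, a = bandpass_biquad(fs=48_000.0, f0=1000.0, q=0.707)
impulse = [1.0] + [0.0] * 255
band = biquad(biquad(impulse, b, a), b, a)
```

The impulse response of the cascade decays to (numerical) silence, confirming a stable fourth-order band extractor built from two biquads.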
  • the low band signal is modulated with a low band compressor to create a modulated low band signal.
  • the high band signal is modulated with a high band compressor to create a modulated high band signal.
  • the low band compressor and high band compressor are each configured to dynamically adjust the gain of a signal.
  • Each of the low band compressor and high band compressor may be computationally and/or configurationally identical to the first compressor.
  • the modulated low band signal, the mid band signal, and the modulated high band signal are then processed with a second processing module.
  • the second processing module may comprise a summing module configured to combine the signals.
  • the summing module in at least one embodiment may individually alter the gain of each of the modulated low band, mid band, and modulated high band signals.
  • the second processing module may further comprise a second gain element. The second gain element may adjust the gain of the combined signal in order to create a processed signal that is transmitted to the playback module.
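The summing module and second gain element can be sketched as a per-band gain trim followed by recombination and a master gain. All gain values and the three-band split are illustrative assumptions, not values from the application.

```python
def sum_bands(low, mid, high, gains=(1.0, 1.0, 1.0), master_gain=1.0):
    """Summing-module sketch: individually trim the modulated low,
    mid, and modulated high band signals, combine them, then apply
    a final ("second") gain element. Gain values are illustrative."""
    g_lo, g_mid, g_hi = gains
    return [master_gain * (g_lo * l + g_mid * m + g_hi * h)
            for l, m, h in zip(low, mid, high)]

# Two samples per band, with assumed per-band trims and a 0.5x
# master gain standing in for the second gain element.
processed = sum_bands([0.2, 0.1], [0.5, 0.4], [0.1, 0.3],
                      gains=(1.0, 0.8, 1.2), master_gain=0.5)
```

The resulting `processed` signal is what would then be transmitted to the playback module.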
  • the method described herein may be configured to capture and transmit locational audio data to a user in real time, such that it can be utilized as a hearing aid, or in loud noise environments to filter out loud noises.
  • the HRTF generator rather than being embedded in a wearable device, may actually be configured as the wearable device itself.
  • the HRTF generator will be configured into at least one, but most preferably two, in-ear assembly apparatus(es). The at least one in-ear assembly is operatively positioned, or in an operative position, when it is disposed on a user’s ear, or worn by a user.
  • the in-ear assembly may comprise at least one shell or chamber to house the various HRTF structures, and provide an exterior surface to place or attach structures on the outside.
  • the in-ear assembly may comprise a primary chamber proximal to a user’s ear(s) and a secondary chamber distal to a user’s ear(s), when in an operative position.
  • the exterior of the in-ear assembly’s secondary chamber comprises a windscreen structure, an antihelix structure, a tragus structure, and a microphone opening or aperture.
  • the windscreen structure, antihelix structure, and tragus structures can be removed from the exterior of the secondary chamber, providing a means of replacing the structures.
  • the windscreen structure, antihelix structure, and tragus structures may vary in size and shape.
  • the windscreen structure can attach to the exterior of the secondary chamber via at least one connecting point.
  • a variety of materials may be utilized, but in a preferred embodiment open cell foam is housed within the windscreen structure to ensure the quality of the incoming signals.
  • the windscreen structure and the material housed inside the windscreen structure will partially cover the antihelix structure, the tragus structure, and the microphone opening or aperture.
  • the antihelix structure and the tragus structure can also cover, partially or fully, the microphone aperture in order to mimic the structure of a human ear.
  • the microphone aperture is in direct air flow communication with the external environment via an opening and microphone channel.
  • the microphone may be attached to an end of the microphone channel.
  • the microphone will receive the external noise that filters through the windscreen structure, the antihelix structure, tragus structure, microphone aperture, and microphone channel, ensuring that the audio signal produced by the HRTF Generator will include the “directionality” that occurs when a human ear detects sound from a point in space.
  • the microphone disposed within the end of the microphone channel is located inside the in-ear assembly, and may be located within the secondary or primary chamber of a preferred embodiment of the in-ear assembly.
  • the microphone channel and the microphone may be in a substantially parallel orientation, or alternatively, perpendicular orientation, relative to the listening direction of a user when wearing the in-ear assembly.
  • the microphone is located adjacent to, or even directly connected to, a playback module, i.e. one or more speakers or transducers; the microphone transmits audio input signals to the playback module, and in turn the playback module transmits an audio output signal to a user via an auditory channel connected to the user’s ear.
  • the in-ear assembly will also comprise a preamplifier to amplify an audio input signal received from the microphone, and an audio processor to receive the amplified signal for processing. The audio processor will then transmit a processed, higher quality signal to the playback module.
  • the playback module may be housed within the in-ear assembly, together with one or more speaker drivers. The playback module may be mounted flush on an end of the auditory channel, or there may be an air cavity between the playback module and the end of the auditory channel.
  • the auditory channel is disposed within the user’s ear when the in-ear assembly is in an operative position, and a foam ear tip or other material may be attached to the end of the auditory channel to protect a user’s ear(s) as well as insulate the user’s ear(s) from ambient noise.
  • the speaker(s) or playback module(s) is located inside the in-ear assembly, and in one embodiment, the playback module is located in the primary chamber of the in-ear assembly.
  • the microphone that receives audio input from the external environment and the playback module that sends audio output to the user are isolated from one another in a preferred embodiment, in order to avoid unwanted feedback.
  • the interior of the in-ear assembly contains a baffle isolation structure that traverses the interior of the in-ear assembly, creating a physical isolation between the microphone and the playback module.
  • an acoustic isolation can be created without a physical barrier between the microphone and playback module.
  • the isolation baffle achieves the goal of creating at least a 30 decibel noise isolation between the microphone and the playback module.
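To put the 30 decibel figure in perspective, a dB difference converts to a linear amplitude ratio as 10^(dB/20). The small sketch below works this out; it is background acoustics arithmetic, not a calculation from the application.

```python
def db_to_pressure_ratio(db: float) -> float:
    """Convert a sound-pressure-level difference in decibels to a
    linear amplitude ratio: ratio = 10 ** (dB / 20)."""
    return 10.0 ** (db / 20.0)

# 30 dB of isolation means the playback signal leaking back to the
# microphone is attenuated to about 1/31.6 of its amplitude
# (roughly 1/1000 of its power), enough to suppress feedback.
print(round(db_to_pressure_ratio(30.0), 1))
```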
  • the isolation of the microphone and playback module provides for reduction in noise interference and feedback noise between the microphone and playback module in a miniaturized apparatus such as the in-ear assembly, which has its exterior, or more specifically, the secondary chamber, exposed to the external environment’s sound waves.
  • the stabilizing assembly may comprise a circular collar that is disposed about the exterior of the primary chamber, and a concha-shaped structure connected to the circular collar that is dimensioned and configured to be disposed on the external ear of a user when in an operative position, preferably within the concha.
  • the tragus and anti-helix structures can thereby be oriented properly for generation of accurate HRTF signals; an improper orientation may otherwise generate misleading HRTF signals for the user.
  • At least one in-ear assembly may form a system.
  • the system may comprise a left in-ear assembly structured to pick up and filter sounds incoming from the left side of a user.
  • the right in-ear assembly may be structured to pick up and filter sounds incoming from the right side of the user.
  • a left preamplifier within the left in-ear assembly may be structured and configured to increase the gain of the filtered sound of the left in-ear assembly.
  • a right preamplifier within the right in-ear assembly may be structured and configured to increase the gain of the filtered sound of the right in-ear assembly.
  • the audio processor(s) located inside the left and right in-ear assemblies, or housed in a separate structure, may be configured to process and enhance the audio signal received from the left and right preamplifiers, and then transmit the respective processed signals to each of the left and right playback modules located in the left and right in-ear assemblies.
  • the left and right playback modules or transducers are structured and configured to convert the electrical signals into sound waves perceptible by the user, such that the user can then perceive the filtered and enhanced sound from the user’s environment, which includes the “directional” audio data that allows the user to localize the originating sound.
  • the various structures such as but not limited to the preamplifier(s), the audio processor(s), and the playback module(s) may be housed in the in-ear assembly(s) or in a separate interconnecting assembly attached to the in-ear assembly (s).
  • the system of the present invention may comprise an in-ear assembly for each of a user’s ears, as well as an interconnecting member which may further comprise the preamplifier(s), audio processor(s), playback module(s), as well as other appropriate circuitry and components.
  • the interconnecting member may be worn around the neck, and connected to the in-ear assemblies, or in-ear bud assemblies, by wire connections, or may be wireless by use of Bluetooth or other suitable radio-frequency transmission technology.
  • the interconnecting member may be formed from a flexible back section and stiff side sections.
  • the interconnecting member may house a printed circuit board having various componentry, such as the audio processor.
  • the interconnecting member can provide a user with volume control functions, providing a user a level of control with which to mix between environmental audio signals and voice communication signals.
  • the interconnecting member may have a listen mode and mute mode, providing a user with the ability to mute the microphone that receives environmental audio signals, allowing the user to receive and listen to phone calls, thereby providing a means of communication.
  • the interconnecting member can also house a removable battery to charge the apparatus.
  • the present invention lies in hearing protection systems for use in environments where situational awareness is critical.
  • the system of the present invention provides a suitable noise-reduction assembly for the protection of the user’s hearing against loud noises.
  • the microphone assembly allows external audio to be detected and delivered to the user at a safer level than it would otherwise be perceived.
  • the tragus and anti-helix structure allow the “directional” information of ambient noises to be captured and faithfully recreated to the user by way of an HRTF signal.
  • the present invention may be useful in construction sites, where the need for hearing protection and situational awareness is a key component of on-site safety. By utilizing the present invention, users need not sacrifice situational awareness for hearing protection.
  • Figure 1 is a perspective external view of an apparatus for generating a head related audio transfer function.
  • Figure 2 is a perspective internal view of an apparatus for generating a head related audio transfer function.
  • Figure 3 is a block diagram directed to a system for generating a head related audio transfer function.
  • Figure 4A illustrates a side profile view of a wearable device comprising an apparatus for generating a head related audio transfer function.
  • Figure 4B illustrates a front profile view of a wearable device comprising an apparatus for generating a head related audio transfer function.
  • Figure 5 illustrates a flowchart directed to a method for generating a head related audio transfer function.
  • Figure 6 illustrates a schematic of one embodiment of an audio processor according to one embodiment of the present invention.
  • Figure 7 illustrates a schematic of another embodiment of an audio processor according to one embodiment of the present invention.
  • Figure 8 illustrates a block diagram of one method for processing an audio signal with an audio processor according to one embodiment of the present invention.
  • Figure 9 illustrates a block diagram of another method for processing an audio signal with an audio processor according to another embodiment of the present invention.
  • Figure 10 illustrates an external view of a wearable in-ear assembly for hearing enhancement and protection capable of generating a head related audio transfer function for a user.
  • Figure 11 is an interior sectional view of the embodiment of Figure 10.
  • Figure 12 illustrates a top perspective view in partially exploded form of a portion of the embodiment of Figures 10 and 11.
  • Figure 13 is a perspective detail view of a portion of the embodiment of Figures 10 and 11.
  • Figure 14 illustrates a view of an isolation baffle disposed within an interior of the embodiment of Figures 10 and 11.
  • Figure 15 illustrates a stabilizer assembly component to be disposed on an exterior of the embodiment of Figures 10 and 11.
  • Figure 16 illustrates an alternative embodiment of a wearable apparatus for hearing enhancement and protection capable of generating a head related audio transfer function for a user.
  • Figure 17A illustrates an interconnecting member of the embodiment of Figure 16.
  • Figure 17B illustrates a partially exploded view of an interconnecting member of the embodiment of Figure 16.
  • the present invention is directed to an apparatus, system, and method for generating a head related audio transfer function for a user.
  • some embodiments relate to capturing surrounding sound in the external environment in real time, filtering that sound through unique structures formed on the apparatus in order to generate audio positional data, and then processing that sound to enhance and relay the positional audio data to a user, such that the user can determine the origination of the sound in three dimensional space.
  • apparatus 100 for generating a head related audio transfer function for a user, or “HRTF generator”.
  • apparatus 100 comprises an external manifold 110 and an internal manifold 120.
  • the external manifold 110 will be disposed at least partially on an exterior of the apparatus 100.
  • the internal manifold 120 on the other hand, will be disposed along an interior of the apparatus 100.
  • the exterior of the apparatus 100 comprises the external environment, such that the exterior is directly exposed to the air of the surrounding environment.
  • the interior of the apparatus 100 comprises an at least partially sealed-off environment that partially or fully obstructs the direct flow of acoustic waves.
  • the external manifold 110 may comprise a hexahedron shape having six faces. In at least one embodiment, the external manifold 110 is substantially cuboid. The external manifold 110 may comprise at least one surface that is concave or convex, such as an exterior surface exposed to the external environment.
  • the internal manifold 120 may comprise a substantially cylindrical shape, which may be at least partially hollow. The external manifold 110 and internal manifold 120 may comprise sound dampening or sound proof materials, such as various foams, plastics, and glass known to those skilled in the art.
  • the external manifold 110 comprises an antihelix structure 101, a tragus structure 102, and an opening 103 that are externally visible.
  • the opening 103 is in direct air flow communication with the surrounding environment, and as such will receive a flow of acoustic waves or vibrations in the air that passes through the opening 103.
  • the tragus structure 102 is disposed to partially enclose the opening 103.
  • the antihelix structure 101 is disposed to partially enclose both the tragus structure 102 and the opening 103.
  • the antihelix structure 101 comprises a partial dome structure having a closed side 105 and an open side 106.
  • the open side 106 faces the preferred listening direction 104
  • the closed side 105 faces away from the preferred listening direction 104.
  • the tragus structure 102 may also comprise a partial dome structure having a closed side 107 and an open side 108.
  • the open side 108 faces away from the preferred listening direction 104, while the closed side 107 faces towards the preferred listening direction 104.
  • the open side 106 of the antihelix structure 101 may be in direct confronting relation to the open side 108 of the tragus structure 102, regardless of the preferred listening direction 104.
  • Partial dome as defined for the purposes of this document may comprise a half-dome structure or any combination of partial-dome structures.
  • the anti-helix structure 101 of Figure 1 comprises a half-dome
  • the tragus structure 102 comprises a partial-dome wherein the base portion may be less than that of a half-dome, but the top portion may extend to or beyond the halfway point of a half-dome to provide increased coverage or enclosure of the opening 103 and other structures.
  • the top portion and bottom portion of the partial dome may vary in respective dimensions to form varying portions of a full dome structure, in order to create varying coverage of the opening 103. This allows the apparatus to produce different or enhanced acoustic input for calculating direction and distance of the source sound relative to the user.
  • the antihelix structure 101 and tragus structure 102 may be modular, such that different sizes or shapes (variations of different partial domes or partial- domes) may be swapped out based on a user’s preference for particular acoustic characteristics.
  • the opening 103 is connected to, and in air flow communication with, an opening canal 111 inside the external manifold 110.
  • the opening canal 111 is disposed in a substantially perpendicular orientation relative to the desired listening direction 104 of the user.
  • the opening canal 111 is further connected in air flow communication with an auditory canal 121.
  • a portion of the auditory canal 121 may be formed in the external manifold 110.
  • the opening canal 111 and auditory canal 121 may be of a single piece construction.
  • a canal connector (not shown) may be used to connect the two segments.
  • At least a portion of the auditory canal 121 may also be formed within the internal manifold 120.
  • the internal manifold 120 is formed wholly or substantially within an interior of the apparatus, such that it is not exposed directly to the outside air and will not be substantially affected by the external environment.
  • the auditory canal 121 formed within at least a portion of the internal manifold 120 will be disposed in a substantially parallel orientation relative to the desired listening direction 104 of the user.
  • the auditory canal comprises a length that is greater than two times its diameter.
  • a microphone housing 122 is attached to an end of the auditory canal 121.
  • a microphone generally at 123 is mounted against the end of the auditory canal 121.
  • the microphone 123 is mounted flush against the auditory canal 121, such that the connection may be substantially air tight to avoid interference sounds.
  • an air cavity generally at 124 is created behind the microphone and at the end of the internal manifold 120. This may be accomplished by inserting the microphone 123 into the microphone housing 122, and then sealing the end of the microphone housing, generally at 124, with a cap.
  • the cap may be substantially air tight in at least one embodiment. Different gasses having different acoustic characteristics may be used within the air cavity.
  • apparatus 100 may form a part of a larger system 300 as illustrated in Figure 3.
  • a system 300 may comprise a left HRTF generator 100, a right HRTF generator 100’, a left preamplifier 210, a right preamplifier 210’, an audio processor 220, a left playback module 230, and a right playback module 230’.
  • the left and right HRTF generators 100 and 100’ may comprise the apparatus 100 described above, each having unique structures such as the antihelix structure 101 and tragus structure 102. Accordingly, the HRTF generators 100/100’ may be structured to generate a head related audio transfer function for a user, such that the sound received by the HRTF generators 100/100’ may be relayed to the user to accurately communicate position data of the sound. In other words, the HRTF generators 100/100’ may replicate and replace the function of the user’s own left and right ears, where the HRTF generators would collect sound, and perform respective spectral transformations or a filtering process to the incoming sounds to enable the process of vertical localization to take place.
  • a left preamplifier 210 and right preamplifier 210’ may then be used to enhance the filtered sound coming from the HRTF generators, in order to enhance certain acoustic characteristics to improve locational accuracy, or to filter out unwanted noise.
  • the preamplifiers 210/210’ may comprise an electronic amplifier, such as a voltage amplifier, current amplifier, transconductance amplifier, transresistance amplifier and/or any combination of circuits known to those skilled in the art for increasing or decreasing the gain of a sound or input signal.
  • the preamplifier comprises a microphone preamplifier configured to prepare a microphone signal to be processed by other processing modules.
  • microphone signals are sometimes too weak to be transmitted with adequate quality to other units, such as recording or playback devices.
  • a microphone preamplifier thus increases a microphone signal to the line level by providing stable gain while preventing induced noise that might otherwise distort the signal.
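The line-level boost described above amounts to a fixed gain stage. A minimal sketch, assuming an illustrative +40 dB gain figure (the patent does not specify one):

```python
def preamp_gain(samples, gain_db=40.0):
    """Apply a fixed preamplifier gain to a block of samples.
    +40 dB (a factor of 100) is a typical mic-to-line boost, used
    here purely as an illustrative default."""
    g = 10 ** (gain_db / 20.0)  # convert dB gain to a linear multiplier
    return [s * g for s in samples]

# A millivolt-scale microphone signal boosted by 40 dB approaches line level.
boosted = preamp_gain([0.001, -0.002, 0.0015])
```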
  • Audio processor 220 may comprise a digital signal processor and amplifier, and may further comprise a volume control. Audio processor 220 may comprise a processor and combination of circuits structured to further enhance the audio quality of the signal coming from the microphone preamplifier, such as but not limited to shelf filters, equalizers, and modulators. For example, in at least one embodiment the audio processor 220 may comprise a processor that performs the steps for processing a signal as taught by the present inventor’s US Patent No. 8,160,274, the entire disclosure of which is incorporated herein by reference. Audio processor 220 may incorporate various acoustic profiles customized for a user and/or for an environment, such as those described in the present inventor’s US Patent No. 8,565,449, the entire disclosure of which is incorporated herein by reference.
  • Audio processor 220 may additionally incorporate processing suitable for high noise environments, such as those described in the present inventor’s US Patent No. 8,462,963, the entire disclosure of which is incorporated herein by reference. Parameters of the audio processor 220 may be controlled and modified by a user via any means known to one skilled in the art, such as by a direct interface or a wireless communication interface.
  • the left playback module 230 and right playback module 230’ may comprise headphones, earphones, speakers, or any other transducer known to one skilled in the art.
  • the purpose of the left and right playback modules 230/230’ is to convert the electrical audio signal from the audio processor 220 back into perceptible sound for the user.
  • a moving-coil transducer, electrostatic transducer, electret transducer, or other transducer technologies known to one skilled in the art may be utilized.
  • the present system may comprise a device 200 as generally illustrated at Figures 4A and 4B, which may be a wearable headset 200 having the apparatus 100 embedded therein, as well as various amplifiers including but not limited to 210/210’, processors such as 220, playback modules such as 230/230’, and other appropriate circuits or combinations thereof for receiving, transmitting, enhancing, and reproducing sound.
  • a method for generating a head related audio transfer function is shown. Accordingly, external sound is first filtered through at least a tragus structure and an antihelix structure formed along an exterior of an HRTF generator, as in 201, in order to create a filtered sound. Next, the filtered sound is passed through an opening and auditory canal along an interior of the HRTF generator, as in 202, in order to create an input sound. The input sound is received at a microphone embedded within the HRTF generator, as in 203, in order to create an input signal. The input signal is then amplified with a preamplifier, as in 204, in order to create an amplified signal. The amplified signal is processed with an audio processor, as in 205, in order to create a processed signal. Finally, the processed signal is transmitted to a playback module, as in 206, in order to relay the audio and/or locational audio data to the user.
  • the method of Figure 5 may perform the locational audio capture and transmission to a user in real time. This facilitates usage in a hearing assistance situation, such as a hearing aid for a user with impaired hearing. This also facilitates usage in a high noise environment, such as to filter out noises and/or enhancing human speech.
  • the method of Figure 5 may further comprise a calibration process, such that each user can replicate his or her unique HRTF in order to provide for accurate localization of a sound in three dimensional space.
  • the calibration may comprise adjusting the antihelix and tragus structures as described above, which may be formed of modular and/or moveable components.
  • the antihelix and/or tragus structure may be repositioned, and/or differently shaped and/or sized structures may be used.
  • the audio processor 220 described above may be further calibrated to adjust the acoustic enhancement of certain sound waves relative to other sound waves and/or signals.
  • an audio processor 220 is represented schematically as a system 1000.
  • Figure 6 illustrates at least one preferred embodiment of a system 1000
  • Figure 7 provides examples of several subcomponents and combinations of subcomponents of the modules of Figure 6.
  • the systems 1000 and 3000 generally comprise an input device 1010 (such as the left preamplifier 210 and/or right preamplifier 210’), a high pass filter 1110, a first filter module 3010, a first compressor 1140, a second filter module 3020, a first processing module 3030, a band splitter 1190, a low band compressor 1300, a high band compressor 1310, a second processing module 3040, and an output device 1020.
  • the input device 1010 is at least partially structured or configured to transmit an input audio signal 2010, such as an amplified signal from a left or right preamplifier 210, 210’, into the system 1000 of the present invention, and in at least one embodiment into the high pass filter 1110.
  • the high pass filter 1110 is configured to pass through high frequencies of an audio signal, such as the input signal 2010, while attenuating lower frequencies, based on a predetermined frequency.
  • the frequencies above the predetermined frequency may be transmitted to the first filter module 3010 in accordance with the present invention.
  • ultra-low frequency content is removed from the input audio signal, where the predetermined frequency may be selected from a range between 300 Hz and 3 kHz.
  • the predetermined frequency may vary depending on the source signal, and vary in other embodiments to comprise any frequency selected from the full audible range of frequencies between 20 Hz to 20 kHz.
  • the predetermined frequency may be tunable by a user, or alternatively be statically set.
  • the high pass filter 1110 may further comprise any circuits or combinations thereof structured to pass through high frequencies above a predetermined frequency, and attenuate or filter out the lower frequencies.
  • the first filter module 3010 is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the high pass signal 2110. For example, and in at least one embodiment, frequencies below a first frequency may be adjusted by ±X dB, while frequencies above the first frequency may be adjusted by ±Y dB. In other embodiments, a plurality of frequencies may be used to selectively adjust the gain of various frequency ranges within an audio signal.
  • the first filter module 3010 may be implemented with a first low shelf filter 1120 and a first high shelf filter 1130, as illustrated in Figure 6. The first low shelf filter 1120 and first high shelf filter 1130 may both be second-order filters.
  • the first low shelf filter 1120 attenuates content below a first frequency, and the first high shelf filter 1130 boosts content above the first frequency.
  • the frequency used for the first low shelf filter 1120 and first high shelf filter 1130 may comprise two different frequencies. The frequencies may be static or adjustable. Similarly, the gain adjustment (boost or attenuation) may be static or adjustable.
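The shelf-filter behavior described above can be sketched with the widely used Audio EQ Cookbook second-order low-shelf form; this is a standard design used here for illustration, not necessarily the patent's exact coefficients, and the 1 kHz / -10 dB values are assumptions:

```python
import math
import cmath

def low_shelf(fs, f0, gain_db, S=1.0):
    """Second-order low-shelf biquad coefficients (Audio EQ Cookbook
    form): content below f0 is boosted or attenuated by gain_db,
    while content well above f0 is left at unity gain."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    c, sqA = math.cos(w0), math.sqrt(A)
    b = (A * ((A + 1) - (A - 1) * c + 2 * sqA * alpha),
         2 * A * ((A - 1) - (A + 1) * c),
         A * ((A + 1) - (A - 1) * c - 2 * sqA * alpha))
    a = ((A + 1) + (A - 1) * c + 2 * sqA * alpha,
         -2 * ((A - 1) + (A + 1) * c),
         (A + 1) + (A - 1) * c - 2 * sqA * alpha)
    return b, a

def mag_db(b, a, w):
    """Magnitude response (dB) of one biquad at normalized radian
    frequency w, evaluated on the unit circle."""
    z = cmath.exp(-1j * w)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# A -10 dB low shelf at 1 kHz (illustrative values): DC content sits
# 10 dB down, while content near the top of the band is untouched.
b, a = low_shelf(48000, 1000, -10.0)
```

A high shelf is constructed analogously with the cookbook's high-shelf coefficient set, swapping which side of f0 receives the gain adjustment.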
  • the first compressor 1140 is configured to modulate a signal, such as the first filtered signal 4010.
  • the first compressor 1140 may comprise an automatic gain controller.
  • the first compressor 1140 may comprise standard dynamic range compression controls such as threshold, ratio, attack and release. Threshold allows the first compressor 1140 to reduce the level of the first filtered signal 4010 if its amplitude exceeds a certain threshold. Ratio allows the first compressor 1140 to reduce the gain as determined by a ratio. Attack and release determine how quickly the first compressor 1140 acts.
  • the attack phase is the period when the first compressor 1140 is decreasing gain to reach the level that is determined by the threshold.
  • the release phase is the period when the first compressor 1140 is increasing gain to the level determined by the ratio.
  • the first compressor 1140 may also feature soft and hard knees to control the bend in the response curve of the output or modulated signal 2140, and other dynamic range compression controls appropriate for the dynamic compression of an audio signal.
  • the first compressor 1140 may further comprise any device or combination of circuits that is structured and configured for dynamic range compression.
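The threshold/ratio/attack/release controls described above can be sketched as a simple feed-forward compressor; all parameter values here are illustrative assumptions, not figures from the patent:

```python
def compress(samples, threshold=0.5, ratio=4.0,
             attack_coeff=0.9, release_coeff=0.999):
    """Illustrative feed-forward compressor. Above the threshold, the
    target gain keeps only 1/ratio of the excess level; the applied
    gain then glides toward that target, quickly during attack (gain
    falling) and slowly during release (gain rising)."""
    gain, out = 1.0, []
    for s in samples:
        level = abs(s)
        if level > threshold:
            # keep the excess over the threshold at 1/ratio of its size
            target = (threshold + (level - threshold) / ratio) / level
        else:
            target = 1.0
        # smaller coefficient -> faster movement toward the target
        coeff = attack_coeff if target < gain else release_coeff
        gain = coeff * gain + (1.0 - coeff) * target
        out.append(s * gain)
    return out
```

With a 0.5 threshold and 4:1 ratio, a sustained full-scale input settles toward an output level of 0.5 + 0.5/4 = 0.625.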
  • the second filter module 3020 is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the modulated signal 2140.
  • the second filter module 3020 is of the same configuration as the first filter module 3010.
  • the second filter module 3020 may comprise a second low shelf filter 1150 and a second high shelf filter 1160.
  • the second low shelf filter 1150 may be configured to filter signals between 100 Hz and 3000 Hz, with an attenuation of between -5 dB and -20 dB.
  • the second high shelf filter 1160 may be configured to filter signals between 100 Hz and 3000 Hz, with a boost of between +5 dB and +20 dB.
  • the second filter module 3020 may be configured in at least a partially inverse configuration to the first filter module 3010. For instance, the second filter module may use the same frequency (the first frequency) as the first filter module. Further, the second filter module may adjust the gain of content above the first frequency inversely to the gain or attenuation applied by the first filter module. Similarly, the second filter module may adjust the gain of content below the first frequency inversely to the gain or attenuation applied by the first filter module. In other words, the purpose of the second filter module in one embodiment may be to “undo” the gain adjustment that was applied by the first filter module.
  • the first processing module 3030 is configured to process a signal, such as the second filtered signal 4020.
  • the first processing module 3030 may comprise a peak/dip module, such as 1180 represented in Figure 7.
  • the first processing module 3030 may comprise a first gain element 1170.
  • the processing module 3030 may comprise both a first gain element 1170 and a peak/dip module 1180 for the processing of a signal.
  • the first gain element 1170 in at least one embodiment, may be configured to adjust the level of a signal by a static amount.
  • the first gain element 1170 may comprise an amplifier or a multiplier circuit. In other embodiments, dynamic gain elements may be used.
  • the peak/dip module 1180 is configured to shape the desired output spectrum, such as to increase or decrease overshoots or undershoots in the signal. In some embodiments, the peak/dip module may further be configured to adjust the slope of a signal, for instance a gradual slope that gives a smoother response, or alternatively a steeper slope for more sudden sounds. In at least one embodiment, the peak/dip module 1180 comprises a bank of ten cascaded peaking/dipping filters. The bank of ten cascaded peaking/dipping filters may further comprise second-order filters. In at least one embodiment, the peak/dip module 1180 may comprise an equalizer, such as a parametric or graphic equalizer.
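A bank of cascaded second-order peaking/dipping sections can be sketched with the standard Audio EQ Cookbook peaking-EQ form; the center frequencies, gains, and Q below are illustrative assumptions, not the patent's tuning:

```python
import math
import cmath

def peak_dip(fs, f0, gain_db, Q=1.0):
    """Second-order peaking/dipping biquad (Audio EQ Cookbook form):
    boosts (or dips) content around f0 by gain_db, approaching unity
    gain away from f0."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * Q)
    b = (1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A)
    a = (1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A)
    return b, a

def cascade_mag_db(stages, w):
    """Magnitude (dB) at normalized radian frequency w of a cascade
    of biquad stages, each given as a (b, a) coefficient pair."""
    z = cmath.exp(-1j * w)
    h = 1.0
    for b, a in stages:
        h *= (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# An illustrative ten-section bank shaping the output spectrum.
fs = 48000
bank = [peak_dip(fs, f0, g) for f0, g in
        [(250, 3.0), (500, -2.0), (1000, 4.0), (2000, -1.5), (4000, 2.0),
         (6000, -3.0), (8000, 1.0), (10000, -2.5), (12000, 2.5), (16000, -1.0)]]
```

Each section hits its specified gain exactly at its center frequency and contributes no net gain at DC or Nyquist, so the cascade's overall shape is the sum (in dB) of the individual peaks and dips.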
  • the band splitter 1190 is configured to split a signal, such as the processed signal 4030.
  • the signal is split into a low band signal 2200, a mid band signal 2210, and a high band signal 2220.
  • Each band may be the output of a fourth order section, which may be further realized as the cascade of second order biquad filters.
  • the band splitter may comprise any combination of circuits appropriate for splitting a signal into three frequency bands.
  • the low, mid, and high bands may be predetermined ranges, or may be dynamically determined based on the frequency itself, i.e. a signal may be split into three even frequency bands, or by percentage.
  • the different bands may further be defined or configured by a user and/or control mechanism.
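The three-way split into fourth-order sections can be sketched as cascades of second-order Butterworth biquads; the crossover frequencies here (300 Hz and 3 kHz) and the Butterworth alignment are assumptions for illustration, not the patent's design:

```python
import math
import cmath

def butter2(fs, f0, kind):
    """Second-order Butterworth biquad (Q = 1/sqrt(2)), lowpass
    ("lp") or highpass ("hp")."""
    w0 = 2 * math.pi * f0 / fs
    c, alpha = math.cos(w0), math.sin(w0) / math.sqrt(2.0)
    if kind == "lp":
        b = ((1 - c) / 2, 1 - c, (1 - c) / 2)
    else:
        b = ((1 + c) / 2, -(1 + c), (1 + c) / 2)
    a = (1 + alpha, -2 * c, 1 - alpha)
    return b, a

def band_splitter(fs, f_low=300.0, f_high=3000.0):
    """Three fourth-order sections, each realized as a cascade of
    second-order biquads: low band below f_low, mid band between the
    crossovers, high band above f_high."""
    lo = [butter2(fs, f_low, "lp"), butter2(fs, f_low, "lp")]
    mid = [butter2(fs, f_low, "hp"), butter2(fs, f_high, "lp")]
    hi = [butter2(fs, f_high, "hp"), butter2(fs, f_high, "hp")]
    return lo, mid, hi

def cascade_gain(stages, w):
    """Linear magnitude of a biquad cascade at normalized frequency w."""
    z = cmath.exp(-1j * w)
    h = 1.0
    for b, a in stages:
        h *= (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return abs(h)
```

Each band's response can be checked at the spectrum edges: the low band passes DC and rejects Nyquist, the high band does the opposite, and the mid band rejects both extremes.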
  • a low band compressor 1300 is configured to modulate the low band signal 2200
  • a high band compressor 1310 is configured to modulate the high band signal 2220.
  • each of the low band compressor 1300 and high band compressor 1310 may be the same as the first compressor 1140. Accordingly, the low band compressor 1300 and the high band compressor 1310 may each be configured to modulate a signal.
  • Each of the compressors 1300, 1310 may comprise an automatic gain controller, or any combination of circuits appropriate for the dynamic range compression of an audio signal.
  • a second processing module 3040 is configured to process at least one signal, such as the modulated low band signal 2300, the mid band signal 2210, and the modulated high band signal 2310.
  • the second processing module 3040 may comprise a summing module 1320 configured to combine a plurality of signals.
  • the summing module 1320 may comprise a mixer structured to combine two or more signals into a composite signal.
  • the summing module 1320 may comprise any circuits or combination thereof structured or configured to combine two or more signals.
  • the summing module 1320 comprises individual gain controls for each of the incoming signals, such as the modulated low band signal 2300, the mid band signal 2210, and the modulated high band signal 2310.
  • the second processing module 3040 may further comprise a second gain element 1330.
  • the second gain element 1330 in at least one embodiment, may be the same as the first gain element 1170.
  • the second gain element 1330 may thus comprise an amplifier or multiplier circuit to adjust the signal, such as the combined signal, by a predetermined amount.
  • the output device 1020 may comprise the left playback module 230 and/or right playback module 230’.
  • Figure 8 illustrates a block diagram of one method for processing an audio signal with an audio processor 220, which may in at least one embodiment incorporate the components or combinations thereof from the systems 1000 and/or 3000 referenced above.
  • Each step of the method in Figure 8 as detailed below may also be in the form of a code segment stored on a non-transitory computer readable medium for execution by the audio processor 220.
  • an input audio signal, such as the amplified signal, is first filtered, as in 5010, with a high pass filter to create a high pass signal.
  • the high pass filter is configured to pass through high frequencies of a signal, such as the input signal, while attenuating lower frequencies.
  • ultra-low frequency content is removed by the high pass filter.
  • the high pass filter may comprise a fourth-order filter realized as the cascade of two second-order biquad sections. The reason for using a fourth order filter broken into two second order sections is that it allows the filter to retain numerical precision in the presence of finite word length effects, which can happen in both fixed and floating point implementations.
  • An example implementation of such an embodiment may assume a form similar to the following:
  • Two memory locations are allocated, designated as d(k-1) and d(k-2), with each holding a quantity known as a state variable.
  • At each sample k, the new state is computed as d(k) = x(k) - a1 * d(k-1) - a2 * d(k-2).
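The state-variable recursion above, completed with the standard direct form II output equation y(k) = b0*d(k) + b1*d(k-1) + b2*d(k-2), can be sketched as a cascade of two second-order sections. The Butterworth coefficient formulas and the 300 Hz corner are illustrative assumptions, not the patent's exact design:

```python
import math

class Biquad:
    """Direct form II second-order section. Two memory locations hold
    the state variables d(k-1) and d(k-2):
        d(k) = x(k) - a1*d(k-1) - a2*d(k-2)
        y(k) = b0*d(k) + b1*d(k-1) + b2*d(k-2)
    """
    def __init__(self, b, a):
        self.b, self.a = b, a   # b = (b0, b1, b2), a = (a1, a2), a0 = 1
        self.d1 = self.d2 = 0.0

    def step(self, x):
        d = x - self.a[0] * self.d1 - self.a[1] * self.d2
        y = self.b[0] * d + self.b[1] * self.d1 + self.b[2] * self.d2
        self.d1, self.d2 = d, self.d1
        return y

def highpass4(fs, f0):
    """Fourth-order high pass realized as a cascade of two identical
    second-order Butterworth highpass biquads, retaining numerical
    precision better than one direct fourth-order realization."""
    w0 = 2 * math.pi * f0 / fs
    c, alpha = math.cos(w0), math.sin(w0) / math.sqrt(2.0)
    a0 = 1 + alpha
    b = ((1 + c) / (2 * a0), -(1 + c) / a0, (1 + c) / (2 * a0))
    a = ((-2 * c) / a0, (1 - alpha) / a0)
    return [Biquad(b, a), Biquad(b, a)]

def process(sections, samples):
    """Run each sample through the cascade of sections in order."""
    out = []
    for x in samples:
        for s in sections:
            x = s.step(x)
        out.append(x)
    return out
```

Feeding a constant (DC) input demonstrates the high pass action: the initial step edge passes almost unchanged, and the sustained DC component decays toward zero.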
  • the high pass signal from the high pass filter is then filtered, as in 5020, with a first filter module to create a first filtered signal.
  • the first filter module is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the high pass signal.
  • the first filter module may comprise a second order low shelf filter and a second order high shelf filter in at least one embodiment.
  • the first filter module boosts the content above a first frequency by a certain amount, and attenuates the content below a first frequency by a certain amount, before presenting the signal to a compressor or dynamic range controller. This allows the dynamic range controller to trigger and adjust higher frequency material, whereas it is relatively insensitive to lower frequency material.
  • the first filtered signal from the first filter module is then modulated, as in 5030, with a first compressor.
  • the first compressor may comprise an automatic or dynamic gain controller, or any circuits appropriate for the dynamic compression of an audio signal. Accordingly, the compressor may comprise standard dynamic range compression controls such as threshold, ratio, attack and release.
  • An example implementation of the first compressor may assume a form similar to the following:
  • the ratio of the signal’s level to invThr then determines the next step. If the ratio is less than one, the signal is passed through unaltered.
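The invThr comparison described above can be sketched as a static gain curve. Two assumptions are made here: that invThr is the reciprocal of the threshold (so the comparison is the product of the level and invThr), and that overshoot is reduced with a standard dB-linear compression law; neither is necessarily the patent's exact code:

```python
def compressor_gain(level, inv_thr, ratio):
    """Static compressor curve sketched around the invThr quantity.
    Levels below the threshold (product < 1) pass unaltered; above
    it, the overshoot is reduced according to the ratio."""
    over = level * inv_thr          # < 1.0 means below the threshold
    if over < 1.0:
        return 1.0                  # pass through unaltered
    # in dB terms: out_dB = thr_dB + (in_dB - thr_dB) / ratio
    return over ** (1.0 / ratio - 1.0)
```

For example, with a threshold of 0.5 (inv_thr = 2.0) and a 2:1 ratio, an input level of 2.0 is 4x over threshold and is returned at 2x over threshold, i.e. a gain of 0.5.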
  • the modulated signal from the first compressor is then filtered, as in 5040, with a second filter module to create a second filtered signal.
  • the second filter module is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the modulated signal.
  • the second filter module may comprise a second order low shelf filter and a second order high shelf filter in at least one embodiment.
  • the second filter module boosts the content above a second frequency by a certain amount, and attenuates the content below a second frequency by a certain amount.
  • the second filter module adjusts the content below the first specified frequency by a fixed amount, inverse to the amount that was removed by the first filter module.
  • the second filter module may then attenuate the content above the first frequency by -X dB, and boost the content below the first frequency by +Y dB.
  • the purpose of the second filter module in one embodiment may be to “undo” the filtering that was applied by the first filter module.
  • the second filtered signal from the second filter module is then processed, as in 5050, with a first processing module to create a processed signal.
  • the processing module may comprise a gain element configured to adjust the level of the signal. This adjustment, for instance, may be necessary because the peak-to-average ratio was modified by the first compressor.
  • the processing module may comprise a peak/dip module.
  • the peak/dip module may comprise ten cascaded second-order filters in at least one embodiment.
  • the peak/dip module may be used to shape the desired output spectrum of the signal.
  • the first processing module comprises only the peak/dip module.
  • the first processing module comprises a gain element followed by a peak/dip module.
  • the processed signal from the first processing module is then split, as in 5060, with a band splitter into a low band signal, a mid band signal, and a high band signal.
  • the band splitter may comprise any circuit or combination of circuits appropriate for splitting a signal into a plurality of signals of different frequency ranges.
  • the band splitter comprises a fourth-order band-splitting bank.
  • each of the low band, mid band, and high band are yielded as the output of a fourth-order section, realized as the cascade of second-order biquad filters.
  • the low band signal is modulated, as in 5070, with a low band compressor to create a modulated low band signal.
  • the low band compressor may be configured and/or computationally identical to the first compressor in at least one embodiment.
  • the high band signal is modulated, as in 5080, with a high band compressor to create a modulated high band signal.
  • the high band compressor may be configured and/or computationally identical to the first compressor in at least one embodiment.
  • the modulated low band signal, mid band signal, and modulated high band signal are then processed, as in 5090, with a second processing module.
  • the second processing module comprises at least a summing module.
  • the summing module is configured to combine a plurality of signals into one composite signal.
  • the summing module may further comprise individual gain controls for each of the incoming signals, such as the modulated low band signal, the mid band signal, and the modulated high band signal.
  • the coefficients w0, w1, and w2 represent different gain adjustments.
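The summing module with its per-band gain controls reduces to a weighted sample-by-sample sum; a minimal sketch, with w0, w1, w2 as the per-band gains:

```python
def summing_module(low, mid, high, w0=1.0, w1=1.0, w2=1.0):
    """Combine the three band signals sample-by-sample into one
    composite signal, applying an individual gain control (w0 for
    the low band, w1 for the mid band, w2 for the high band)."""
    return [w0 * l + w1 * m + w2 * h for l, m, h in zip(low, mid, high)]
```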
  • the second processing module may further comprise a second gain element.
  • the second gain element may be the same as the first gain element in at least one embodiment.
  • the second gain element may provide a final gain adjustment.
  • the second processed signal is transmitted as the output signal.
  • Figure 9 illustrates a block diagram of one method for processing an audio signal with an audio processor 220, which may in at least one embodiment incorporate the components or combinations thereof from the systems 1000 and/or 3000 referenced above. Because the individual components of Figure 9 have been discussed in detail above, they will not be discussed here. Further, each step of the method in Figure 9 as detailed below may also be in the form of a code segment directed to at least one embodiment of the present invention, which is stored on a non-transitory computer readable medium, for execution by the audio processor 220 of the present invention.
  • an input audio signal is first filtered, as in 5010, with a high pass filter.
  • the high pass signal from the high pass filter is then filtered, as in 6010, with a first low shelf filter.
  • the signal from the first low shelf filter is then filtered with a first high shelf filter, as in 6020.
  • the first filtered signal from the first low shelf filter is then modulated with a first compressor, as in 5030.
  • the modulated signal from the first compressor is filtered with a second low shelf filter as in 6110.
  • the signal from the low shelf filter is then filtered with a second high shelf filter, as in 6120.
  • the second filtered signal from the second low shelf filter is then gain-adjusted with a first gain element, as in 6210.
  • the signal from the first gain element is further processed with a peak/dip module, as in 6220.
  • the processed signal from the peak/dip module is then split into a low band signal, a mid band signal, and a high band signal, as in 5060.
  • the low band signal is modulated with a low band compressor, as in 5070.
  • the high band signal is modulated with a high band compressor, as in 5080.
  • the modulated low band signal, mid band signal, and modulated high band signal are then combined with a summing module, as in 6310.
  • the combined signal is then gain adjusted with a second gain element in order to create the output signal, as in 6320.
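Taken together, the steps of Figure 9 form a fixed processing chain. A structural sketch of the wiring (each stage is passed in as a function; real implementations of the filters, compressors, and gain elements would be substituted for the identity placeholders used in the example):

```python
def figure9_chain(x, highpass, low_shelf1, high_shelf1, comp1,
                  low_shelf2, high_shelf2, gain1, peak_dip,
                  split3, comp_low, comp_high, sum3, gain2):
    """Run a block of samples through the Figure 9 stages in order:
    serial filtering and compression, a three-way band split,
    per-band compression, then summing and a final gain element."""
    x = highpass(x)                                # 5010
    x = comp1(high_shelf1(low_shelf1(x)))          # 6010, 6020, 5030
    x = gain1(high_shelf2(low_shelf2(x)))          # 6110, 6120, 6210
    x = peak_dip(x)                                # 6220
    low, mid, high = split3(x)                     # 5060
    x = sum3(comp_low(low), mid, comp_high(high))  # 5070, 5080, 6310
    return gain2(x)                                # 6320

# Exercising the wiring end to end with trivial stand-in stages:
ident = lambda v: v
split = lambda v: (v, v, v)
add3 = lambda a, b, c: [x + y + z for x, y, z in zip(a, b, c)]
result = figure9_chain([1.0, 2.0], ident, ident, ident, ident,
                       ident, ident, ident, ident, split,
                       ident, ident, add3, ident)
```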
  • Figure 10 illustrates a wearable apparatus for hearing enhancement and protection, capable of generating a head related audio transfer function (HRTF) for a user, comprising at least one in-ear assembly 400.
  • the in-ear assembly 400 is structured to be disposed inside and/or partially outside of at least one of a user’s ears, when in an operative position, or when operatively positioned.
  • One purpose of the in-ear assembly 400 is to capture sound from a user’s external environment in real time, filter the sound through the unique structures formed on and in the in-ear assembly 400 in order to generate audio positional or directional data, process the sound to enhance the quality of the audio positional data, enhance and amplify the sound by means of various preamplifiers, and relay the audio positional data to a user by means of a playback module, speaker, or a variety of other transducers, allowing the user to effectively determine the origination of the sound in three dimensional space.
  • the in-ear assembly 400 comprises at least one chamber, shell, or chassis, which houses the various structures on the interior of the in-ear assembly 400, and provides exterior surfaces to house the structures that mimic the functions of a human ear for generating a head related audio transfer function (“HRTF”).
  • the in-ear assembly 400 comprises at least a primary chamber 403 and a secondary chamber 406.
  • the primary chamber 403 is situated proximally to a user’s ear and the secondary chamber 406 is located distally to a user’s ear when the in-ear assembly 400 is worn by a user.
  • the exterior, or outside surface, of the secondary chamber 406 of the in-ear assembly 400 will be at least partially open or exposed to the external environment, providing a means for the in-ear assembly 400 to receive sound, captured by a microphone 415.
  • the interior of the in-ear assembly 400 comprises at least a partially sealed off environment that partially or fully obstructs the direct flow of acoustic waves, ensuring that noise interference from the external environment will not impede the quality of the audio input received by the microphone 415.
  • the microphone 415 will relay the audio input sound to a playback module 230, which will transmit the audio output sound to a user by means of an auditory channel 428 connected to a user’s ear(s) in an operative position.
  • the secondary chamber 406 and the primary chamber 403 may comprise sound dampening or sound proof materials such as, but not limited to, various foams, plastics, and glass.
  • the primary chamber 403 and the secondary chamber 406 can be made of a hard, strong plastic or a plurality of other materials.
  • the exterior surface of the secondary chamber 406 comprises at least an antihelix structure 101, a tragus structure 102, and a microphone aperture 409.
  • the microphone aperture 409 is in direct air flow communication with the surrounding environment, and as such will receive a flow of acoustic sound waves or vibrations in the air that are filtered and passed through the antihelix structure 101 and the tragus structure 102.
  • the antihelix structure 101 and the tragus structure 102 mimic the function of the external part of the human ear, the pinna, which assist and act as a funnel in directing and filtering the sound or audio input into the microphone aperture 409, through the microphone channel 412, and received into the microphone 415.
  • the in-ear assembly 400 may also include a preamplifier 210, as schematically illustrated in Figure 3, to amplify the filtered audio input signal, as well as an audio processor 220, also illustrated in Figure 3, to process the amplified signal, and create a processed signal to be received by the playback module 230’, which will communicate the audio and/or locational audio data to the user.
• the tragus structure 102 is disposed to partially enclose the microphone aperture 409, and the antihelix structure 101 is disposed to partially enclose both the tragus structure 102 and the microphone aperture 409.
• the antihelix structure 101 comprises a partial dome structure having a closed side 105 and an open side 106.
• the tragus structure 102 may also comprise an at least partial dome structure having a closed side 107 and an open side 108.
  • the open side 106 of the antihelix structure 101 may be in direct confronting relation to the open side 108 of the tragus structure 102.
• the antihelix structure 101 of Figures 11 and 12 comprises a half-dome.
• the tragus structure 102 comprises a partial dome wherein the base portion may be less than that of a half-dome, but the top portion may extend to or beyond the halfway point of a half-dome to provide increased coverage or enclosure of the microphone aperture 409 and other structures.
  • the top portion and bottom portion of the partial dome may vary in respective dimensions to form varying portions of a full dome structure, in order to create varying coverage of the microphone aperture 409. This allows the in-ear assembly 400 to produce different or enhanced acoustic input for calculating direction and distance of the source sound relative to the user.
  • the antihelix structure 101 and the tragus structure 102 may be modular, such that different sizes or shapes (variations of different partial domes) may be swapped out based on a user’s preference for particular acoustic characteristics.
  • a windscreen structure 418 may be disposed on the exterior surface of the secondary chamber 406 of the in-ear assembly 400.
  • the windscreen structure 418 provides a mechanism to reduce unwanted noise and wind interference from the external environment, enhancing and filtering the quality of the incoming sound or audio input signal to be received by the in-ear assembly 400.
• the exterior surface of the secondary chamber 406 can comprise a plurality of windscreen attachment regions 424/424’ to connect the windscreen structure 418, which comprises a plurality of windscreen connectors 425/425’, providing the ability to attach and remove the windscreen structure on the exterior of the in-ear assembly 400.
  • the windscreen structure 418 further comprises or houses an open-cell foam component 421, or a variety of other materials, which will together reduce noise interference from being received by the in-ear assembly 400.
  • the windscreen structure 418 comprising the open-cell foam 421 can be disposed to partially or fully cover the antihelix stmcture 101, the tragus structure 102, and the microphone aperture 409.
• the windscreen structure 418 can be configured into a variety of shapes.
• the windscreen structure 418 may take on a square shape with rounded edges and an open, hexagon-like structure providing a plurality of open slots, which may vary in number, such as six open slots.
• the open-cell foam 421 housed within can receive and filter noise disturbances, and transmit a higher quality sound to the antihelix structure 101, the tragus structure 102, and the microphone aperture 409, down into the microphone channel 412, and into the microphone 415.
  • the windscreen structure 418 can be made of a variety of materials, including a strong, flexible plastic, which can also provide protection to the underlying structures on the exterior of the in-ear assembly 400.
• the windscreen structure 418 comprises windscreen connector structures 425 and 425’, which snap into the windscreen attachment regions 424 and 424’ on the exterior of the secondary chamber 406, and extend inside the secondary chamber 406 of the in-ear assembly 400.
• the windscreen attachment areas 424 and 424’, and the windscreen connector structures 425 and 425’, are sealed off and physically isolated from the microphone manifold 408, which comprises the microphone aperture 409, the microphone channel 412, the microphone 415, and the microphone housing 416, as well as from the playback module 230 and the other structures inside the in-ear assembly 400.
• the isolation and sealed environment ensure that noise disturbances are reduced and do not interfere with the audio input received by the microphone 415 or the sound output transmitted by the playback module 230 to the user. Additionally, the windscreen structure 418 can be removed, allowing a user to replace the open-cell foam 421 with substitute materials as desired. Similarly, as depicted in Figure 12, the antihelix structure 101 and the tragus structure 102 on the exterior of the secondary chamber 406 of the in-ear assembly 400 can be removed and swapped out with antihelix and tragus structures of different sizes and shapes to provide a user with different acoustic characteristics as desired.
  • a microphone manifold 408 is an independent structure embedded within the in-ear assembly 400, comprising at least the microphone aperture 409, the microphone channel 412, the microphone 415, and the microphone housing 416.
  • the microphone manifold 408 may reside wholly within the secondary chamber 406, or may also extend into the primary chamber 403.
  • the microphone aperture 409 is exposed to the external environment, providing a means of receiving a sound signal or audio input, and is connected to and in air flow communication with, the microphone channel 412.
  • the microphone channel 412 comprises a length that is at least two times its diameter. In one embodiment, the microphone channel 412 comprises a length that is three times its diameter.
  • the microphone channel 412 is connected to the microphone 415, providing a means of communicating the sound signals and audio input received from the external environment to the microphone 415, which may be housed in a microphone housing 416.
  • the microphone manifold 408 isolates the microphone channel 412 and the microphone 415 within the interior of the in-ear assembly 400, ensuring that the microphone 415 receives undisturbed sound and acoustic signals that funnel at least through the microphone aperture 409.
  • the microphone 415 can also be housed within a microphone housing 416, further isolating the microphone 415 within the interior of the in-ear assembly 400.
• the microphone channel 412 can be disposed in a substantially parallel orientation relative to the desired listening direction 104 of the user when the in-ear assembly 400 is worn by a user, as generally illustrated in Figure 10. In other embodiments, the microphone channel 412 can be disposed in a substantially perpendicular orientation relative to the listening direction 104 of the user. Similarly, the microphone 415 can be disposed in a substantially parallel orientation relative to the desired listening direction 104 of the user, or in a substantially perpendicular orientation when the in-ear assembly is worn by a user. However, the microphone channel 412 and microphone 415 can be disposed in various orientations, independent of the listening direction 104 of the user. The microphone 415 may be mounted flush on an end of the microphone manifold 408. In a preferred embodiment, an air cavity or gap 417 is situated between the microphone 415 and an end of the microphone manifold 408. Different gases having different acoustic characteristics may be used within the air cavity.
  • the microphone 415 can be connected directly to the playback module 230, or speaker, housed within the primary chamber 403, or more generally in the interior of the in-ear assembly 400.
  • the microphone 415 may be connected to the playback module 230 by means of a connective wire 430, or by a variety of means to allow communication between the microphone 415 and the playback module 230.
• the microphone 415 receives audio input from the external environment, which is communicated to the playback module 230; the playback module converts the audio input into a sound or audio output that is relayed through the auditory channel 428, connected to an ear of the user, allowing the user to effectively determine the origination of the sound in three-dimensional space.
  • an isolation baffle 431 physically isolates the microphone 415 from the playback module 230 in order to prevent feedback noise during operation of the in-ear assembly 400.
  • the isolation baffle 431 can achieve a 30 decibel or greater noise isolation between the microphone 415 and the playback module 230.
  • the isolation baffle 431 achieves the goal of ensuring that the sound pressure or output of the playback module will not interfere with the microphone’s 415 ability to effectively receive undisturbed sound input from the environment.
  • the isolation baffle 431 allows a user to effectively receive undisturbed sound output from the playback module 230, allowing the user to effectively pinpoint the origination of sound from the external environment.
• the isolation baffle 431 can comprise a single piece of a strong, flexible plastic.
• the isolation baffle 431 may traverse the length and width of the in-ear assembly 400, and connect to the inside surface of the top of the in-ear assembly 400, or specifically the inside surface of the secondary chamber 406.
• the isolation baffle 431 also comprises an isolation post 434 that connects to a cylindrical structure 435 attached to the primary chamber 403.
  • the isolation baffle 431 may comprise interconnecting units of a variety of materials to achieve the desired isolation between the microphone 415 and the playback module 230.
• the playback module 230 resides in the primary chamber 403 of the in-ear assembly 400.
  • the playback module 230 is connected to an auditory channel 428, which resides in a user’s ear, in the operative position, to communicate the audio output to the user.
  • the playback module 230 converts the electrical audio input signal received from the microphone 415 and various structures, such as the preamplifier 210 and the audio processor 220, producing audio output data, which travels through the auditory channel 428 to the user.
  • a stabilizer assembly 437 can be attached to the exterior of the in-ear assembly 400, or the exterior of the primary chamber 403 of the in-ear assembly, to stabilize the in-ear assembly 400 and the various structures in the proper orientation, when in the user’s ear, the operative position, as represented in Figure 10.
• the stabilizer assembly 437 ensures that the antihelix structure 101, tragus structure 102, and the other structures on the exterior of the secondary chamber 406 of the in-ear assembly 400 are facing the listening direction 104 of the user.
  • the stabilizer assembly 437 provides the support to keep the microphone manifold 408 in a substantially parallel direction to the listening direction 104 of the user.
• the stabilizer assembly 437 comprises a circular collar structure 440, which in the preferred embodiment is attached to an exterior portion of the primary chamber 403, and a concha-shaped structure 443 connected to the circular collar structure 440, which is situated comfortably within the outside portion of a user’s ear.
  • the stabilizer assembly 437 properly fixes the in-ear assembly 400 on a user’s ear and restricts movement of the in-ear assembly to facilitate proper orientation.
  • the at least one in-ear assembly 400 also comprises the previously mentioned preamplifier 210 and audio processor 220, as schematically illustrated in Figure 3.
  • the preamplifier 210 can enhance the sound filtered through the in-ear assembly, enhancing certain acoustic characteristics to improve locational accuracy, or to further filter out unwanted noise.
  • the preamplifier 210 may comprise an electronic amplifier, such as a voltage amplifier, current amplifier, transconductance amplifier, transresistance amplifier and/or any combination of circuits known to those skilled in the art for increasing or decreasing the gain of a sound or input signal.
  • the preamplifier comprises a microphone preamplifier configured to prepare a microphone signal to be processed by other processing modules.
• microphone signals are sometimes too weak to be transmitted with adequate quality to other units, such as recording or playback devices.
  • a microphone preamplifier thus increases a microphone signal to the line level by providing stable gain while preventing induced noise that might otherwise distort the signal.
  • the audio processor 220 may comprise a digital signal processor and amplifier, and may further comprise a volume control. Audio processor 220 may comprise a processor and combination of circuits structured to further enhance the audio quality of the signal coming from the microphone preamplifier, such as but not limited to shelf filters, equalizers, modulators. For example, in at least one embodiment the audio processor 220 may comprise a processor that performs the steps for processing a signal as taught by the present inventor’s US Patent No. 8,160,274, the entire disclosure of which is incorporated herein by reference. Audio processor 220 may incorporate various acoustic profiles customized for a user and/or for an environment, such as those described in the present inventor’s US Patent No. 8,565,449, the entire disclosure of which is incorporated herein by reference.
  • Audio processor 220 may additionally incorporate processing suitable for high noise environments, such as those described in the present inventor’s US Patent No. 8,462,963, the entire disclosure of which is incorporated herein by reference. Parameters of the audio processor 220 may be controlled and modified by a user via any means known to one skilled in the art, such as by a direct interface or a wireless communication interface.
  • the at least one in-ear assembly 400 may form part of a larger wearable apparatus 500.
  • the apparatus 500 comprises a left in-ear bud assembly 400, a right in-ear bud assembly 400’, and an interconnecting member 502.
  • a connective wire 501 can connect the left in-ear bud assembly 400 to the interconnecting member 502, and a connective wire 501’ can connect the right in-ear bud assembly 400’ to the interconnecting member 502.
  • the interconnecting member 502 may comprise various components, as well as various amplifiers including but not limited to the preamplifiers 210/210’, an audio processor 220, and playback modules such as 230/230’, and other appropriate circuits or combinations thereof for receiving, transmitting, enhancing and reproducing sound.
• the interconnecting member 502 can comprise a flexible back section 504 that wraps around or extends into a first side section 506 and a second side section 506’, and may be worn by a user around his or her neck.
• the interconnecting member 502 can comprise a volume control function 509 to enhance or reduce the volume level received from the playback module 230, or to reduce the audio input received from the microphone 415.
• the interconnecting member 502 can comprise a call microphone function 512, providing a user the ability to make and receive calls without removing the wearable apparatus 500.
• the interconnecting member 502 can also comprise a mute mode function 515 to prevent the transmission of audio output from the playback modules 230/230’.
  • the interconnecting member 502 also comprises a removable battery 518, illustrated in Figure 17A, capable of charging the apparatus.
• the interconnecting member 502 can be connected to the in-ear bud assemblies 400 and 400’ by means of a connective wire as illustrated in Figure 16, or a wireless connection, such as Bluetooth technology.
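For reference, the "30 decibel or greater noise isolation" targeted by the isolation baffle 431 can be expressed as physical ratios. The following sketch is illustrative only and is not part of the disclosed apparatus:

```python
# What a 30 dB isolation figure (as targeted by the isolation baffle)
# means as a physical ratio.

def db_to_amplitude_ratio(db: float) -> float:
    """Sound pressure (amplitude) ratio for a given level difference."""
    return 10.0 ** (db / 20.0)

def db_to_power_ratio(db: float) -> float:
    """Acoustic power ratio for a given level difference."""
    return 10.0 ** (db / 10.0)

# 30 dB of isolation corresponds to roughly a 31.6x reduction in sound
# pressure and a 1000x reduction in acoustic power leaking from the
# playback module 230 into the microphone 415.
```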

Abstract

The present invention provides for an apparatus, system, and method for generating a head related audio transfer function in real time. Specifically, the present invention utilizes unique structural components including a tragus structure and an antihelix structure in connection with a microphone in order to communicate the location of a sound in three-dimensional space to a user. The invention also utilizes an audio processor to digitally process the head related audio transfer function.

Description

SYSTEM, METHOD, AND APPARATUS FOR GENERATING AND DIGITALLY
PROCESSING A HEAD RELATED AUDIO TRANSFER FUNCTION
Claim of Priority
The present application claims priority to previously filed U.S. Patent Application having Serial No. 16/917,001, filed on June 30, 2020, which claims priority to a provisional patent application having Serial No. 62/948,409, filed on December 16, 2019, the contents of which are incorporated herein by reference in their entirety.
Additionally, U.S. Patent Application having Serial No. 16/917,001 is a continuation-in-part of a previously filed, now pending application having Serial No. 15/864,190 and a filing date of January 8, 2018, which is a continuation-in-part of a previously filed application having Serial No. 15/478,696 and a filing date of April 4, 2017, which is a continuation application of a previously filed application having Serial No. 14/485,145 and a filing date of September 12, 2014, which matured into U.S. Patent No. 9,615,189, and which is based on, and a claim of priority was made under 35 U.S.C. Section 119(e) to, a provisional patent application having Serial No. 62/035,025 and a filing date of August 8, 2014, all of which are explicitly incorporated herein by reference, in their entireties. The previously filed, now pending application having Serial No. 15/864,190, and a filing date of January 8, 2018, is also a continuation-in-part of a previously filed application having Serial No. 15/163,353 and a filing date of May 24, 2016, which matured into U.S. Patent No. 10,069,471, and which is a continuation-in-part of Serial No. 14/059,948, which matured into U.S. Patent No. 9,348,904, and which is a continuation-in-part of Serial No. 12/648,007 filed on December 28, 2009, which matured into U.S. Patent No. 8,565,449, and which is a continuation-in-part of Serial No. 11/947,301, filed November 29, 2007, which matured into U.S. Patent No. 8,160,274, and which claims priority to U.S. Provisional Application No. 60/861,711 filed November 30, 2006, each of which is explicitly incorporated herein by reference, in their entireties. Further, Serial No. 11/947,301 is a continuation-in-part of Serial No. 11/703,216, filed February 7,
2007, and which claims priority to U.S. Provisional Application No. 60/765,722 filed February 7, 2006, each of which is explicitly incorporated herein by reference, in their entireties.
FIELD OF THE INVENTION
The present invention provides for a system and apparatus for generating a real time head related audio transfer function. Specifically, unique structural components are utilized in connection with a microphone to reproduce certain acoustic characteristics of the human pinna in order to facilitate the communication of the location of a sound in three dimensional space to a user. The invention may further utilize an audio processor to digitally process the head related audio transfer function.
BACKGROUND OF THE INVENTION
Human beings have just two ears, but can locate sounds in three dimensions, in distance and in direction. This is possible because the brain, the inner ears, and the external ears (pinna) work together to make inferences about the location of a sound. The location of a sound is estimated by taking cues derived from one ear (monoaural cues), as well as by comparing the difference between the cues received in both ears (binaural cues).
Binaural cues relate to the differences of arrival and intensity of the sound between the two ears, which assist with the relative localization of a sound source. Monoaural cues relate to the interaction between the sound source and the human anatomy, in which the original sound is modified by the external ear before it enters the ear canal for processing by the auditory system. The modifications encode the source location relative to the ear location and are known as head-related transfer functions (HRTF). In other words, HRTFs describe the filtering of a sound source before it is perceived at the left and right ear drums, in order to characterize how a particular ear receives sound from a particular point in space. These modifications may include the shape of the listener’s ear, the shape of the listener’s head and body, the acoustical characteristics of the space in which the sound is played, and so forth. All these characteristics together influence how a listener can accurately tell what direction a sound is coming from. Thus, a pair of HRTFs accounting for all these characteristics, generated by the two ears, can be used to synthesize a binaural sound and accurately recognize it as originating from a particular point in space.
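In signal processing terms, applying a pair of HRTFs amounts to convolving the source signal with the left- and right-ear impulse responses (HRIRs). The sketch below is illustrative only; the impulse responses are tiny placeholders, not measured data, and real HRIRs would be measured per listener or per source position:

```python
# Binaural rendering with a pair of HRTFs reduces to convolving the
# source signal with left- and right-ear impulse responses (HRIRs).

def convolve(signal, impulse_response):
    """Direct-form FIR convolution (pure Python, for illustration)."""
    n, m = len(signal), len(impulse_response)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

mono = [1.0, 0.5, 0.25]        # source signal (placeholder)
hrir_left = [0.9, 0.1]         # placeholder left-ear impulse response
hrir_right = [0.4, 0.3, 0.1]   # placeholder right-ear impulse response

left_ear = convolve(mono, hrir_left)     # signal perceived at the left ear drum
right_ear = convolve(mono, hrir_right)   # signal perceived at the right ear drum
```

The difference between the two filtered outputs carries the directional cues described above.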
HRTFs have wide ranging applications, from virtual surround sound in media and gaming, to hearing protection in loud noise environments, and hearing assistance for the hearing impaired. Particularly, in the fields of hearing protection and hearing assistance, the ability to record and reconstruct a particular user’s HRTF presents several challenges, as it must occur in real time. In the case of an application for hearing protection in high noise environments, heavy hearing protection hardware must be worn over the ears in the form of bulky headphones; thus, if microphones are placed on the outside of the headphones, the user will hear the outside world but will not receive accurate positional data because the HRTF is not being reconstructed. Similarly, in the case of hearing assistance for the hearing impaired, a microphone is similarly mounted external to the hearing aid, and any hearing aid device that fully blocks a user’s ear canal will not accurately reproduce that user’s HRTF.
Thus, there is a need for an apparatus and system for reconstructing a user’s HRTF in accordance to the user’s physical characteristics, in order to accurately relay positional sound information to the user in real time.
SUMMARY OF THE INVENTION
The present invention meets the existing needs described above by providing for an apparatus, system, and method for generating a head related audio transfer function. The present invention also provides for the ability to enhance audio in real-time and tailors the enhancement to the physical characteristics of a user and the acoustic characteristics of the external environment.
Accordingly, in initially broad terms, an apparatus directed to the present invention, also known as an HRTF generator, comprises an external manifold and internal manifold. The external manifold is exposed at least partially to an external environment, while the internal manifold is disposed substantially within an interior of the apparatus and/or a larger device or system housing said apparatus.
The external manifold comprises an antihelix structure, a tragus structure, and an opening. The opening is in direct air flow communication with the outside environment, and is structured to receive acoustic waves. The tragus structure is disposed to partially enclose the opening, such that the tragus structure will partially impede and/or affect the characteristics of the incoming acoustic waves going into the opening. The antihelix structure is disposed to further partially enclose the tragus structure as well as the opening, such that the antihelix structure will partially impede and/or affect the characteristics of the incoming acoustic waves flowing onto the tragus structure and into the opening. The antihelix and tragus structures may comprise partial domes or any variation of partial-domes comprising a closed side and an open side. In a preferred embodiment, the open side of the antihelix structure and the open side of the tragus structure are disposed in confronting relation to one another.
The opening of the external manifold is connected to and in air flow communication with an opening canal inside the external manifold. The opening canal may be disposed in a substantially perpendicular orientation relative to the desired orientation of the user. The opening canal is in further air flow communication with an auditory canal, which is formed within the internal manifold but may also be formed partially in the external manifold.
The internal manifold comprises the auditory canal and a microphone housing. The microphone housing is attached or connected to an end of the auditory canal on the opposite end to its connection with the opening canal. The auditory canal, or at least a portion of the auditory canal, may be disposed in a substantially parallel orientation relative to the desired listening direction of the user. The microphone housing may further comprise a microphone mounted against the end of the auditory canal. The microphone housing may further comprise an air cavity behind the microphone on an end opposite its connection to the auditory canal, which may be sealed with a cap.
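The acoustic influence of a canal of a given length, and of the gas filling the cavity behind the microphone, can be roughly estimated from first principles. The sketch below is a rough illustration only; the canal length, gas choices, and sound speeds are assumptions for demonstration and are not values taken from this disclosure:

```python
# Rough first-principles illustration: the first resonance of a narrow
# canal closed at the microphone end is approximately f = c / (4 * L),
# where c is the speed of sound in the fill gas and L is canal length.

SPEED_OF_SOUND_M_S = {    # approximate values near room temperature
    "air": 343.0,
    "helium": 1007.0,
    "argon": 319.0,
}

def quarter_wave_resonance(length_m: float, gas: str = "air") -> float:
    """First resonant frequency (Hz) of a tube closed at one end."""
    return SPEED_OF_SOUND_M_S[gas] / (4.0 * length_m)

# e.g. an assumed 6 mm canal resonates near 14 kHz in air, and higher
# in a lighter gas such as helium.
f_air = quarter_wave_resonance(0.006, "air")
f_helium = quarter_wave_resonance(0.006, "helium")
```

This is why both the canal geometry and the choice of fill gas shift the frequency response seen by the microphone.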
In at least one embodiment, the apparatus or HRTF generator may form a part of a larger system. Accordingly, the system may comprise a left HRTF generator, a right HRTF generator, a left preamplifier, a right preamplifier, an audio processor, a left playback module, and a right playback module.
As such, the left HRTF generator may be structured to pick up and filter sounds to the left of a user. Similarly, the right HRTF generator may be structured to pick up and filter sounds to the right of the user. A left preamplifier may be structured and configured to increase the gain of the filtered sound of the left HRTF generator. A right preamplifier may be structured and configured to increase the gain of the filtered sound of the right HRTF generator. The audio processor may be structured and configured to process and enhance the audio signal received from the left and right preamplifiers, and then transmit the respective processed signals to each of the left and right playback modules. The left and right playback modules or transducers are structured and configured to convert the electrical signals into sound to the user, such that the user can then perceive the filtered and enhanced sound from the user’s environment, which includes audio data that allows the user to localize the source of the originating sound.
In at least one embodiment, the system of the present invention may comprise a wearable device such as a headset or headphones having the HRTF generator embedded therein. The wearable device may further comprise the preamplifiers, audio processor, and playback modules, as well as other appropriate circuitry and components.
In a further embodiment, a method for generating a head related audio transfer function may be used in accordance with the present invention. As such, external sound is first filtered through an exterior of an HRTF generator which may comprise a tragus structure and an antihelix structure. The filtered sound is then passed to the interior of the HRTF generator, such as through the opening canal and auditory canal described above to create an input sound. The input sound is received at a microphone embedded within the HRTF generator adjacent to and connected to the auditory canal in order to create an input signal. The input signal is amplified with a preamplifier in order to create an amplified signal. The amplified signal is then processed with an audio processor, in order to create a processed signal. Finally, the processed signal is transmitted to the playback module in order to relay audio and/or locational audio data to a user.
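The steps of the method above can be sketched as a chain of stages. Each function below is a placeholder standing in for the corresponding component; the fixed gain and the simple limiter are illustrative assumptions, not the disclosed processing:

```python
# Placeholder pipeline mirroring: input sound -> preamplifier ->
# audio processor -> playback module.

def preamplify(samples, gain=2.0):
    """Stand-in for the preamplifier: apply a fixed gain (assumed value)."""
    return [s * gain for s in samples]

def audio_process(samples):
    """Stand-in for the audio processor: clamp to [-1, 1] (a simple limiter)."""
    return [max(-1.0, min(1.0, s)) for s in samples]

def playback(samples):
    """Stand-in for the playback module; in hardware this drives a transducer."""
    return samples

def hrtf_pipeline(input_sound):
    amplified = preamplify(input_sound)   # create an amplified signal
    processed = audio_process(amplified)  # create a processed signal
    return playback(processed)            # relay audio to the user
```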
In certain embodiments, the audio processor may receive the amplified signal and first filter the amplified signal with a high pass filter. The high pass filter, in at least one embodiment, is configured to remove ultra-low frequency content from the amplified signal resulting in the generation of a high pass signal.
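One common realization of such a high pass filter is a second-order biquad section. The sketch below follows the widely used "Audio EQ Cookbook" coefficient formulation; the 48 kHz sample rate and 20 Hz cutoff are assumed stand-ins for "ultra-low frequency content," as the text does not specify values:

```python
import math

# Second-order (biquad) high pass filter, "Audio EQ Cookbook" style.

def highpass_biquad(fs: float, f0: float, q: float = 0.7071):
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    cos_w0 = math.cos(w0)
    a0 = 1.0 + alpha
    b0 = (1.0 + cos_w0) / 2.0 / a0
    b1 = -(1.0 + cos_w0) / a0
    b2 = (1.0 + cos_w0) / 2.0 / a0
    a1 = (-2.0 * cos_w0) / a0
    a2 = (1.0 - alpha) / a0

    def process(x):
        y = []
        x1 = x2 = y1 = y2 = 0.0
        for s in x:
            out = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
            x1, x2, y1, y2 = s, x1, out, y1
            y.append(out)
        return y

    return process

hpf = highpass_biquad(fs=48000.0, f0=20.0)
# A constant (0 Hz) input decays toward zero: only content above the
# cutoff survives.
settled = hpf([1.0] * 48000)[-1]
```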
The high pass signal from the high pass filter is then filtered through a first filter module to create a first filtered signal. The first filter module is configured to selectively boost and/or attenuate the gain of select frequency ranges in an audio signal, such as the high pass signal. In at least one embodiment, the first filter module boosts frequencies above a first frequency, and attenuates frequencies below the first frequency.
The first filtered signal from the first filter module is then modulated with a first compressor to create a modulated signal. The first compressor is configured for the dynamic range compression of a signal, such as the first filtered signal. Because the first filter module boosted higher frequencies and attenuated lower frequencies, the first compressor may, in at least one embodiment, be configured to trigger and adjust the higher frequency material, while remaining relatively insensitive to lower frequency material.
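A minimal sketch of static dynamic range compression is shown below. The threshold and ratio values are illustrative assumptions, and attack/release smoothing, which a practical compressor would include, is omitted:

```python
import math

# Static dynamic-range compressor: gain above a threshold is reduced
# according to a ratio.  Threshold/ratio are assumed example values.

def compress(samples, threshold_db=-20.0, ratio=4.0):
    out = []
    for s in samples:
        level_db = 20.0 * math.log10(max(abs(s), 1e-9))
        if level_db > threshold_db:
            # above threshold: reduce the overshoot by (1 - 1/ratio)
            gain_db = (threshold_db - level_db) * (1.0 - 1.0 / ratio)
        else:
            gain_db = 0.0   # below threshold: leave the sample untouched
        out.append(s * 10.0 ** (gain_db / 20.0))
    return out
```

Because the preceding filter raised high-frequency content, loud material in that range crosses the threshold first, which is how the compressor "triggers on" high frequencies while remaining insensitive to low ones.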
The modulated signal from the first compressor is then filtered through a second filter module to create a second filtered signal. The second filter module is configured to selectively boost and/or attenuate the gain of select frequency ranges in an audio signal, such as the modulated signal. In at least one embodiment, the second filter module is configured to be of at least partially inverse relation relative to the first filter module. For example, if the first filter module boosted content above a first frequency by +X dB and attenuated content below a first frequency by -Y dB, the second filter module may then attenuate the content above the first frequency by -X dB, and boost the content below the first frequency by +Y dB. In other words, the purpose of the second filter module in one embodiment may be to “undo” the gain adjustment that was applied by the first filter module.
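The +X/-Y dB arithmetic can be checked in static (per-band gain) terms: with no compressor in between, the second filter exactly cancels the first. The gain amounts below are placeholder values, not values from the disclosure:

```python
# Static sketch of the first/second filter relationship: the second
# filter applies the inverse of the gains applied by the first.

X_DB = 6.0   # assumed boost above the first frequency
Y_DB = 3.0   # assumed cut below the first frequency

def apply_gain_db(sample: float, gain_db: float) -> float:
    return sample * 10.0 ** (gain_db / 20.0)

def first_filter(sample: float, above_first_frequency: bool) -> float:
    return apply_gain_db(sample, X_DB if above_first_frequency else -Y_DB)

def second_filter(sample: float, above_first_frequency: bool) -> float:
    return apply_gain_db(sample, -X_DB if above_first_frequency else Y_DB)

# Round trip through both filters restores the original level:
restored = second_filter(first_filter(0.5, True), True)
```

In the actual chain the compressor sits between the two filters, so the net effect is the compressor's dynamic gain changes applied with frequency-dependent sensitivity, while the static spectral tilt is undone.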
The second filtered signal from the second filter module is then processed with a first processing module to create a processed signal. In at least one embodiment, the first processing module may comprise a peak/dip module. In other embodiments, the first processing module may comprise both a peak/dip module and a first gain element. The first gain element may be configured to adjust the gain of the signal, such as the second filtered signal. The peak/dip module may be configured to shape the signal, such as to increase or decrease overshoots or undershoots in the signal.
The processed signal from the first processing module is then split with a band splitter into a low band signal, a mid band signal and a high band signal. In at least one embodiment, each band may comprise the output of a fourth order section, which may be realized as the cascade of second order biquad filters.
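One band of such a splitter can be sketched as a fourth-order low-pass realized as a cascade of two second-order (biquad) sections, matching the structure described above. The sample rate, the 300 Hz crossover, and the Butterworth Q are assumed example values:

```python
import math

# Fourth-order low-pass band realized as two cascaded biquad sections.

def biquad_lowpass(fs: float, f0: float, q: float):
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    cos_w0 = math.cos(w0)
    a0 = 1.0 + alpha
    b0 = (1.0 - cos_w0) / 2.0 / a0
    b1 = (1.0 - cos_w0) / a0
    b2 = (1.0 - cos_w0) / 2.0 / a0
    a1 = (-2.0 * cos_w0) / a0
    a2 = (1.0 - alpha) / a0
    state = [0.0, 0.0, 0.0, 0.0]  # x1, x2, y1, y2

    def process(x):
        x1, x2, y1, y2 = state
        y = []
        for s in x:
            out = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
            x1, x2, y1, y2 = s, x1, out, y1
            y.append(out)
        state[:] = [x1, x2, y1, y2]
        return y

    return process

def fourth_order_lowpass(fs: float, f0: float):
    # cascade of two Butterworth (Q = 0.7071) second-order sections
    s1 = biquad_lowpass(fs, f0, 0.7071)
    s2 = biquad_lowpass(fs, f0, 0.7071)
    return lambda x: s2(s1(x))

low_band = fourth_order_lowpass(fs=48000.0, f0=300.0)
```

The mid and high bands would use analogous cascaded sections with band-pass or high-pass coefficients.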
The low band signal is modulated with a low band compressor to create a modulated low band signal, and the high band signal is modulated with a high band compressor to create a modulated high band signal. The low band compressor and high band compressor are each configured to dynamically adjust the gain of a signal. Each of the low band compressor and high band compressor may be configured computationally and/or structurally identically to the first compressor.
The modulated low band signal, the mid band signal, and the modulated high band signal are then processed with a second processing module. The second processing module may comprise a summing module configured to combine the signals. The summing module in at least one embodiment may individually alter the gain of each of the modulated low band, mid band, and modulated high band signals. The second processing module may further comprise a second gain element. The second gain element may adjust the gain of the combined signal in order to create a processed signal that is transmitted to the playback module.
The method described herein may be configured to capture and transmit locational audio data to a user in real time, such that it can be utilized as a hearing aid, or in loud noise environments to filter out loud noises.
In a further embodiment for generating a head related audio transfer function, the HRTF generator, rather than being embedded in a wearable device, may itself be configured as the wearable device. In the preferred embodiment, the HRTF generator will be configured into at least one, but most preferably two, in-ear assembly apparatuses. The at least one in-ear assembly is operatively positioned, or in an operative position, when it is disposed on a user's ear, or worn by a user.
The in-ear assembly may comprise at least one shell or chamber to house the various HRTF structures, and provide an exterior surface to place or attach structures on the outside. The in-ear assembly may comprise a primary chamber proximal to a user’s ear(s) and a secondary chamber distal to a user’s ear(s), when in an operative position.
The exterior of the in-ear assembly's secondary chamber comprises a windscreen structure, an antihelix structure, a tragus structure, and a microphone opening or aperture. The windscreen structure, antihelix structure, and tragus structure can be removed from the exterior of the secondary chamber, providing a means of replacing the structures. Also, the windscreen structure, antihelix structure, and tragus structure may vary in size and shape.
One of the many purposes of the windscreen structure is to reduce wind and noise interference to ensure the in-ear assembly receives high-quality and undisturbed sound and audio signals from the external environment. The windscreen structure can attach to the exterior of the secondary chamber via at least one connecting point. A variety of materials may be utilized, but in a preferred embodiment open cell foam is housed within the windscreen structure to ensure the quality of the incoming signals. The windscreen structure and the material housed inside the windscreen structure will partially cover the antihelix structure, the tragus structure, and the microphone opening or aperture. The antihelix structure and the tragus structure can also cover, partially or fully, the microphone aperture in order to mimic the structure of a human ear. The microphone aperture is in direct air flow communication with the external environment via an opening and microphone channel. The microphone may be attached to an end of the microphone channel. In this way, the microphone will receive the external noise that filters through the windscreen structure, the antihelix structure, tragus structure, microphone aperture, and microphone channel, ensuring that the audio signal produced by the HRTF Generator will include the “directionality” that occurs when a human ear detects sound from a point in space.
The microphone disposed within the end of the microphone channel is located inside the in-ear assembly, and may be located within the secondary or primary chamber of a preferred embodiment of the in-ear assembly. The microphone channel and the microphone may be in a substantially parallel orientation, or alternatively, a perpendicular orientation, relative to the listening direction of a user when wearing the in-ear assembly. The microphone is located adjacent to, or even directly connected to, a playback module, or one or more speakers or transducers; the microphone transmits audio input signals to the playback module, and in turn the playback module transmits an audio output signal to a user via an auditory channel connected to a user's ear. In a preferred embodiment, the in-ear assembly will also comprise a preamplifier to amplify an audio input signal received from the microphone, and an audio processor to receive the amplified signal for processing. The audio processor will then transmit a processed, higher quality signal to the playback module. The playback module may be housed within the in-ear assembly, and may comprise one or more speaker drivers. The playback module may be mounted flush on an end of the auditory channel, or there may be an air cavity between the playback module and the end of the auditory channel. The auditory channel is disposed within the user's ear when the in-ear assembly is in an operative position, and a foam ear tip or other material may be attached to the end of the auditory channel to protect a user's ear(s) as well as insulate the user's ear(s) from ambient noise. The speaker(s) or playback module(s) is located inside the in-ear assembly, and in one embodiment, the playback module is located in the primary chamber of the in-ear assembly.
The microphone that receives audio input from the external environment and the playback module that sends audio output to the user are isolated from one another in a preferred embodiment, in order to avoid unwanted feedback. In one preferred embodiment, the interior of the in-ear assembly contains a baffle isolation structure that traverses the interior of the in-ear assembly, creating a physical isolation between the microphone and the playback module. In alternative embodiments, an acoustic isolation can be created without a physical barrier between the microphone and playback module. The isolation baffle achieves the goal of creating at least a 30 decibel noise isolation between the microphone and the playback module. Thus, the isolation of the microphone and playback module provides for reduction in noise interference and feedback noise between the microphone and playback module in a miniaturized apparatus such as the in-ear assembly, which has its exterior, or more specifically, the secondary chamber, exposed to the external environment's sound waves.
On the exterior of the in-ear assembly, or the exterior of the primary chamber, is a stabilizing assembly or wingtip assembly, to ensure the in-ear assembly is securely placed on a user's ear, when in an operative position, and to provide the proper orientation(s) for the various structures, by way of example, the antihelix structure, to receive the input signals. The stabilizing assembly may comprise a circular collar that is disposed about the exterior of the primary chamber, and a concha-shaped structure connected to the circular collar that is dimensioned and configured to be disposed on the external ear of a user when in an operative position, preferably within the concha. As such, the tragus and anti-helix structure can be oriented properly for generation of accurate HRTF signals; otherwise an improper orientation may generate misleading HRTF signals for the user.
In one embodiment, at least one in-ear assembly may form a system. In one embodiment of the system, the system may comprise a left in-ear assembly structured to pick up and filter sounds incoming from the left side of a user. The right in-ear assembly may be structured to pick up and filter sounds incoming from the right side of the user. A left preamplifier within the left in-ear assembly may be structured and configured to increase the gain of the filtered sound of the left in-ear assembly. A right preamplifier within the right in-ear assembly may be structured and configured to increase the gain of the filtered sound of the right in-ear assembly. The audio processor(s) located inside the left and right in-ear assemblies, or housed in a separate structure, may be configured to process and enhance the audio signal received from the left and right preamplifiers, and then transmit the respective processed signals to each of the left and right playback modules located in the left and right in-ear assemblies. The left and right playback modules or transducers are structured and configured to convert the electrical signals into sound waves perceptible by the user, such that the user can then perceive the filtered and enhanced sound from the user's environment, which includes the "directional" audio data that allows the user to localize the originating sound. The various structures, such as but not limited to the preamplifier(s), the audio processor(s), and the playback module(s), may be housed in the in-ear assembly(s) or in a separate interconnecting assembly attached to the in-ear assembly(s).
In at least one embodiment, the system of the present invention may comprise an in-ear assembly for each of a user’s ear, as well as an interconnecting member which may further comprise the preamplifier(s), audio processor(s), playback module(s), as well as other appropriate circuitry and components. The interconnecting member may be worn around the neck, and connected to the in-ear assemblies, or in-ear bud assemblies, by wire connections, or may be wireless by use of Bluetooth or other suitable radio-frequency transmission technology. The interconnecting member may be formed from a flexible back section and stiff side sections. The interconnecting member may house a printed circuit board having various componentry, such as the audio processor. The interconnecting member can provide a user with volume control functions, providing a user a level of control with which to mix between environmental audio signals and voice communication signals. In one embodiment, the interconnecting member may have a listen mode and mute mode, providing a user with the ability to mute the microphone that receives environmental audio signals, allowing the user to receive and listen to phone calls, thereby providing a means of communication. The interconnecting member can also house a removable battery to charge the apparatus.
As can be seen, one particular use for the present invention lies in hearing protection systems for use in environments where situational awareness is critical. With suitable sound insulation and/or "anti-noise" signal generation capabilities, the system of the present invention provides a suitable noise-reduction assembly for the protection of the user's hearing against loud noises. Additionally, the microphone assembly allows external audio to be detected and delivered to the user at a safer level than would otherwise be perceived. Finally, the tragus and anti-helix structure allow the "directional" information of ambient noises to be captured and faithfully recreated for the user by way of an HRTF signal. By way of example, the present invention may be useful in construction sites, where the need for hearing protection and situational awareness is a key component of on-site safety. By utilizing the present invention, users need not sacrifice situational awareness for hearing protection.
These and other objects, features and advantages of the present invention will become clearer when the drawings as well as the detailed description are taken into consideration.
BRIEF DESCRIPTION OF THE DRAWINGS
For a fuller understanding of the nature of the present invention, reference should be had to the following detailed description taken in connection with the accompanying drawings in which:
Figure 1 is a perspective external view of an apparatus for generating a head related audio transfer function.
Figure 2 is a perspective internal view of an apparatus for generating a head related audio transfer function.
Figure 3 is a block diagram directed to a system for generating a head related audio transfer function.
Figure 4A illustrates a side profile view of a wearable device comprising an apparatus for generating a head related audio transfer function.
Figure 4B illustrates a front profile view of a wearable device comprising an apparatus for generating a head related audio transfer function.
Figure 5 illustrates a flowchart directed to a method for generating a head related audio transfer function.
Figure 6 illustrates a schematic of one embodiment of an audio processor according to one embodiment of the present invention.
Figure 7 illustrates a schematic of another embodiment of an audio processor according to one embodiment of the present invention.
Figure 8 illustrates a block diagram of one method for processing an audio signal with an audio processor according to one embodiment of the present invention.
Figure 9 illustrates a block diagram of another method for processing an audio signal with an audio processor according to another embodiment of the present invention.
Figure 10 illustrates an external view of a wearable in-ear assembly for hearing enhancement and protection capable of generating a head related audio transfer function for a user.
Figure 11 is an interior sectional view of the embodiment of Figure 10.
Figure 12 illustrates a top perspective view in partially exploded form of a portion of the embodiment of Figures 10 and 11.
Figure 13 is a perspective detail view of a portion of the embodiment of Figures 10 and 11.
Figure 14 illustrates a view of an isolation baffle disposed within an interior of the embodiment of Figures 10 and 11.
Figure 15 illustrates a stabilizer assembly component to be disposed on an exterior of the embodiment of Figures 10 and 11.
Figure 16 illustrates an alternative embodiment of a wearable apparatus for hearing enhancement and protection capable of generating a head related audio transfer function for a user.
Figure 17A illustrates an interconnecting member of the embodiment of Figure 16.
Figure 17B illustrates a partially exploded view of an interconnecting member of the embodiment of Figure 16.
Like reference numerals refer to like parts throughout the several views of the drawings.
DETAILED DESCRIPTION OF THE EMBODIMENT
As illustrated by the accompanying drawings, the present invention is directed to an apparatus, system, and method for generating a head related audio transfer function for a user. Specifically, some embodiments relate to capturing surrounding sound in the external environment in real time, filtering that sound through unique structures formed on the apparatus in order to generate audio positional data, and then processing that sound to enhance and relay the positional audio data to a user, such that the user can determine the origination of the sound in three dimensional space.
As schematically represented, Figures 1 and 2 illustrate at least one preferred embodiment of an apparatus 100 for generating a head related audio transfer function for a user, or “HRTF generator”. Accordingly, apparatus 100 comprises an external manifold 110 and an internal manifold 120. The external manifold 110 will be disposed at least partially on an exterior of the apparatus 100. The internal manifold 120, on the other hand, will be disposed along an interior of the apparatus 100. For further clarification, the exterior of the apparatus 100 comprises the external environment, such that the exterior is directly exposed to the air of the surrounding environment. The interior of the apparatus 100 comprises at least a partially sealed off environment that partially or fully obstructs the direct flow of acoustic waves.
The external manifold 110 may comprise a hexahedron shape having six faces. In at least one embodiment, the external manifold 110 is substantially cuboid. The external manifold 110 may comprise at least one surface that is concave or convex, such as an exterior surface exposed to the external environment. The internal manifold 120 may comprise a substantially cylindrical shape, which may be at least partially hollow. The external manifold 110 and internal manifold 120 may comprise sound dampening or sound proof materials, such as various foams, plastics, and glass known to those skilled in the art.
Drawing attention to Figure 1, the external manifold 110 comprises an antihelix structure 101, a tragus structure 102, and an opening 103 that are externally visible. The opening 103 is in direct air flow communication with the surrounding environment, and as such will receive a flow of acoustic waves or vibrations in the air that passes through the opening 103. The tragus structure 102 is disposed to partially enclose the opening 103, and the antihelix structure 101 is disposed to partially enclose both the tragus structure 102 and the opening 103.
In at least one embodiment, the antihelix structure 101 comprises a partial dome structure having a closed side 105 and an open side 106. In a preferred embodiment, the open side 106 faces the preferred listening direction 104, and the closed side 105 faces away from the preferred listening direction 104. The tragus structure 102 may also comprise a partial dome structure having a closed side 107 and an open side 108. In a preferred embodiment, the open side 108 faces away from the preferred listening direction 104, while the closed side 107 faces towards the preferred listening direction 104. In other embodiments, the open side 106 of the antihelix structure 101 may be in direct confronting relation to the open side 108 of the tragus structure 102, regardless of the preferred listening direction 104.
Partial dome, as defined for the purposes of this document, may comprise a half-dome structure or any combination of partial-dome structures. For instance, the anti-helix structure 101 of Figure 1 comprises a half-dome, while the tragus structure 102 comprises a partial dome wherein the base portion may be less than that of a half-dome, but the top portion may extend to or beyond the halfway point of a half-dome to provide increased coverage or enclosure of the opening 103 and other structures. Of course, in other variations, the top portion and bottom portion of the partial dome may vary in respective dimensions to form varying portions of a full dome structure, in order to create varying coverage of the opening 103. This allows the apparatus to produce different or enhanced acoustic input for calculating direction and distance of the source sound relative to the user.
In at least one embodiment, the antihelix structure 101 and tragus structure 102 may be modular, such that different sizes or shapes (variations of different partial domes) may be swapped out based on a user's preference for particular acoustic characteristics.
Drawing attention now to Figure 2, the opening 103 is connected to, and in air flow communication with, an opening canal 111 inside the external manifold 110. In at least one embodiment, the opening canal 111 is disposed in a substantially perpendicular orientation relative to the desired listening direction 104 of the user. The opening canal 111 is further connected in air flow communication with an auditory canal 121. A portion of the auditory canal 121 may be formed in the external manifold 110. In various embodiments, the opening canal 111 and auditory canal 121 may be of a single piece construction. In other embodiments, a canal connector (not shown) may be used to connect the two segments. At least a portion of the auditory canal 121 may also be formed within the internal manifold 120.
As previously discussed, the internal manifold 120 is formed wholly or substantially within an interior of the apparatus, such that it is not exposed directly to the outside air and will not be substantially affected by the external environment. In at least one embodiment, the auditory canal 121, formed within at least a portion of the internal manifold 120, will be disposed in a substantially parallel orientation relative to the desired listening direction 104 of the user. In a preferred embodiment, the auditory canal comprises a length that is greater than two times its diameter.
A microphone housing 122 is attached to an end of the auditory canal 121. Within the microphone housing 122, a microphone, generally at 123 (not shown), is mounted against the end of the auditory canal 121. In at least one embodiment, the microphone 123 is mounted flush against the auditory canal 121, such that the connection may be substantially air tight to avoid interference sounds. In a preferred embodiment, an air cavity, generally at 124, is created behind the microphone and at the end of the internal manifold 120. This may be accomplished by inserting the microphone 123 into the microphone housing 122, and then sealing the end of the microphone housing, generally at 124, with a cap. The cap may be substantially air tight in at least one embodiment. Different gasses having different acoustic characteristics may be used within the air cavity.
In at least one embodiment, apparatus 100 may form a part of a larger system 300 as illustrated in Figure 3. Accordingly, a system 300 may comprise a left HRTF generator 100, a right HRTF generator 100’, a left preamplifier 210, a right preamplifier 210’, an audio processor 220, a left playback module 230, and a right playback module 230’.
The left and right HRTF generators 100 and 100’ may comprise the apparatus 100 described above, each having unique structures such as the antihelix structure 101 and tragus structure 102. Accordingly, the HRTF generators 100/100’ may be structured to generate a head related audio transfer function for a user, such that the sound received by the HRTF generators 100/100’ may be relayed to the user to accurately communicate position data of the sound. In other words, the HRTF generators 100/100’ may replicate and replace the function of the user’s own left and right ears, where the HRTF generators would collect sound, and perform respective spectral transformations or a filtering process to the incoming sounds to enable the process of vertical localization to take place.
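The spectral transformation each HRTF generator performs is, in signal-processing terms, equivalent to filtering the incoming sound with a direction-dependent impulse response. A minimal Python sketch of that operation follows; the impulse-response coefficients a real system would use are direction-dependent measurements, not the placeholder values shown here:

```python
def apply_hrtf(signal, hrir):
    """Convolve a mono signal with a head-related impulse response (HRIR).

    Direct time-domain convolution. The HRIR values used in practice are
    measured, direction-dependent data; any coefficients passed here for
    illustration are placeholders.
    """
    out = [0.0] * (len(signal) + len(hrir) - 1)
    for n, x in enumerate(signal):
        for k, h in enumerate(hrir):
            out[n + k] += x * h
    return out
```

For instance, convolving a unit impulse with a two-tap placeholder response `[0.5, 0.25]` returns the response itself, padded with a trailing zero.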
A left preamplifier 210 and right preamplifier 210' may then be used to enhance the filtered sound coming from the HRTF generators, in order to enhance certain acoustic characteristics to improve locational accuracy, or to filter out unwanted noise. The preamplifiers 210/210' may comprise an electronic amplifier, such as a voltage amplifier, current amplifier, transconductance amplifier, transresistance amplifier, and/or any combination of circuits known to those skilled in the art for increasing or decreasing the gain of a sound or input signal. In at least one embodiment, the preamplifier comprises a microphone preamplifier configured to prepare a microphone signal to be processed by other processing modules. As known in the art, microphone signals are sometimes too weak to be transmitted to other units, such as recording or playback devices, with adequate quality. A microphone preamplifier thus increases a microphone signal to the line level by providing stable gain while preventing induced noise that might otherwise distort the signal.
Audio processor 220 may comprise a digital signal processor and amplifier, and may further comprise a volume control. Audio processor 220 may comprise a processor and combination of circuits structured to further enhance the audio quality of the signal coming from the microphone preamplifier, such as but not limited to shelf filters, equalizers, and modulators. For example, in at least one embodiment the audio processor 220 may comprise a processor that performs the steps for processing a signal as taught by the present inventor's US Patent No. 8,160,274, the entire disclosure of which is incorporated herein by reference. Audio processor 220 may incorporate various acoustic profiles customized for a user and/or for an environment, such as those described in the present inventor's US Patent No. 8,565,449, the entire disclosure of which is incorporated herein by reference. Audio processor 220 may additionally incorporate processing suitable for high noise environments, such as those described in the present inventor's US Patent No. 8,462,963, the entire disclosure of which is incorporated herein by reference. Parameters of the audio processor 220 may be controlled and modified by a user via any means known to one skilled in the art, such as by a direct interface or a wireless communication interface.
The left playback module 230 and right playback module 230' may comprise headphones, earphones, speakers, or any other transducer known to one skilled in the art. The purpose of the left and right playback modules 230/230' is to convert the electrical audio signal from the audio processor 220 back into perceptible sound for the user. As such, a moving-coil transducer, electrostatic transducer, electret transducer, or other transducer technologies known to one skilled in the art may be utilized.
In at least one embodiment, the present system 300 comprises a device 200 as generally illustrated at Figures 4A and 4B, which may be a wearable headset 200 having the apparatus 100 embedded therein, as well as various amplifiers including but not limited to 210/210', processors such as 220, playback modules such as 230/230', and other appropriate circuits or combinations thereof for receiving, transmitting, enhancing, and reproducing sound.
In a further embodiment as illustrated in Figure 5, a method for generating a head related audio transfer function is shown. Accordingly, external sound is first filtered through at least a tragus structure and an antihelix structure formed along an exterior of an HRTF generator, as in 201, in order to create a filtered sound. Next, the filtered sound is passed through an opening and auditory canal along an interior of the HRTF generator, as in 202, in order to create an input sound. The input sound is received at a microphone embedded within the HRTF generator, as in 203, in order to create an input signal. The input signal is then amplified with a preamplifier, as in 204, in order to create an amplified signal. The amplified signal is processed with an audio processor, as in 205, in order to create a processed signal. Finally, the processed signal is transmitted to a playback module, as in 206, in order to relay the audio and/or locational audio data to the user.
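The steps above form a sequential signal chain, which can be sketched generically in Python. The callable-based API is an illustrative assumption; the disclosure describes the stages (structural filtering, canal propagation, microphone capture, preamplification, audio processing, playback) structurally, not as software:

```python
def hrtf_pipeline(external_sound, stages):
    """Pass a block of samples through the method's stages in order.

    `stages` is an ordered list of callables standing in for the steps of
    Figure 5: structural filtering (201), canal propagation (202),
    microphone capture (203), preamplification (204), audio processing
    (205), and playback (206). Stage implementations are hypothetical
    stand-ins supplied by the caller.
    """
    signal = external_sound
    for stage in stages:
        signal = stage(signal)
    return signal
```

A toy chain of a doubling "preamplifier" followed by an offsetting "processor" illustrates the ordering: `hrtf_pipeline([1.0, 2.0], [double, add_one])` applies the stages left to right.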
In a preferred embodiment of the present invention, the method of Figure 5 may perform the locational audio capture and transmission to a user in real time. This facilitates usage in a hearing assistance situation, such as a hearing aid for a user with impaired hearing. This also facilitates usage in a high noise environment, such as to filter out noises and/or enhance human speech.
In at least one embodiment, the method of Figure 5 may further comprise a calibration process, such that each user can replicate his or her unique HRTF in order to provide for accurate localization of a sound in three dimensional space. The calibration may comprise adjusting the antihelix and tragus structures as described above, which may be formed of modular and/or moveable components. Thus, the antihelix and/or tragus structure may be repositioned, and/or differently shaped and/or sized structures may be used. In further embodiments, the audio processor 220 described above may be further calibrated to adjust the acoustic enhancement of certain sound waves relative to other sound waves and/or signals.
With regard to Figure 6, one embodiment of an audio processor 220 is represented schematically as a system 1000. As schematically represented, Figure 6 illustrates at least one preferred embodiment of a system 1000, and Figure 7 provides examples of several subcomponents and combinations of subcomponents of the modules of Figure 6. Accordingly, and in these embodiments, the systems 1000 and 3000 generally comprise an input device 1010 (such as the left preamplifier 210 and/or right preamplifier 210'), a high pass filter 1110, a first filter module 3010, a first compressor 1140, a second filter module 3020, a first processing module 3030, a band splitter 1190, a low band compressor 1300, a high band compressor 1310, a second processing module 3040, and an output device 1020.
The input device 1010 is at least partially structured or configured to transmit an input audio signal 2010, such as an amplified signal from a left or right preamplifier 210, 210’, into the system 1000 of the present invention, and in at least one embodiment into the high pass filter 1110.
The high pass filter 1110 is configured to pass through high frequencies of an audio signal, such as the input signal 2010, while attenuating lower frequencies, based on a predetermined frequency. In other words, the frequencies above the predetermined frequency may be transmitted to the first filter module 3010 in accordance with the present invention. In at least one embodiment, ultra-low frequency content is removed from the input audio signal, where the predetermined frequency may be selected from a range between 300 Hz and 3 kHz. The predetermined frequency, however, may vary depending on the source signal, and in other embodiments may comprise any frequency selected from the full audible range of frequencies between 20 Hz and 20 kHz. The predetermined frequency may be tunable by a user, or alternatively be statically set. The high pass filter 1110 may further comprise any circuits or combinations thereof structured to pass through high frequencies above a predetermined frequency, and attenuate or filter out the lower frequencies.
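A high pass filter of this kind can be sketched with a first-order recurrence in Python. The disclosure does not specify a filter order or topology, so the first-order RC model below is an illustrative assumption:

```python
import math

def one_pole_highpass(samples, cutoff_hz, fs):
    """First-order high-pass: passes content above cutoff_hz and attenuates
    lower frequencies, including DC. An illustrative stand-in for the high
    pass filter 1110; the disclosure does not mandate this topology.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)   # analog RC time constant
    dt = 1.0 / fs                            # sample period
    alpha = rc / (rc + dt)
    out = []
    prev_x = prev_y = 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)    # discrete RC high-pass recurrence
        out.append(y)
        prev_x, prev_y = x, y
    return out
```

A constant (DC, i.e. 0 Hz) input decays toward zero at the output, while the leading edge of the step passes through, which is the defining high-pass behavior.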
The first filter module 3010 is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the high pass signal 2110. For example, and in at least one embodiment, frequencies below a first frequency may be adjusted by ±X dB, while frequencies above the first frequency may be adjusted by ±Y dB. In other embodiments, a plurality of frequencies may be used to selectively adjust the gain of various frequency ranges within an audio signal. In at least one embodiment, the first filter module 3010 may be implemented with a first low shelf filter 1120 and a first high shelf filter 1130, as illustrated in Figure 6. The first low shelf filter 1120 and first high shelf filter 1130 may both be second-order filters. In at least one embodiment, the first low shelf filter 1120 attenuates content below a first frequency, and the first high shelf filter 1130 boosts content above the first frequency. In other embodiments, the frequencies used for the first low shelf filter 1120 and first high shelf filter 1130 may comprise two different frequencies. The frequencies may be static or adjustable. Similarly, the gain adjustment (boost or attenuation) may be static or adjustable.
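One common way to realize a second-order shelf filter is the widely used RBJ "audio EQ cookbook" biquad. The sketch below (low shelf only, with shelf slope S = 1) is one possible realization, not necessarily the one used in the described embodiments, and the frequency and gain values in the usage note are illustrative:

```python
import math

def low_shelf_biquad(f0, gain_db, fs):
    """Coefficients for a second-order (biquad) low-shelf filter, following
    the RBJ audio-EQ-cookbook formulas with shelf slope S = 1. One possible
    realization of shelf filters such as 1120/1150; the disclosure does not
    prescribe this design. Returns [b0, b1, b2, a1, a2], normalized by a0.
    """
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    cw = math.cos(w0)
    alpha = math.sin(w0) / 2.0 * math.sqrt(2.0)   # S = 1
    sq = 2.0 * math.sqrt(A) * alpha
    b0 = A * ((A + 1) - (A - 1) * cw + sq)
    b1 = 2 * A * ((A - 1) - (A + 1) * cw)
    b2 = A * ((A + 1) - (A - 1) * cw - sq)
    a0 = (A + 1) + (A - 1) * cw + sq
    a1 = -2 * ((A - 1) + (A + 1) * cw)
    a2 = (A + 1) + (A - 1) * cw - sq
    return [b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0]

def biquad_filter(samples, coeffs):
    """Run samples through one biquad section (direct form I)."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        out.append(y)
        x2, x1 = x1, x
        y2, y1 = y1, y
    return out
```

For example, a -10 dB low shelf at 1 kHz settles to a DC gain of 10^(-10/20) ≈ 0.316, i.e. content well below the shelf frequency is attenuated by the full shelf gain.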
The first compressor 1140 is configured to modulate a signal, such as the first filtered signal 4010. The first compressor 1140 may comprise an automatic gain controller. The first compressor 1140 may comprise standard dynamic range compression controls such as threshold, ratio, attack and release. Threshold allows the first compressor 1140 to reduce the level of the first filtered signal 4010 if its amplitude exceeds a certain threshold. Ratio allows the first compressor 1140 to reduce the gain as determined by a ratio. Attack and release determine how quickly the first compressor 1140 acts. The attack phase is the period when the first compressor 1140 is decreasing gain to reach the level that is determined by the threshold. The release phase is the period when the first compressor 1140 is increasing gain to the level determined by the ratio. The first compressor 1140 may also feature soft and hard knees to control the bend in the response curve of the output or modulated signal 2140, and other dynamic range compression controls appropriate for the dynamic compression of an audio signal. The first compressor 1140 may further comprise any device or combination of circuits that is structured and configured for dynamic range compression.
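The threshold/ratio/attack/release behavior described above can be sketched as a simple feed-forward compressor. The one-pole envelope smoothing and all parameter values below are illustrative choices, not taken from the disclosure:

```python
def compress(samples, threshold=0.5, ratio=4.0,
             attack_coeff=0.01, release_coeff=0.999):
    """Feed-forward dynamic range compressor sketch (hard knee).

    Above `threshold`, gain is reduced so that the excess level grows only
    by 1/`ratio`. A one-pole envelope follower with separate attack and
    release smoothing coefficients (smaller = faster tracking) models the
    attack and release phases. All parameter values are illustrative.
    """
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        # Rising level uses the (fast) attack coefficient, falling the release
        coeff = attack_coeff if level > env else release_coeff
        env = coeff * env + (1.0 - coeff) * level
        if env > threshold:
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(x * gain)
    return out
```

With these placeholder settings, a full-scale constant input of 1.0 settles to 0.5 + 0.5/4 = 0.625, while a quiet 0.1 input below the threshold passes through unchanged.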
The second filter module 3020 is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the modulated signal 2140. In at least one embodiment, the second filter module 3020 is of the same configuration as the first filter module 3010. Specifically, the second filter module 3020 may comprise a second low shelf filter 1150 and a second high shelf filter 1160. In certain embodiments, the second low shelf filter 1150 may be configured to filter signals between 100 Hz and 3000 Hz, with an attenuation of between -5 dB and -20 dB. In certain embodiments, the second high shelf filter 1160 may be configured to filter signals between 100 Hz and 3000 Hz, with a boost of between +5 dB and +20 dB.
The second filter module 3020 may be configured in at least a partially inverse configuration to the first filter module 3010. For instance, the second filter module may use the same frequency, for instance the first frequency, as the first filter module. Further, the second filter module may adjust the gain of content above the first frequency inversely to the gain or attenuation applied by the first filter module. Similarly, the second filter module may also adjust the gain of content below the first frequency inversely to the gain or attenuation applied by the first filter module. In other words, the purpose of the second filter module in one embodiment may be to "undo" the gain adjustment that was applied by the first filter module.
The first processing module 3030 is configured to process a signal, such as the second filtered signal 4020. In at least one embodiment, the first processing module 3030 may comprise a peak/dip module, such as 1180 represented in Figure 7. In other embodiments, the first processing module 3030 may comprise a first gain element 1170. In various embodiments, the processing module 3030 may comprise both a first gain element 1170 and a peak/dip module 1180 for the processing of a signal. The first gain element 1170, in at least one embodiment, may be configured to adjust the level of a signal by a static amount. The first gain element 1170 may comprise an amplifier or a multiplier circuit. In other embodiments, dynamic gain elements may be used. The peak/dip module 1180 is configured to shape the desired output spectrum, such as to increase or decrease overshoots or undershoots in the signal. In some embodiments, the peak/dip module may further be configured to adjust the slope of a signal, for instance a gradual slope that gives a smoother response, or alternatively a steeper slope for more sudden sounds. In at least one embodiment, the peak/dip module 1180 comprises a bank of ten cascaded peaking/dipping filters. The bank of ten cascaded peaking/dipping filters may further comprise second-order filters. In at least one embodiment, the peak/dip module 1180 may comprise an equalizer, such as a parametric or graphic equalizer.
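The disclosure does not specify coefficients for the ten cascaded peaking/dipping filters. As a hedged illustration, one conventional way to derive a single second-order peaking section is the widely used "audio EQ cookbook" formulation; the sample rate, center frequency, Q, and gain below are placeholders, not values from the disclosed embodiments:

```python
import math

def peaking_coeffs(f0, fs, q, gain_db):
    """Audio-EQ-cookbook peaking filter biquad coefficients, normalized
    so that a[0] == 1. One such second-order section would form one
    stage of a cascaded peak/dip bank."""
    A = 10.0 ** (gain_db / 40.0)            # amplitude from dB gain
    w0 = 2.0 * math.pi * f0 / fs            # center frequency in radians
    alpha = math.sin(w0) / (2.0 * q)        # bandwidth parameter
    a0 = 1.0 + alpha / A
    b = [(1.0 + alpha * A) / a0, -2.0 * math.cos(w0) / a0, (1.0 - alpha * A) / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha / A) / a0]
    return b, a
```

At 0 dB gain the numerator and denominator coincide, so the section passes the signal unaltered; a positive gain peaks the response around f0, a negative gain dips it.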
The band splitter 1190 is configured to split a signal, such as the processed signal 4030. In at least one embodiment, the signal is split into a low band signal 2200, a mid band signal 2210, and a high band signal 2220. Each band may be the output of a fourth order section, which may be further realized as the cascade of second order biquad filters. In other embodiments, the band splitter may comprise any combination of circuits appropriate for splitting a signal into three frequency bands. The low, mid, and high bands may be predetermined ranges, or may be dynamically determined based on the frequency itself, i.e. a signal may be split into three even frequency bands, or by percentage. The different bands may further be defined or configured by a user and/or control mechanism.
A low band compressor 1300 is configured to modulate the low band signal 2200, and a high band compressor 1310 is configured to modulate the high band signal 2220. In at least one embodiment, each of the low band compressor 1300 and high band compressor 1310 may be the same as the first compressor 1140. Accordingly, each of the low band compressor 1300 and high band compressor 1310 may each be configured to modulate a signal. Each of the compressors 1300, 1310 may comprise an automatic gain controller, or any combination of circuits appropriate for the dynamic range compression of an audio signal.
A second processing module 3040 is configured to process at least one signal, such as the modulated low band signal 2300, the mid band signal 2210, and the modulated high band signal 2310. Accordingly, the second processing module 3040 may comprise a summing module 1320 configured to combine a plurality of signals. The summing module 1320 may comprise a mixer structured to combine two or more signals into a composite signal. The summing module 1320 may comprise any circuits or combination thereof structured or configured to combine two or more signals. In at least one embodiment, the summing module 1320 comprises individual gain controls for each of the incoming signals, such as the modulated low band signal 2300, the mid band signal 2210, and the modulated high band signal 2310. In at least one embodiment, the second processing module 3040 may further comprise a second gain element 1330. The second gain element 1330, in at least one embodiment, may be the same as the first gain element 1170. The second gain element 1330 may thus comprise an amplifier or multiplier circuit to adjust the signal, such as the combined signal, by a predetermined amount.
The output device 1020 may comprise the left playback module 230 and/or right playback module 230’.
As diagrammatically represented, Figure 8 illustrates a block diagram of one method for processing an audio signal with an audio processor 220, which may in at least one embodiment incorporate the components or combinations thereof from the systems 1000 and/or 3000 referenced above. Each step of the method in Figure 8 as detailed below may also be in the form of a code segment stored on a non-transitory computer readable medium for execution by the audio processor 220.
Accordingly, an input audio signal, such as the amplified signal, is first filtered, as in 5010, with a high pass filter to create a high pass signal. The high pass filter is configured to pass through high frequencies of a signal, such as the input signal, while attenuating lower frequencies. In at least one embodiment, ultra-low frequency content is removed by the high-pass filter. In at least one embodiment, the high pass filter may comprise a fourth-order filter realized as the cascade of two second-order biquad sections. The reason for using a fourth order filter broken into two second order sections is that it allows the filter to retain numerical precision in the presence of finite word length effects, which can happen in both fixed and floating point implementations. An example implementation of such an embodiment may assume a form similar to the following:
Two memory locations are allocated, designated as d(k-1) and d(k-2), with each holding a quantity known as a state variable. For each input sample x(k), a quantity d(k) is calculated using the coefficients a1 and a2: d(k) = x(k) - a1 * d(k-1) - a2 * d(k-2)
The output y(k) is then computed, based on coefficients b0, b1, and b2, according to: y(k) = b0*d(k) + b1*d(k-1) + b2*d(k-2)
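A minimal, self-contained sketch of the recursion just described may assume a form similar to the following (one second-order biquad section; the coefficient values passed in are left to the filter designer and are not specified here):

```python
def biquad(x, b0, b1, b2, a1, a2):
    """Second-order biquad section in direct form II: two state
    variables d(k-1) and d(k-2), five multiplies and four adds
    per sample, matching the operation count described above."""
    d1 = d2 = 0.0  # state variables d(k-1) and d(k-2)
    y = []
    for xk in x:
        dk = xk - a1 * d1 - a2 * d2            # d(k) = x(k) - a1*d(k-1) - a2*d(k-2)
        y.append(b0 * dk + b1 * d1 + b2 * d2)  # y(k) = b0*d(k) + b1*d(k-1) + b2*d(k-2)
        d2, d1 = d1, dk                        # shift the state variables
    return y
```

A fourth-order filter, such as the input high pass filter, would then be realized by passing the signal through two such sections in cascade, which doubles the per-sample operation and memory counts as noted in the text.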
The above computation, comprising five multiplies and four adds, is appropriate for a single channel of a second-order biquad section. Accordingly, because the fourth-order high pass filter is realized as a cascade of two second-order biquad sections, a single channel of the fourth-order input high pass filter would require ten multiplies, four memory locations, and eight adds.
The high pass signal from the high pass filter is then filtered, as in 5020, with a first filter module to create a first filtered signal. The first filter module is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the high pass signal. Accordingly, the first filter module may comprise a second order low shelf filter and a second order high shelf filter in at least one embodiment. In at least one embodiment, the first filter module boosts the content above a first frequency by a certain amount, and attenuates the content below a first frequency by a certain amount, before presenting the signal to a compressor or dynamic range controller. This allows the dynamic range controller to trigger and adjust higher frequency material, whereas it is relatively insensitive to lower frequency material.
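To make the pre-emphasis idea concrete, the sketch below applies a first-order spectral tilt that boosts content above a crossover and attenuates content below it before compression. This is a simplification: the disclosed embodiments use second-order shelf filters, and the gains and smoothing coefficient here are hypothetical:

```python
def tilt(x, gain_hi=2.0, gain_lo=0.5, a=0.3):
    """First-order spectral tilt: split each sample into a low-pass
    estimate and its high-frequency complement, then reweight the
    two parts so the compressor triggers mostly on high content."""
    out, lp = [], 0.0
    for xk in x:
        lp = lp + a * (xk - lp)     # one-pole low-pass estimate
        hi = xk - lp                # complementary high-frequency content
        out.append(gain_lo * lp + gain_hi * hi)
    return out
```

A steady (DC-like) input settles toward the low-band gain, while sudden changes are emphasized by the high-band gain; applying a second tilt with the reciprocal gains after the compressor approximates the "undo" role assigned to the second filter module.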
The first filtered signal from the first filter module is then modulated, as in 5030, with a first compressor. The first compressor may comprise an automatic or dynamic gain controller, or any circuits appropriate for the dynamic compression of an audio signal. Accordingly, the compressor may comprise standard dynamic range compression controls such as threshold, ratio, attack and release. An example implementation of the first compressor may assume a form similar to the following:
The compressor first computes an approximation of the signal level, where att represents attack time; rel represents release time; and invThr represents a precomputed inverse threshold:

temp = abs(x(k))
if temp > level(k-1)
    level(k) = att * (level(k-1) - temp) + temp
else
    level(k) = rel * (level(k-1) - temp) + temp

This level computation is done for each input sample. The product level * invThr, which is the ratio of the signal’s level to the threshold, then determines the next step. If this ratio is less than one, the signal is passed through unaltered. If the ratio exceeds one, a table in memory may provide a constant that is a function of both invThr and level:

if (level * invThr < 1)
    output(k) = x(k)
else
    index = floor(level * invThr)
    if (index > 99)
        index = 99
    gainReduction = table[index]
    output(k) = gainReduction * x(k)
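The level-tracking and table-lookup scheme above may be rendered in runnable form as follows. The attack and release coefficients, the inverse threshold, and the gain-reduction table contents are placeholders, since the disclosure leaves their values open:

```python
def compress(x, att=0.9, rel=0.999, invThr=2.0, table=None):
    """Sample-by-sample compressor following the level/table scheme
    above: a one-pole envelope follower with separate attack and
    release, then a table lookup indexed by level * invThr."""
    if table is None:
        # placeholder gain-reduction table: more reduction at higher indices
        table = [1.0 / (1.0 + 0.05 * i) for i in range(100)]
    level = 0.0
    out = []
    for xk in x:
        temp = abs(xk)
        # attack coefficient when the level is rising, release when falling
        coeff = att if temp > level else rel
        level = coeff * (level - temp) + temp
        if level * invThr < 1.0:
            out.append(xk)  # below threshold: pass through unaltered
        else:
            index = min(int(level * invThr), 99)  # clamp to table size
            out.append(table[index] * xk)
    return out
```

Quiet signals pass through untouched, while loud signals are scaled by the table entry selected by how far the tracked level exceeds the threshold.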
The modulated signal from the first compressor is then filtered, as in 5040, with a second filter module to create a second filtered signal. The second filter module is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the modulated signal. Accordingly, the second filter module may comprise a second order low shelf filter and a second order high shelf filter in at least one embodiment. In at least one embodiment, the second filter module boosts the content above a second frequency by a certain amount, and attenuates the content below a second frequency by a certain amount. In at least one embodiment, the second filter module adjusts the content below the first specified frequency by a fixed amount, inverse to the amount that was removed by the first filter module. By way of example, if the first filter module boosted content above a first frequency by +X dB and attenuated content below a first frequency by -Y dB, the second filter module may then attenuate the content above the first frequency by -X dB, and boost the content below the first frequency by +Y dB. In other words, the purpose of the second filter module in one embodiment may be to “undo” the filtering that was applied by the first filter module.
The second filtered signal from the second filter module is then processed, as in 5050, with a first processing module to create a processed signal. The processing module may comprise a gain element configured to adjust the level of the signal. This adjustment, for instance, may be necessary because the peak-to-average ratio was modified by the first compressor. The processing module may comprise a peak/dip module. The peak/dip module may comprise ten cascaded second-order filters in at least one embodiment. The peak/dip module may be used to shape the desired output spectrum of the signal. In at least one embodiment, the first processing module comprises only the peak/dip module. In other embodiments, the first processing module comprises a gain element followed by a peak/dip module.
The processed signal from the first processing module is then split, as in 5060, with a band splitter into a low band signal, a mid band signal, and a high band signal. The band splitter may comprise any circuit or combination of circuits appropriate for splitting a signal into a plurality of signals of different frequency ranges. In at least one embodiment, the band splitter comprises a fourth-order band-splitting bank. In this embodiment, each of the low band, mid band, and high band are yielded as the output of a fourth-order section, realized as the cascade of second-order biquad filters.
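As a hedged, simplified illustration of three-way band splitting, the sketch below substitutes one-pole filters for the fourth-order biquad cascades described above; the two smoothing coefficients are hypothetical. A complementary split of this kind guarantees that the three bands sum back to the original signal:

```python
def one_pole_lp(x, a):
    """Simple one-pole low-pass, an illustrative stand-in for the
    fourth-order biquad cascades of the disclosed embodiment."""
    y, prev = [], 0.0
    for xk in x:
        prev = prev + a * (xk - prev)
        y.append(prev)
    return y

def split_bands(x, a_low=0.05, a_high=0.6):
    """Complementary three-way split: low is a slow low-pass, high is
    the residue above a fast low-pass, and mid is what lies between.
    By construction low + mid + high reproduces the input exactly."""
    low = one_pole_lp(x, a_low)        # slow filter keeps only lows
    lowmid = one_pole_lp(x, a_high)    # fast filter keeps lows and mids
    high = [xk - lm for xk, lm in zip(x, lowmid)]
    mid = [lm - lo for lm, lo in zip(lowmid, low)]
    return low, mid, high
```

The exact-reconstruction property makes this kind of split convenient when the bands are later recombined by a summing module, since unity weights return the original signal.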
The low band signal is modulated, as in 5070, with a low band compressor to create a modulated low band signal. The low band compressor may be configured and/or computationally identical to the first compressor in at least one embodiment. The high band signal is modulated, as in 5080, with a high band compressor to create a modulated high band signal. The high band compressor may be configured and/or computationally identical to the first compressor in at least one embodiment.
The modulated low band signal, mid band signal, and modulated high band signal are then processed, as in 5090, with a second processing module. The second processing module comprises at least a summing module. The summing module is configured to combine a plurality of signals into one composite signal. In at least one embodiment, the summing module may further comprise individual gain controls for each of the incoming signals, such as the modulated low band signal, the mid band signal, and the modulated high band signal. By way of example, an output of the summing module may be calculated by: out = w0*low + w1*mid + w2*high
The coefficients w0, w1, and w2 represent different gain adjustments. The second processing module may further comprise a second gain element. The second gain element may be the same as the first gain element in at least one embodiment. The second gain element may provide a final gain adjustment. Finally, the second processed signal is transmitted as the output signal.
As diagrammatically represented, Figure 9 illustrates a block diagram of one method for processing an audio signal with an audio processor 220, which may in at least one embodiment incorporate the components or combinations thereof from the systems 1000 and/or 3000 referenced above. Because the individual components of Figure 9 have been discussed in detail above, they will not be discussed here. Further, each step of the method in Figure 9 as detailed below may also be in the form of a code segment directed to at least one embodiment of the present invention, which is stored on a non-transitory computer readable medium, for execution by the audio processor 220 of the present invention.
Accordingly, an input audio signal is first filtered, as in 5010, with a high pass filter. The high pass signal from the high pass filter is then filtered, as in 6010, with a first low shelf filter. The signal from the first low shelf filter is then filtered with a first high shelf filter, as in 6020. The first filtered signal from the first low shelf filter is then modulated with a first compressor, as in 5030. The modulated signal from the first compressor is filtered with a second low shelf filter as in 6110. The signal from the low shelf filter is then filtered with a second high shelf filter, as in 6120. The second filtered signal from the second low shelf filter is then gain-adjusted with a first gain element, as in 6210. The signal from the first gain element is further processed with a peak/dip module, as in 6220. The processed signal from the peak/dip module is then split into a low band signal, a mid band signal, and a high band signal, as in 5060. The low band signal is modulated with a low band compressor, as in 5070. The high band signal is modulated with a high band compressor, as in 5080. The modulated low band signal, mid band signal, and modulated high band signal are then combined with a summing module, as in 6310. The combined signal is then gain adjusted with a second gain element in order to create the output signal, as in 6320.
It should be understood that the above steps may be conducted exclusively or nonexclusively and in any order. Further, the physical devices recited in the methods may comprise any apparatus and/or systems described within this document or known to those skilled in the art.
Since many modifications, variations and changes in detail can be made to the described preferred embodiment of the invention, it is intended that all matters in the foregoing description and shown in the accompanying drawings be interpreted as illustrative and not in a limiting sense. Thus, the scope of the invention should be determined by the appended claims and their legal equivalents.
In one preferred embodiment, Figure 10 illustrates a wearable apparatus for hearing enhancement and protection, capable of generating a head related audio transfer function (HRTF) for a user, comprising at least one in-ear assembly 400. As illustrated in Figure 10, the in-ear assembly 400 is structured to be disposed inside and/or partially outside of at least one of a user’s ears, when in an operative position, or when operatively positioned. One purpose of the in-ear assembly 400 is to capture sound from a user’s external environment in real time, filter the sound through the unique structures formed on and in the in-ear assembly 400 in order to generate audio positional or directional data, process the sound to enhance the quality of the audio positional data, enhance and amplify the sound by means of various preamplifiers, and relay the audio positional data to a user by means of a playback module, speaker, or a variety of other transducers, allowing the user to effectively determine the origination of the sound in three dimensional space.
The in-ear assembly 400 comprises at least one chamber, shell, or chassis, which houses the various structures on the interior of the in-ear assembly 400, and provides exterior surfaces to house the structures that mimic the functions of a human ear for generating a head related audio transfer function (“HRTF”). Drawing attention to the embodiment in Figures 10 and 11, the in-ear assembly 400 comprises at least a primary chamber 403 and a secondary chamber 406. As illustrated in Figure 10, the primary chamber 403 is situated proximally to a user’s ear and the secondary chamber 406 is located distally to a user’s ear when the in-ear assembly 400 is worn by a user.
As illustrated in Figure 11, the exterior, or outside surface, of the secondary chamber 406 of the in-ear assembly 400 will be at least partially open or exposed to the external environment, providing a means for the in-ear assembly 400 to receive sound, captured by a microphone 415. The interior of the in-ear assembly 400 comprises at least a partially sealed off environment that partially or fully obstructs the direct flow of acoustic waves, ensuring that noise interference from the external environment will not impede the quality of the audio input received by the microphone 415. Generally, the microphone 415 will relay the audio input sound to a playback module 230, which will transmit the audio output sound to a user by means of an auditory channel 428 connected to a user’s ear(s) in an operative position. The secondary chamber 406 and the primary chamber 403 may comprise sound dampening or sound proof materials such as, but not limited to, various foams, plastics, and glass. The primary chamber 403 and the secondary chamber 406 can be made out of a hard, strong plastic or a plurality of other materials.
Drawing attention to Figures 11 and 12, the exterior surface of the secondary chamber 406 comprises at least an antihelix structure 101, a tragus structure 102, and a microphone aperture 409. The microphone aperture 409 is in direct air flow communication with the surrounding environment, and as such will receive a flow of acoustic sound waves or vibrations in the air that are filtered and passed through the antihelix structure 101 and the tragus structure 102. The antihelix structure 101 and the tragus structure 102 mimic the function of the external part of the human ear, the pinna, which assist and act as a funnel in directing and filtering the sound or audio input into the microphone aperture 409, through the microphone channel 412, and received into the microphone 415. As noted previously, in one embodiment, the in-ear assembly 400 may also include a preamplifier 210, as schematically illustrated in Figure 3, to amplify the filtered audio input signal, as well as an audio processor 220, also illustrated in Figure 3, to process the amplified signal, and create a processed signal to be received by the playback module 230’, which will communicate the audio and/or locational audio data to the user.
As illustrated in Figures 11 and 12, the tragus structure 102 is disposed to partially enclose the microphone aperture 409, and the antihelix structure 101 is disposed to partially enclose both the tragus structure 102 and the microphone aperture 409. The antihelix structure 101 comprises a partial dome structure having a closed side 105 and an open side 106. The tragus structure 102 may also comprise an at least partial dome structure having a closed side 107 and an open side 108. In a preferred embodiment, the open side 106 of the antihelix structure 101 may be in direct confronting relation to the open side 108 of the tragus structure 102. In a preferred embodiment, the antihelix structure 101 of Figures 11 and 12 comprises a half-dome, while the tragus structure 102 comprises a partial dome wherein the base portion may be less than that of a half-dome, but the top portion may extend to or beyond the halfway point of a half-dome to provide increased coverage or enclosure of the microphone aperture 409 and other structures. Of course, in other variations, the top portion and bottom portion of the partial dome may vary in respective dimensions to form varying portions of a full dome structure, in order to create varying coverage of the microphone aperture 409. This allows the in-ear assembly 400 to produce different or enhanced acoustic input for calculating direction and distance of the source sound relative to the user. The antihelix structure 101 and the tragus structure 102 may be modular, such that different sizes or shapes (variations of different partial domes) may be swapped out based on a user’s preference for particular acoustic characteristics.
In a preferred embodiment, as illustrated in Figures 10-13, a windscreen structure 418 may be disposed on the exterior surface of the secondary chamber 406 of the in-ear assembly 400. The windscreen structure 418 provides a mechanism to reduce unwanted noise and wind interference from the external environment, enhancing and filtering the quality of the incoming sound or audio input signal to be received by the in-ear assembly 400. Drawing attention to Figures 10-13, the exterior surface of the secondary chamber 406 can comprise a plurality of windscreen attachment regions 424/424’ to connect the windscreen structure 418, which comprises a plurality of windscreen connectors 425/425’, providing the ability to attach and remove the windscreen structure on the exterior of the in-ear assembly 400.
As illustrated in Figure 11 and Figure 13, the windscreen structure 418 further comprises or houses an open-cell foam component 421, or a variety of other materials, which will together reduce noise interference from being received by the in-ear assembly 400. As such, in the preferred embodiment as depicted in Figure 11, the windscreen structure 418 comprising the open-cell foam 421 can be disposed to partially or fully cover the antihelix structure 101, the tragus structure 102, and the microphone aperture 409. The windscreen structure 418 can be configured into a variety of shapes. In one embodiment depicted in Figures 10 and 13, the windscreen structure 418 will take on a square shape with rounded edges, with an open-style hexagon-like structure, providing a plurality of open slots, which may vary in number, such as six open slots. The open-cell foam 421 housed within can receive and filter noise disturbances, and transmit a higher quality sound to the antihelix structure 101, the tragus structure 102, the microphone aperture 409, down into the microphone channel 412, and into the microphone 415. The windscreen structure 418 can be made of a variety of materials, including a strong, flexible plastic, which can also provide protection to the underlying structures on the exterior of the in-ear assembly 400.
As illustrated in Figure 11, the windscreen structure 418 comprises windscreen connector structures 425 and 425’, which snap into the windscreen attachment regions 424 and 424’ on the exterior of the secondary chamber 406, and extend inside the secondary chamber 406 of the in-ear assembly 400. The windscreen attachment regions 424 and 424’, and the windscreen connector structures 425 and 425’, are sealed off and physically isolated from the microphone manifold 408, which comprises the microphone aperture 409, the microphone channel 412, the microphone 415, and the microphone housing 416, as well as from the playback module 230 and the other structures inside the in-ear assembly 400. The isolation and sealed environment ensure that noise disturbances are reduced, and do not interfere with the audio input of the sound received by the microphone 415, or with the output of sound transmitted by the playback module 230 to the user. Additionally, the windscreen structure 418 can be removed, allowing a user to replace the open-cell foam 421 with substitute materials as desired. Similarly, as depicted in Figure 12, the antihelix structure 101 and the tragus structure 102 on the exterior of the secondary chamber 406 of the in-ear assembly 400 can be removed and swapped out with different sizes and shapes of the antihelix structure 101 and tragus structure 102 to provide a user with different acoustic characteristics as desired.
Drawing attention now to Figure 11, a microphone manifold 408 is an independent structure embedded within the in-ear assembly 400, comprising at least the microphone aperture 409, the microphone channel 412, the microphone 415, and the microphone housing 416. The microphone manifold 408 may reside wholly within the secondary chamber 406, or may also extend into the primary chamber 403. The microphone aperture 409 is exposed to the external environment, providing a means of receiving a sound signal or audio input, and is connected to, and in air flow communication with, the microphone channel 412. The microphone channel 412 comprises a length that is at least two times its diameter. In one embodiment, the microphone channel 412 comprises a length that is three times its diameter. The microphone channel 412 is connected to the microphone 415, providing a means of communicating the sound signals and audio input received from the external environment to the microphone 415, which may be housed in a microphone housing 416. The microphone manifold 408 isolates the microphone channel 412 and the microphone 415 within the interior of the in-ear assembly 400, ensuring that the microphone 415 receives undisturbed sound and acoustic signals that funnel at least through the microphone aperture 409. As noted, the microphone 415 can also be housed within a microphone housing 416, further isolating the microphone 415 within the interior of the in-ear assembly 400.
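As an illustrative aside not drawn from the disclosure, a narrow channel of this kind behaves acoustically like a tube closed at the microphone end and open at the aperture, and its fundamental quarter-wave resonance may be estimated as:

```python
def quarter_wave_resonance_hz(length_m, speed_of_sound_m_s=343.0):
    """Fundamental resonance of a tube open at one end and closed at
    the other: f = c / (4 * L), with c the speed of sound in air."""
    return speed_of_sound_m_s / (4.0 * length_m)
```

For example, a hypothetical 5 mm channel would resonate near 17 kHz under this approximation, which is one reason the channel's length-to-diameter proportions matter to the acoustic response seen by the microphone 415.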
The microphone channel 412 can be disposed in a substantially parallel orientation relative to the desired listening direction 104 of the user when the in-ear assembly 400 is worn by a user, generally illustrated in Figure 10. In other embodiments, the microphone channel 412 can be disposed in a substantially perpendicular orientation relative to the listening direction 104 of the user. Similarly, the microphone 415 can be disposed in a substantially parallel orientation relative to the desired listening direction 104 of the user, or in a substantially perpendicular orientation when the in-ear assembly is worn by a user. However, the microphone channel 412 and microphone 415 can be disposed in various orientations, independent of the listening direction 104 of the user. The microphone 415 may be mounted flush on an end of the microphone manifold 408. In a preferred embodiment, an air cavity or gap 417 is situated between the microphone 415 and an end of the microphone manifold 408. Different gasses having different acoustic characteristics may be used within the air cavity.
Drawing attention to Figure 11, the microphone 415 can be connected directly to the playback module 230, or speaker, housed within the primary chamber 403, or more generally in the interior of the in-ear assembly 400. The microphone 415 may be connected to the playback module 230 by means of a connective wire 430, or by a variety of means to allow communication between the microphone 415 and the playback module 230. The microphone 415 receives audio input from the external environment, which is communicated to the playback module 230, converting the audio input into a sound or audio output that is relayed through the auditory channel 428, connected to an ear of the user, allowing the user to effectively determine the origination of the sound in three dimensional space.
Drawing further attention to Figures 11 and 14, an isolation baffle 431 physically isolates the microphone 415 from the playback module 230 in order to prevent feedback noise during operation of the in-ear assembly 400. The isolation baffle 431 can achieve a 30 decibel or greater noise isolation between the microphone 415 and the playback module 230. The isolation baffle 431 ensures that the sound pressure or output of the playback module will not interfere with the microphone’s 415 ability to effectively receive undisturbed sound input from the environment. The isolation baffle 431 thus allows a user to effectively receive undisturbed sound output from the playback module 230, allowing the user to effectively pinpoint the origination of sound from the external environment. As illustrated in Figures 11 and 14, the isolation baffle 431 can comprise a single piece of a strong, flexible plastic. The isolation baffle 431 may traverse the length and width of the in-ear assembly 400, and connect to the inside surface of the top of the in-ear assembly 400, or specifically the inside surface of the secondary chamber 406. The isolation baffle 431 also comprises an isolation post 434 that connects to a cylindrical structure 435 attached to the primary chamber 403. In other embodiments, the isolation baffle 431 may comprise interconnecting units of a variety of materials to achieve the desired isolation between the microphone 415 and the playback module 230. The playback module 230 resides in the primary chamber 403 of the in-ear assembly 400. The playback module 230 is connected to an auditory channel 428, which resides in a user’s ear, in the operative position, to communicate the audio output to the user.
The playback module 230 converts the electrical audio input signal received from the microphone 415 and various structures, such as the preamplifier 210 and the audio processor 220, producing audio output data, which travels through the auditory channel 428 to the user. There may also be an air cavity 417’ between the playback module 230 and the isolation baffle 431, providing the playback module with ample room to vibrate and produce different acoustic outputs.
Furthermore, as illustrated in Figures 10 and 15, a stabilizer assembly 437 can be attached to the exterior of the in-ear assembly 400, or the exterior of the primary chamber 403 of the in-ear assembly, to stabilize the in-ear assembly 400 and the various structures in the proper orientation when in the user’s ear, the operative position, as represented in Figure 10. The stabilizer assembly 437, for example, ensures that the antihelix structure 101, tragus structure 102, and the other structures on the exterior of the secondary chamber 406 of the in-ear assembly 400 are facing the listening direction 104 of the user. In one preferred embodiment, the stabilizer assembly 437 provides the support to keep the microphone manifold 408 in a substantially parallel direction to the listening direction 104 of the user. As illustrated in Figure 15, the stabilizer assembly 437 comprises a circular collar structure 440, which in the preferred embodiment is attached to an exterior portion of the primary chamber 403, and a concha-shaped structure 443 connected to the circular collar structure 440, that is situated comfortably within the outside portion of a user’s ear. The stabilizer assembly 437 properly fixes the in-ear assembly 400 on a user’s ear and restricts movement of the in-ear assembly to facilitate proper orientation.
In a preferred embodiment, the at least one in-ear assembly 400 also comprises the previously mentioned preamplifier 210 and audio processor 220, as schematically illustrated in Figure 3. The preamplifier 210 can enhance the sound filtered through the in-ear assembly, enhancing certain acoustic characteristics to improve locational accuracy, or to further filter out unwanted noise. The preamplifier 210 may comprise an electronic amplifier, such as a voltage amplifier, current amplifier, transconductance amplifier, transresistance amplifier and/or any combination of circuits known to those skilled in the art for increasing or decreasing the gain of a sound or input signal. In at least one embodiment, the preamplifier comprises a microphone preamplifier configured to prepare a microphone signal to be processed by other processing modules. As is known in the art, microphone signals are sometimes too weak to be transmitted with adequate quality to other units, such as recording or playback devices. A microphone preamplifier thus increases a microphone signal to the line level by providing stable gain while preventing induced noise that might otherwise distort the signal.
The audio processor 220 may comprise a digital signal processor and amplifier, and may further comprise a volume control. The audio processor 220 may comprise a processor and a combination of circuits structured to further enhance the audio quality of the signal coming from the microphone preamplifier, such as, but not limited to, shelf filters, equalizers, and modulators. For example, in at least one embodiment the audio processor 220 may comprise a processor that performs the steps for processing a signal as taught by the present inventor’s US Patent No. 8,160,274, the entire disclosure of which is incorporated herein by reference. The audio processor 220 may incorporate various acoustic profiles customized for a user and/or for an environment, such as those described in the present inventor’s US Patent No. 8,565,449, the entire disclosure of which is incorporated herein by reference. The audio processor 220 may additionally incorporate processing suitable for high noise environments, such as those described in the present inventor’s US Patent No. 8,462,963, the entire disclosure of which is incorporated herein by reference. Parameters of the audio processor 220 may be controlled and modified by a user via any means known to one skilled in the art, such as a direct interface or a wireless communication interface.
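A shelf filter of the kind the audio processor 220 might apply can be illustrated with a standard biquad low-shelf design. The formulas follow the well-known audio-EQ-cookbook conventions; the sample rate, corner frequency, and gain are illustrative assumptions and do not represent the patented processing of US 8,160,274:

```python
import math

def low_shelf_coeffs(fs, f0, gain_db, S=1.0):
    """Compute biquad low-shelf coefficients (audio-EQ-cookbook form).
    fs: sample rate in Hz, f0: shelf corner in Hz, gain_db: shelf gain.
    Returns normalized (b, a) with a[0] == 1.0."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    c = math.cos(w0)
    b0 = A * ((A + 1) - (A - 1) * c + 2 * math.sqrt(A) * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * c)
    b2 = A * ((A + 1) - (A - 1) * c - 2 * math.sqrt(A) * alpha)
    a0 = (A + 1) + (A - 1) * c + 2 * math.sqrt(A) * alpha
    a1 = -2 * ((A - 1) + (A + 1) * c)
    a2 = (A + 1) + (A - 1) * c - 2 * math.sqrt(A) * alpha
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def biquad(x, b, a):
    """Run a direct-form I biquad over a list of samples."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

# A +6 dB low shelf at 200 Hz for a 48 kHz stream (assumed values)
b, a = low_shelf_coeffs(48000, 200.0, 6.0)
```

A useful sanity check on such a design is its DC gain, (b0+b1+b2)/(1+a1+a2), which for a low shelf should equal the requested shelf gain converted to a linear ratio.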
In another embodiment, as illustrated in Figure 16, the at least one in-ear assembly 400 may form part of a larger wearable apparatus 500. The apparatus 500 comprises a left in-ear bud assembly 400, a right in-ear bud assembly 400’, and an interconnecting member 502. A connective wire 501 can connect the left in-ear bud assembly 400 to the interconnecting member 502, and a connective wire 501’ can connect the right in-ear bud assembly 400’ to the interconnecting member 502. The interconnecting member 502 may comprise various components, including but not limited to the preamplifiers 210/210’, an audio processor 220, playback modules 230/230’, and other appropriate circuits or combinations thereof for receiving, transmitting, enhancing, and reproducing sound. The interconnecting member 502, as illustrated in Figure 17A, can comprise a flexible back section 504 that wraps around or extends into a first side section 506 and a second side section 506’, and may be worn by a user around his or her neck. Drawing attention to Figure 17B, the interconnecting member 502 can comprise a volume control function 509 to enhance or reduce the volume level received from the playback module 230, or to reduce the audio input received from the microphone 415. Additionally, the interconnecting member 502 can comprise a call microphone function 512, providing a user the ability to make and receive calls without removing the wearable apparatus 500. The interconnecting member 502 can also comprise a mute mode function 515 to prevent the transmission of audio output from the playback modules 230/230’. The interconnecting member 502 also comprises a removable battery 518, illustrated in Figure 17A, capable of charging the apparatus. The interconnecting member 502 can be connected to the in-ear bud assemblies 400 and 400’ by means of a connective wire, as illustrated in Figure 16, or a wireless connection, such as Bluetooth.
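The volume control 509 and mute mode 515 carried on the interconnecting member 502 amount to a gain stage and a gating stage in the playback path. The following minimal sketch assumes a normalized sample stream; the class and method names are hypothetical, not taken from this disclosure:

```python
class InterconnectControls:
    """Sketch of the interconnecting member's user controls applied
    to playback samples: a clamped volume gain and a mute gate."""

    def __init__(self):
        self.volume = 1.0   # linear playback gain, 0.0-1.0
        self.muted = False  # mute mode blocks all playback output

    def set_volume(self, level):
        # Clamp so the control can only attenuate or pass unity gain
        self.volume = max(0.0, min(1.0, level))

    def process(self, samples):
        if self.muted:
            return [0.0] * len(samples)  # mute: no output transmitted
        return [s * self.volume for s in samples]
```

A call-microphone path would typically add a second input stream mixed ahead of this stage; that is omitted here to keep the control logic isolated.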
Since many modifications, variations and changes in detail can be made to the described preferred embodiment of the invention, it is intended that all matters in the foregoing description and shown in the accompanying drawings be interpreted as illustrative and not in a limiting sense. Thus, the scope of the invention should be determined by the appended claims and their legal equivalents.

Claims

What is claimed is:
1. A wearable apparatus for hearing enhancement and protection capable of generating a head related audio transfer function for a user, said wearable apparatus comprising: at least one in-ear assembly disposed in an operative position comprising: a tragus structure on an exterior surface of said at least one in-ear assembly, an antihelix structure on said exterior surface of said at least one in-ear assembly, a microphone aperture on said exterior surface of said at least one in-ear assembly, said microphone aperture in air flow communication with an external environment, a microphone channel on an interior of said at least one in-ear assembly, said microphone channel in air flow communication with said microphone aperture, a microphone attached to an end of said microphone channel, a playback module connected to said microphone, an isolation baffle disposed to isolate said microphone from said playback module, an auditory channel connected to said playback module, said auditory channel disposed in communication with a user’s ear, when in said operative position, and a preamplifier configured to receive an audio signal, an audio processor configured to receive an amplified signal, and said playback module configured to receive a processed signal.
2. The wearable apparatus as recited in claim 1 further comprising a windscreen structure on said exterior surface of said at least one in-ear assembly.
3. The wearable apparatus as recited in claim 2 wherein said windscreen structure is disposed to partially enclose said microphone aperture, said tragus structure, and said antihelix structure.
4. The wearable apparatus as recited in claim 1 wherein said tragus structure is disposed to partially enclose said microphone aperture.
5. The wearable apparatus as recited in claim 1 wherein said antihelix structure is disposed to partially enclose said tragus structure and said microphone aperture.
6. The wearable apparatus as recited in claim 1 wherein said microphone channel comprises a length that is at least two times its diameter.
7. The wearable apparatus as recited in claim 1 wherein said isolation baffle achieves at least a 30 decibel noise isolation between said microphone and said playback module.
8. The wearable apparatus as recited in claim 1 wherein said microphone channel and said microphone are in a substantially parallel orientation relative to a listening direction of said user.
9. The wearable apparatus as recited in claim 1 further comprising a stabilizer assembly connected to said exterior of said at least one in-ear assembly.
10. The wearable apparatus as recited in claim 9 wherein said stabilizer assembly comprises a circular collar attached to said exterior of said at least one in-ear assembly, and a concha-shaped structure attached to said circular collar and structured for disposition on a user’s ear when in said operative position.
11. A wearable apparatus for hearing enhancement and protection capable of generating a head related audio transfer function for a user, said wearable apparatus comprising: a left in-ear assembly and a right in-ear assembly disposable in an operative position, said left in-ear assembly and said right in-ear assembly each comprising a primary chamber and a secondary chamber, said primary chamber disposed proximal to a user’s ear and said secondary chamber disposed distal to a user’s ear, when in said operative position, said secondary chamber comprises: a microphone aperture on an exterior surface of said secondary chamber, said microphone aperture in air flow communication with an external environment; a tragus structure on said exterior surface of said secondary chamber, said tragus structure disposed to partially enclose said microphone aperture, an antihelix structure on said exterior surface of said secondary chamber, said antihelix structure disposed to partially enclose said tragus structure and said microphone aperture, a microphone channel on an interior of said secondary chamber, said microphone channel in air flow communication with said microphone aperture, a microphone disposed within an end of said microphone channel, said primary chamber comprises: a playback module connected to said microphone, an auditory channel connected to said playback module, said auditory channel disposed in communication with a user’s ear, when in said operative position, an isolation baffle disposed to isolate said microphone from said playback module, and a preamplifier configured to receive an audio signal, an audio processor configured to receive an amplified signal, and said playback module configured to receive a processed signal.
12. The wearable apparatus as recited in claim 11 further comprising a windscreen structure on said exterior surface of said secondary chamber.
13. The wearable apparatus as recited in claim 12 wherein said windscreen structure is disposed to partially enclose said microphone aperture, said tragus structure, and said antihelix structure.
14. The wearable apparatus as recited in claim 11 wherein said microphone channel comprises a length that is at least two times its diameter.
15. The wearable apparatus as recited in claim 11 wherein said isolation baffle achieves at least a 30 decibel noise isolation between said microphone and said playback module.
16. The wearable apparatus as recited in claim 11 wherein said microphone channel and said microphone are in a substantially parallel orientation relative to a listening direction of said user.
17. The wearable apparatus as recited in claim 11 further comprising a stabilizer assembly, said stabilizer comprising a circular collar connected to said exterior of said primary chamber, and a concha-shaped structure disposed on a user’s ear, when in said operative position.
18. A wearable apparatus for hearing enhancement and protection capable of generating a head related audio transfer function for a user, said wearable apparatus comprising: at least one in-ear assembly disposable in an operative position, said at least one in-ear assembly comprising a primary chamber and a secondary chamber, said primary chamber disposed proximal to a user’s ear and said secondary chamber disposed distal to a user’s ear, when in said operative position, and an interconnecting member connected to said at least one in-ear assembly, said secondary chamber including: a microphone aperture on an exterior surface of said secondary chamber, said microphone aperture in air flow communication with the external environment; a tragus structure on said exterior surface of said secondary chamber, said tragus structure disposed to partially enclose said microphone aperture, an antihelix structure on said exterior surface of said secondary chamber, said antihelix structure disposed to partially enclose said tragus structure and said microphone aperture, a microphone channel on an interior of said secondary chamber, said microphone channel in air flow communication with said microphone aperture, a microphone attached to an end of said microphone channel, an isolation baffle disposed to isolate said microphone from a playback module; said primary chamber including: a playback module connected to said microphone, said playback module isolated from said microphone, an auditory channel connected to said playback module, said auditory channel disposed in communication with a user’s ear, when in an operative position, a stabilizer assembly, said stabilizer comprising a circular collar connected to said exterior of said primary chamber, and a concha-shaped structure attached to said circular collar and structured for disposition on a user’s ear, when in an operative position, said interconnecting member including: a flexible back section connected to a side section on one end and a side section on a second end, said interconnecting member connected to said at least one in-ear assembly, at least one audio processor configured to receive an audio signal from said at least one in-ear assembly, and at least one preamplifier configured to receive said audio signal, said audio processor further configured to receive an amplified signal, and said playback module configured to receive a processed signal.
19. The wearable apparatus as recited in claim 18 wherein said microphone channel and said microphone are in a substantially parallel orientation relative to a listening direction of a user.
20. The wearable apparatus as recited in claim 18 wherein said isolation baffle achieves at least a 30 decibel noise isolation between said microphone and said playback module.
21. The wearable apparatus as recited in claim 18 further comprising a windscreen structure on said exterior surface of said at least one in-ear assembly, disposed to partially enclose said microphone aperture, said tragus structure, and said antihelix structure.
PCT/US2020/065315 2019-12-16 2020-12-16 System, method, and apparatus for generating and digitally processing a head related audio transfer function WO2021126981A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080096632.6A CN115104323A (en) 2019-12-16 2020-12-16 System, method and apparatus for generating and digitally processing head related audio transfer functions

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962948409P 2019-12-16 2019-12-16
US62/948,409 2019-12-16
US16/917,001 US11202161B2 (en) 2006-02-07 2020-06-30 System, method, and apparatus for generating and digitally processing a head related audio transfer function
US16/917,001 2020-06-30

Publications (1)

Publication Number Publication Date
WO2021126981A1 true WO2021126981A1 (en) 2021-06-24

Family

ID=76476791

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/065315 WO2021126981A1 (en) 2019-12-16 2020-12-16 System, method, and apparatus for generating and digitally processing a head related audio transfer function

Country Status (2)

Country Link
CN (1) CN115104323A (en)
WO (1) WO2021126981A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11202161B2 (en) 2006-02-07 2021-12-14 Bongiovi Acoustics Llc System, method, and apparatus for generating and digitally processing a head related audio transfer function
US11211043B2 (en) 2018-04-11 2021-12-28 Bongiovi Acoustics Llc Audio enhanced hearing protection system
US11418881B2 (en) 2013-10-22 2022-08-16 Bongiovi Acoustics Llc System and method for digital signal processing
US11425499B2 (en) 2006-02-07 2022-08-23 Bongiovi Acoustics Llc System and method for digital signal processing
US11431312B2 (en) 2004-08-10 2022-08-30 Bongiovi Acoustics Llc System and method for digital signal processing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100008530A1 (en) * 2004-01-07 2010-01-14 Logitech Europe S.A. Personal audio-set with adjustable sliding ear clip mount
US20170195802A1 (en) * 2016-01-05 2017-07-06 Bose Corporation Binaural Hearing Assistance Operation
US20180213343A1 (en) * 2006-02-07 2018-07-26 Ryan J. Copt System, method, and apparatus for generating and digitally processing a head related audio transfer function


Also Published As

Publication number Publication date
CN115104323A (en) 2022-09-23

Similar Documents

Publication Publication Date Title
US11202161B2 (en) System, method, and apparatus for generating and digitally processing a head related audio transfer function
US10701505B2 (en) System, method, and apparatus for generating and digitally processing a head related audio transfer function
US10959035B2 (en) System, method, and apparatus for generating and digitally processing a head related audio transfer function
US9615189B2 (en) Artificial ear apparatus and associated methods for generating a head related audio transfer function
WO2021126981A1 (en) System, method, and apparatus for generating and digitally processing a head related audio transfer function
EP2250822B1 (en) A sound system and a method for providing sound
US8855343B2 (en) Method and device to maintain audio content level reproduction
JP6069830B2 (en) Ear hole mounting type sound collecting device, signal processing device, and sound collecting method
US20080118078A1 (en) Acoustic system, acoustic apparatus, and optimum sound field generation method
EP3468228B1 (en) Binaural hearing system with localization of sound sources
CN107039029B (en) Sound reproduction with active noise control in a helmet
US11405723B2 (en) Method and apparatus for processing an audio signal based on equalization filter
CN112995825A (en) Sound output device
WO2004016037A1 (en) Method of increasing speech intelligibility and device therefor
US20090154738A1 (en) Mixable earphone-microphone device with sound attenuation
CN112866864A (en) Environment sound hearing method and device, computer equipment and earphone
EP3840402B1 (en) Wearable electronic device with low frequency noise reduction
RU2797339C1 (en) Audio output device
US20230283970A1 (en) Method for operating a hearing device
EP4207804A1 (en) Headphone arrangement
JP2019087869A (en) Sound output device
US20230169948A1 (en) Signal processing device, signal processing program, and signal processing method
Liski Adaptive hear-through headset
WO2022250854A1 (en) Wearable hearing assist device with sound pressure level shifting
Horiuchi et al. Adaptive estimation of transfer functions for sound localization using stereo earphone-microphone combination

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20903420; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20903420; Country of ref document: EP; Kind code of ref document: A1)