US20210141595A1 - Calibration Method for Customizable Personal Sound Delivery Systems - Google Patents

Calibration Method for Customizable Personal Sound Delivery Systems

Info

Publication number
US20210141595A1
Authority
US
United States
Prior art keywords
user
processor
delivery system
sounds
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/155,465
Inventor
Christopher Arnold Jeffery
James Alexander Fielding
Alex John Afflick
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Audeara Pty Ltd
Original Assignee
Audeara Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2015902532A0
Priority claimed from US 15/196,256 (published as US 2017/0046120 A1)
Application filed by Audeara Pty Ltd
Priority to US 17/155,465
Assigned to AUDEARA PTY. LTD. by nunc pro tunc assignment (see document for details). Assignors: AFFLICK, ALEX JOHN; FIELDING, JAMES ALEXANDER; JEFFERY, CHRISTOPHER ARNOLD
Publication of US20210141595A1
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/12Audiometering
    • A61B5/121Audiometering evaluating hearing capacity
    • A61B5/123Audiometering evaluating hearing capacity subjective methods
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6887Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
    • A61B5/6898Portable consumer electronic devices, e.g. music players, telephones, tablet computers
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/74Details of notification to user or communication with user or patient ; user input means
    • A61B5/7475User input or interface means, e.g. keyboard, pointing device, joystick
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/04Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2505/00Evaluating, monitoring or diagnosing in the context of a particular type of medical care
    • A61B2505/09Rehabilitation or training
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/002Monitoring the patient using a local or closed circuit, e.g. in a room or building
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/486Bio-feedback
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2205/00Details of stereophonic arrangements covered by H04R5/00 but not provided for in any of its subgroups
    • H04R2205/041Adaptation of stereophonic signal reproduction for the hearing impaired
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07Applications of wireless loudspeakers or wireless microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/04Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/033Headphones for stereophonic communication

Definitions

  • the present invention relates to a calibration method for sound delivery systems of the kind which involve audio transducers such as headphones, ear plugs or in some circumstances bone conduction transducers and which can be customized by a user to take into account the user's auditory response, and to sound delivery systems subject to the calibration method of the invention.
  • a method for calibrating a sound delivery system having a processing assembly, a data communications assembly coupled to the processing assembly, and at least one audio transducer mounted with at least one processor of the processing assembly and responsive thereto for delivering sound to a user, the method including the steps of:
  • the transmitting step involves use of wireless transmission employing a local or near field communications standard, such as Wi-Fi or Bluetooth™.
  • the user interface device suitably comprises a portable computational device, such as a smartwatch, smartphone, tablet or laptop computer.
  • test sounds or tones include a sequence of discrete sounds of different frequencies and sound pressure levels (SPL) within each frequency, suitably covering a typical range of human hearing.
  • the test sounds are in the range of frequencies from 10 Hz to 30 kHz, suitably 20 Hz to 20 kHz, most preferably including 100 Hz, 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz, 8 kHz and 16 kHz, and of sound pressure level (SPL) ranging from −10 dB to 120 dB, suitably 0 dB to 110 dB, within each discrete sound frequency.
  • Each of the discrete sounds in the sequence is desirably of equal duration and suitably spaced apart from adjacent sounds by periods of silence.
  • the sound duration is in a range from 0.1 milliseconds to 5 seconds, suitably 100 milliseconds to 1 second and the intervening silence period is in a range from 0.1 milliseconds to 5 seconds, suitably 100 milliseconds to 1 second.
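  • As an illustration only, the sequence of discrete test tones described above can be generated programmatically. The following Python sketch builds (frequency, SPL, duration, gap) tuples for the preferred frequencies; the 10 dB step, 0.5 s tone duration and 0.5 s silence gap are assumptions chosen from the stated ranges rather than values specified in the patent.

```python
# Sketch only: build the calibration test-tone sequence described above.
# The 10 dB step, tone duration and silence gap are illustrative values
# chosen from the ranges given in the text, not values from the patent.

TEST_FREQUENCIES_HZ = [100, 250, 500, 1000, 2000, 4000, 8000, 16000]
SPL_LEVELS_DB = range(0, 111, 10)   # 0 dB to 110 dB in assumed 10 dB steps
TONE_DURATION_S = 0.5               # within the stated 100 ms to 1 s range
SILENCE_GAP_S = 0.5                 # within the stated 100 ms to 1 s range

def build_test_sequence():
    """Return (frequency_hz, spl_db, duration_s, gap_s) for every discrete
    test sound, stepping through all SPL levels within each frequency."""
    return [(freq, spl, TONE_DURATION_S, SILENCE_GAP_S)
            for freq in TEST_FREQUENCIES_HZ
            for spl in SPL_LEVELS_DB]

if __name__ == "__main__":
    sequence = build_test_sequence()
    print(len(sequence), "test tones, first:", sequence[0])
```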
  • the storing step involves storing the test sound mapping in a code base utilized by an audio application interface of the sound delivery system.
  • the sound delivery system includes a non-volatile electronic memory arranged to store the code base.
  • the code base is stored remotely in a database and associated with an interface application for the sound delivery system, for down-loading with the interface application on request.
  • the sound delivery system may be an audiological testing apparatus, such as a hearing aid, set of headphones, or other head-mountable hearing apparatus incorporating an audio transducer.
  • a sound delivery system including: a processing assembly including at least one processor and an electronic memory; a user interface coupled to the at least one processing assembly; at least one audio transducer responsive to the processing assembly for delivering sound to a user; and the electronic memory accessible by the at least one processor and storing: instructions for the processor to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via said audio transducer; a code base utilised by an audio application interface of the sound delivery system; wherein the sounds delivered via the transducer for determining the compensatory weights are generated by a transducer processor mounted within a transducer portion which includes the at least one audio transducer; and wherein the sound delivery system is calibrated in accordance with the method set out above.
  • the processing assembly is mounted with the at least one audio transducer; suitably in the form of a set of headphones including a pair of speakers.
  • a sound delivery system including: at least one processing assembly; an interface coupled to the at least one processing assembly; and at least one audio transducer responsive to the at least one processing assembly for delivering sound to a user; wherein the at least one processing assembly is arranged to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via said audio transducer.
  • the sound delivery system comprises an interface portion which includes the user interface and a transducer portion which includes the at least one audio transducer, wherein the first interface portion and the transducer portion include corresponding data communication assemblies for data communication there between.
  • the at least one processing assembly includes: at least one interface processor that is mounted within the interface portion and coupled to the user interface; and at least one transducer processor that is mounted within the transducer portion and arranged to process sound signals for delivery as sound by said audio transducer.
  • the data communication assemblies are arranged for wireless data communication.
  • the data communication assemblies may be arranged to implement data communication according to the Bluetooth standard.
  • the interface portion comprises a smartphone though it could alternatively be a tablet, laptop or desktop computer, for example.
  • a sound delivery system including: at least one processing assembly; an interface coupled to the at least one processing assembly; at least one audio transducer responsive to the at least one processing assembly for delivering sound to a user; and an electronic memory accessible by the at least one processing assembly storing: instructions for the processor to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via said audio transducer.
  • an automatic audiological testing apparatus including: a processing assembly having at least one processor; an electronic memory in communication with the processing assembly and containing instructions for execution by said at least one processor; a user interface in communication with the processing assembly; and at least one audio transducer mounted with the processing assembly and responsive to the at least one processor for delivering sound to a user; wherein the electronic memory stores instructions for the processor to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds at a number of different frequencies wherein the sounds delivered via the transducer for determining the compensatory weights are generated by a transducer processor mounted within a transducer portion which includes the at least one audio transducer; and wherein the audiological testing apparatus is calibrated in accordance with the method set out above.
  • a set of headphones including right and left loudspeakers for delivery of sounds to a user: at least one processor configured to receive gain adjustment weights for the user for each of a number of predetermined frequencies; wherein the processor is arranged to convert an audio signal into the frequency domain, apply the gain adjustment weights to the audio signal in the frequency domain and convert the adjusted audio signal back into the time domain for delivery of an adjusted audio signal to the user via the loudspeakers.
  • a method for sound delivery to a user including: presenting sounds of different frequencies and prompts to a user in order to determine an audiological model of the user comprising a set of gain adjustment weights for each of the different frequencies; and adjusting audio signals according to the adjustment weights to thereby deliver adjusted audio signals to the user to compensate for hearing deficiencies of the user.
  • the method includes facilitating adjustment of the weights by the user to introduce frequency equalization parameters selected by the user for each of a number of frequency bands.
  • a sound delivery system that includes a processing assembly with a user interface coupled thereto. At least one audio transducer is provided for delivering sound to a user, which is responsive to the processing assembly. Typically the audio transducer is a loudspeaker of a pair of headphones or earbuds, though it may also be a bone conduction transducer.
  • the at least one processing assembly is arranged to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via the audio transducer.
  • FIG. 1 is a high level diagram of a sound delivery system according to a preferred embodiment of a first aspect of the present invention, in use;
  • FIG. 2 is a block diagram of electronic circuitry of a transducer portion of the sound delivery system
  • FIGS. 3A-3D are a first portion of a circuit schematic generally corresponding to the block diagram of FIG. 2 ;
  • FIGS. 4A-4B are a second portion of the circuit schematic of FIGS. 3A-3D;
  • FIG. 5 is a high level block diagram of a user interface portion, in the form of a smartphone, of the sound delivery system.
  • FIGS. 6 to 10 are screen shots of screens presented to a user by the smartphone
  • FIG. 11 is a block diagram illustrating a modelling method in accordance with an embodiment of the present invention.
  • FIGS. 12 and 13 are screen shots of screens presented to a user by the smartphone
  • FIG. 14 comprises three frequency domain spectrograms. At left is the hearing response spectrogram of a person with normal hearing to a test audio signal. In the middle there is the hearing response spectrogram of a person with deteriorated hearing response in high frequency bands to the test audio signal. At right is the perceived audio response to the test signal subsequent to the test signal being gain adjusted to compensate for the high frequency loss;
  • FIG. 15 is a flowchart of the steps performed by the sound delivery system during delivery of audio to a user
  • FIG. 16 is a schematic diagram showing the equipment employed in a calibration method of another aspect of the present invention.
  • FIG. 17 is a table illustrating an example of results obtained from the calibration method of the embodiment.
  • FIG. 18 is a flowchart of steps in a method for carrying out the calibration method employing the components illustrated in FIG. 16 to produce the results tabulated in FIG. 17 .
  • the sound delivery system 1 is comprised of two major portions.
  • a first portion comprises a smartphone 5 , or other computational device such as a laptop, desktop or tablet computer.
  • the smartphone 5 is in data communication with a second portion of the sound delivery system being a transducer portion, which in the present embodiment comprises headphones 7 though it might equally be a set of earbuds or some other sound delivery apparatus.
  • the data communication between the smartphone 5 and the headphones 7 is by Bluetooth wireless in the presently described embodiment though of course it could be established otherwise, for example through a wired connection or by other wireless protocols.
  • FIG. 2 is a high level block diagram of the electronic circuitry that is contained within the headphones 7 .
  • the circuitry includes a communications port 9 in the form of a Bluetooth port for communicating with the smartphone 5 .
  • a processor in the form of a field programmable gate array 11 is coupled to the Bluetooth port 9.
  • the FPGA 11 is configured by uploading data from the smartphone 5 to apply "weights", i.e. gain adjustment parameters, for different frequencies to an audio signal that it receives from the smartphone.
  • An output side of the FPGA 11 is coupled to a digital to analogue converter (DAC) 13 .
  • the DAC converts the digital audio signal from the FPGA into right and left stereo analogue signals which are applied via pre-amplifiers 15 a , 15 b , through noise cancelling modules 17 a , 17 b to output amplifiers 19 a , 19 b .
  • the output amplifiers 19 a, 19 b drive electrical signals to the vibration transducers 21 a, 21 b.
  • the transducers 21 a , 21 b are typically loudspeakers though they could alternatively be bone conduction transducers.
  • FIGS. 3A-3D are a first part of a circuit schematic corresponding to the block diagram of FIG. 2 and showing the Bluetooth port 9, FPGA 11 and DAC 13.
  • FIGS. 3A-3D also show programmable flash components 23 and 25 which are used to configure the FPGA, for example to set the frequency gain adjustment weights that the FPGA will apply to an audio signal in use.
  • the FPGA 11 is a Cyclone IV EP4CE40F integrated circuit that is manufactured by Altera Corporation and which is configured to perform Fast Fourier Transforms on audio signals received via the Bluetooth port, apply the gain weights in the frequency domain and then perform an Inverse Fast Fourier Transform to convert the digital signal back to the time domain.
  • FIGS. 3A-3D also show a clock module 27 and a power supply chip 29 for applying power to the various components. All of these components are readily available commercially.
  • FIGS. 4A-4B are a second part of the circuit schematic (the first part being depicted in FIGS. 3A-3D ).
  • FIGS. 4A-4B show the component level integrated circuit 15 that implements the left and right channel pre-amplifiers 15 a, 15 b. They also show the component level integrated circuit 19 that implements the left and right output amplifiers 19 a and 19 b.
  • the output transducers 21 a , 21 b shown in the block diagram of FIG. 2 comprise loudspeakers. However, they could instead comprise bone conduction transducers 31 a , 31 b as shown in FIGS. 4A-4B . In that case a demux chip 33 is provided to switch the output signal from the power amplifier 19 between the loudspeaker and bone conduction transducers as desired.
  • the demux chip 33 is controlled by the FPGA via the DEMUX interface 32 on that chip.
  • turning now to FIG. 5, there is depicted a block diagram of the smartphone 5.
  • the smartphone 5 comprises a number of modules which are able to exchange data and commands via a data bus 47 .
  • the various modules comprise:
  • the architecture shown in FIG. 5 is highly simplified and omits many components that are found in a smartphone. However, it will be sufficient for those skilled in the art to understand the preferred embodiment of the invention.
  • the memory 37 stores instructions that comprise a custom application, i.e. “App” 38 which the processor 35 executes in use in order to perform a method according to a preferred embodiment of an aspect of the present invention which will now be described.
  • the programming of the App 38 is straightforward once the method, which will become apparent from the following discussion, is understood.
  • in use the user 3 dons the headphones 7 and switches the headphones and the smartphone 5 on so that Bluetooth communication is established there between.
  • the user then operates the smartphone to initiate execution of the App 38, for example by clicking on an icon for the App, which is displayed on touch screen 41.
  • a splash screen 49 shown in FIG. 6 is then displayed to the user 3 and shortly thereafter a control menu screen 51 is displayed as shown in FIG. 7 .
  • the control menu screen 51 presents the user 3 with three configuration options, 51 a , 51 b , 51 c .
  • the first option is "My Headphones" 51 a. If the user has never used the App before and wants to quickly upload some equaliser style adjustments then he/she can choose the "My Headphones" option 51 a. In response to that selection the processor 35 presents an equaliser screen 59 shown in FIG. 12.
  • the App is programmed so that the user 3 can quickly set and upload equaliser preferences.
  • the user can choose a second option being “Test History” option 51 b .
  • the processor causes the display of a list view of previous models from which the user can select and upload to the headphones 7 with or without an equaliser overlay.
  • the user can select the “My Profile” option 51 c .
  • Selecting the “My Profile” option 51 c causes the processor to call up an audio modelling routine and set a personalised model to be uploaded with or without an equaliser overlay to the headphones 7 . If an equalizer overlay is applied then the gain adjustment weights that have been determined based on the audiological testing are varied to take into account the user's equalization preferences. For example if the user prefers a bassier sound then the weights corresponding to lower frequency bands are increased.
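  • As a rough illustration of the overlay just described, the following Python sketch adds user-selected per-band equalisation offsets (in dB) to the correction weights determined from the hearing test. The band values and the simple additive combination rule are illustrative assumptions, not the patent's specified implementation.

```python
# Sketch only: combine audiological correction weights with a user
# equaliser overlay. The band values and the additive rule are
# illustrative assumptions, not the patent's specified implementation.

# Per-band correction (dB of gain) determined from the hearing test.
audiological_weights_db = {100: 0.0, 250: 2.0, 500: 3.0, 1000: 5.0,
                           2000: 8.0, 4000: 12.0, 8000: 15.0, 16000: 18.0}

# User equaliser preference, e.g. a "bassier" profile boosting low bands.
equaliser_overlay_db = {100: 6.0, 250: 4.0, 500: 2.0}

def apply_overlay(weights_db, overlay_db):
    """Return combined per-band gains to upload to the headset processor."""
    return {band: gain + overlay_db.get(band, 0.0)
            for band, gain in weights_db.items()}

combined = apply_overlay(audiological_weights_db, equaliser_overlay_db)
print(combined)   # the 100 Hz band now carries a 6 dB boost over unity correction
```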
  • on selecting the "My Profile" option 51 c the processor 35 causes screens to be displayed prompting the user to help optimise the acoustic model, as shown in prompt screens 53 (FIG. 8) and 55 (FIG. 9).
  • the App 38 displays the interface screen 57 and the user 3 is directed to respond to the software by pressing the "left" 57 a or "right" 57 b buttons.
  • the processor communicates with the headphones 7 via the Bluetooth link to cause the loudspeakers in the headsets to present beeps in the user's left or right ear respectively.
  • the App then presents screens to step the user through a modelling method 59 that is shown in FIG. 11 and which is in accordance with a preferred embodiment of the invention.
  • the modelling method 59 includes steps to identify the user's minimal perceived headset specific decibel threshold at each of a number of frequency assessment points.
  • the determined specific decibel thresholds for the user are then saved into the audio model.
  • the dB threshold variable for each frequency is set in the boxes titled "Stop Threshold"; in this step the assessment is stopped and the calculated dB value is saved.
  • if the maximum output of the hardware is reached without a reliable response, the procedure is stopped and the correction dB is set to the maximum, as the user has profound hearing loss at this frequency and has maximised the capabilities of the hardware.
  • the box labelled “2 for 3 at this level ?” comprises part of an error check loop.
  • the user will actually be presented with that dB level three times before the program exits. If the user can hear it two out of three times then they are deemed to be able to hear it. The purpose of this is to avoid input error and the like.
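  • One possible software realisation of the per-frequency threshold search with the "2 for 3" confirmation is sketched below in Python. The ascending 5 dB step, the simulated listener and the hardware maximum are illustrative assumptions; the actual flow of FIG. 11 may differ in detail.

```python
# Sketch only: one possible realisation of the per-frequency threshold
# search with the "2 for 3" confirmation described above. The 5 dB step,
# the simulated listener and the hardware maximum are assumptions; the
# flow of FIG. 11 may differ in detail.

import random

HARDWARE_MAX_DB = 100   # assumed maximum output of the headset
STEP_DB = 5             # assumed step between presentation levels

def user_heard_tone(freq_hz, level_db):
    """Placeholder for playing a tone and reading the left/right button
    response; simulated here as a 40 dB threshold with occasional lapses."""
    return level_db >= 40 and random.random() > 0.1

def confirmed(freq_hz, level_db):
    """Present the same level three times; hearing it 2 of 3 times counts."""
    heard = sum(user_heard_tone(freq_hz, level_db) for _ in range(3))
    return heard >= 2

def find_threshold(freq_hz):
    """Ascend from 0 dB until a level is reliably heard, or return the
    hardware maximum (profound loss at this frequency)."""
    level = 0
    while level <= HARDWARE_MAX_DB:
        if confirmed(freq_hz, level):
            return level            # saved as the "Stop Threshold"
        level += STEP_DB
    return HARDWARE_MAX_DB          # correction set to the hardware maximum

print(find_threshold(1000))
```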
  • the audio model is a set of parameters for the user, including the decibel thresholds that are saved in the digital memory 37 .
  • on completion of the method 59 for each of the frequency assessment points, the App has successfully modelled the way in which the user perceives sound through the headset 7.
  • the App 38 then converts the perceived model into a graphical depiction 60 as shown in FIG. 13 for review.
  • the graph in FIG. 13 shows the user's left ear and right ear hearing response as assessed at each of a number of frequencies.
  • the method 59 finds the user's perceived gain deficiency 63 in each specific frequency band in the audio spectrum. It then calculates weights (i.e. gain correction factors) by which future audio waveforms need to be gain adjusted in the frequency domain to correct the user's waveform back to perceived unity as intended. These weighting coefficients are then uploaded from the smartphone 5 into the FPGA 11 of the headset 7. The FPGA then uses the uploaded weights at run time for the dynamic real time processing of uncompensated audio from the smartphone.
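  • A minimal sketch of how such correction weights might be derived from the measured thresholds follows. Taking the gain deficiency as the difference between the user's threshold and a reference threshold, and converting that dB figure to a linear frequency-domain gain, is one plausible interpretation offered for illustration rather than the patent's stated formula.

```python
# Sketch only: derive per-band correction weights from measured thresholds.
# Taking the deficiency as user threshold minus a reference threshold and
# converting dB to a linear frequency-domain gain is an illustrative
# interpretation, not the patent's stated formula.

def correction_weights(user_thresholds_db, reference_thresholds_db):
    """Return linear gain factors per frequency band that lift each band
    by the user's measured deficiency, restoring perceived unity."""
    weights = {}
    for freq, user_db in user_thresholds_db.items():
        deficiency_db = user_db - reference_thresholds_db.get(freq, 0.0)
        weights[freq] = 10 ** (max(deficiency_db, 0.0) / 20.0)
    return weights

user = {250: 10.0, 1000: 15.0, 4000: 35.0, 8000: 45.0}   # example thresholds
reference = {250: 5.0, 1000: 5.0, 4000: 5.0, 8000: 5.0}  # example reference ears
print(correction_weights(user, reference))
# e.g. the 8 kHz band receives a 10**(40/20) = 100x linear gain before the IFFT
```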
  • the App 38 then displays the equaliser screen 58 (FIG. 12) to the user 3.
  • the user has the option to remain with a unity correction as per the audiological model or to use this base level correction with an equaliser overlay to allow the additional personalisation.
  • the model, with or without the equaliser overlay as per the user's preference, is transitioned into frequency-based coefficient corrections which are uploaded to the paired headset for configuration of the on-board signal processing corrections.
  • FIG. 15 is a flowchart showing how the FPGA is programmed to use the determined gain adjustment weights, i.e. the audiological model, to process a WAV file (or other audio file) to thereby apply the weightings produced in the audiological assessment with or without an equalizer overlay.
  • a digital audio signal 61 is transmitted to the headset 7 via a Bluetooth link 63 .
  • This received signal is transmitted into the on board FPGA (item 11 of FIG. 2 ) for signal processing.
  • the time domain audio signal 61 is converted to the frequency domain 65 by an FFT. This frequency domain signal is then gain adjusted against the patient's personal correction weightings 67 for each corresponding frequency bin to create a gain-adjusted frequency domain representation of the signal.
  • the FPGA 11 then undertakes an IFFT to render the signal into a user-specific time domain digital audio waveform 71.
  • the digital audio waveform is then processed by the DAC (item 13 of FIG. 2 ) and analogue amplifiers and possibly noise cancellation modules to drive the transducers of the headphones.
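  • The run-time path just described can be expressed as a short NumPy sketch for clarity: block-wise FFT, per-bin gain adjustment and IFFT. The block size, sample rate and nearest-band mapping of bins to correction weights are assumptions made for illustration; in the patent this processing is performed on the headset FPGA rather than in software.

```python
# Sketch only: block-wise FFT, per-bin gain adjustment and IFFT in NumPy.
# Block size, sample rate and the nearest-band mapping of bins to weights
# are illustrative; the patent performs this processing on the headset FPGA.

import numpy as np

SAMPLE_RATE = 48_000
BLOCK_SIZE = 1024

def bin_gains(weights, block_size=BLOCK_SIZE, sample_rate=SAMPLE_RATE):
    """Map each FFT bin to the linear gain of the nearest calibrated band."""
    bands = np.array(sorted(weights))
    gains = np.array([weights[b] for b in bands])
    freqs = np.fft.rfftfreq(block_size, d=1.0 / sample_rate)
    nearest = np.abs(freqs[:, None] - bands[None, :]).argmin(axis=1)
    return gains[nearest]

def process_block(samples, gains):
    """FFT a time-domain block, apply the per-bin gains, and IFFT back."""
    spectrum = np.fft.rfft(samples)
    return np.fft.irfft(spectrum * gains, n=len(samples))

# Example: leave 1 kHz content at unity and boost 8 kHz content fourfold.
weights = {1000: 1.0, 8000: 4.0}
t = np.arange(BLOCK_SIZE) / SAMPLE_RATE
block = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.sin(2 * np.pi * 8000 * t)
adjusted = process_block(block, bin_gains(weights))
```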
  • a sound delivery system 1 ( FIG. 1 ) is provided.
  • the sound delivery system 1 includes at least one processing assembly which in the presently described embodiment includes the smartphone processor 35 and the headphone FPGA 11 .
  • a user interface is provided in the form of smartphone touchscreen 41 and touchscreen display driver unit 39 .
  • the touchscreen display driver unit 39 is coupled to the processor via a data bus 47 .
  • the sound delivery system 1 also includes at least one audio transducer. For example, either or both loudspeakers 21 a, 21 b (FIGS. 4A-4B) and bone conduction transducers 31 a, 31 b (FIGS. 4A-4B) are provided.
  • the bone transducers are responsive to signals from the FPGA via suitable digital to analogue converters and analogue amplifiers.
  • the at least one processing assembly includes the processor 35 of smartphone 5 . That processor is arranged, by virtue of it executing the instructions comprising app 38 that are stored in digital memory 37 , to determine compensatory weights at each of a number of audio frequencies for the user. The processor determines the weights on the basis of user responses via the interface (e.g. touchscreen 41 ) to sounds delivered via the audio transducer (for example the loudspeakers of headset 7 ). The at least one processor also includes the FPGA 11 which is configured with the determined weights and which is therefore able to deliver audio signals to the user by modifying the audio signals in accordance with the determined weights.
  • the user interface portion and the transducer portion of the sound delivery system are physically separate, though in data communication via a Bluetooth connection.
  • in other embodiments the two portions may not be separate units.
  • the headset could have a user interface, for example one or more buttons, mounted to the side which are coupled to an internal processor so that a user may initiate the automatic audiological assessment and then press one or other of the buttons to indicate a hearing threshold for a presented audio signal.
  • the processing assembly might comprise a single, suitably programmed, high frequency processor that is capable of both running the audiological assessment method and also performing the FFT and IFFT functions with gain adjustment according to the determined weights for the user.
  • in FIG. 16 there is shown a schematic diagram of an embodiment of calibration equipment employed for the factory calibration of a sound delivery system, here in the form of the customisable sound delivery system (or "SDS") 1, as described above.
  • the SDS of the present embodiment includes a set of headphones 7 having a pair of audio transducers 21 and a remote computational device, here in the form of a laptop computer or tablet 6 .
  • the laptop computer or tablet has many components—for example touch screen 41 ′—in common, at least functionally, with the smartphone 5 , described hereinabove.
  • the headphones include a processor 11 and associated memory 12 that communicates with remote devices, such as laptop computer 6 , via a communications module 9 .
  • the laptop computer 6 may also communicate with remote storage, such as database 82 held in a remote storage facility—sometimes referred to as "cloud storage"—accessible via a network (not shown), whether a public or virtual private network.
  • the equipment necessary for calibration of an individual headset 7 includes a reference SPL meter 70 which is attached to a selected acoustic transducer, here left speaker 21 a , by an acoustic coupler 72 in order to exclude external noise during calibration testing.
  • Suitable reference SPL meters include the DigiTech QM1592 Pro Sound Level Meter supplied by Jaycar Electronics of Australia or, particularly for headphone sets, the bilateral EARS stand supplied by miniDSP of Hong Kong (see www.minidsp.com). It will be appreciated that it is not always economically viable to calibrate every headset produced.
  • headsets of a particular design or "model" may be manufactured to quality standards of, for example, +/−2 dB(A), and a representative headset from a given production run subjected to the calibration procedures described in relation to the present embodiment.
  • should the design change, a fresh calibration would be conducted for the model variant. It will be appreciated that, in other embodiments for some particular medical applications, it may be desirable to calibrate each and every headset individually to achieve higher accuracy.
  • FIG. 18 is a top level flow diagram of the sequence of steps in an embodiment of the calibration method 100 of the second aspect of the invention, here employing the equipment and optional infrastructure illustrated in FIG. 16 .
  • the headphone body or headset 7 of a representative SDS containing a first audio transducer in the form of speaker 21 a is coupled to the reference sound pressure meter 70 by acoustic coupler 72, directing produced sounds to a microphone 74 of the SPL meter.
  • the first of a sequence of command codes, targeted at speaker 21 a, is then sent to the headphone assembly 7 in step 104, requesting output of a discrete test tone of specific frequency and sound pressure level, for example a command code requesting 100 Hz at 0 dB.
  • in step 106 the command code is acknowledged by the communications interface 9 of the headphone assembly 7.
  • in step 108 the processor 11 operates in response to the command code to cause the speaker 21 a to reproduce a test tone, which in turn is measured by the reference sound level meter 70 in step 110.
  • in step 112 the SPL reading obtained by the reference SPL meter 70 is recorded for transfer to a database associated with a user interface application for the SDS 1.
  • a mapping of command codes input to the processor 11 and SPL readings obtained from a transducer 21 via the microphone 74 of reference SPL meter 70 is built up to produce a mapping table in the database.
  • the SPL mapping resulting from the calibration may be stored, at least temporarily, in a database held locally in the local memory 12 associated with processor 11 , in memory of the handheld device 6 controlling calibration, or most desirably and eventually in a remote database 82 held in a cloud storage facility 80 .
  • the remote database 82 suitably also contains an interface application for selective down-loading to any compatible user interface device, and incorporates the SPL mapping for the particular model and/or production run of the headset 7 that has undergone calibration. This effectively provides a single point of calibration, thus obviating the need for “paired” interface devices and transducer hardware which typically adds to costs and/or inconvenience to achieve a similar level of accuracy.
  • in step 114 control is passed back to step 104 where, after a delay of 0.5 s, the command code for a subsequent test tone having the same frequency but a different SPL level, for example 10 dB, is produced.
  • Return loop 124 is then repeated through each of the desired SPL levels (for example in 10 dB steps to 100 dB).
  • once all SPL levels have been tested, control drops through from the "Next SPL" decision box 114 to decision box 116 wherein a subsequent frequency step is selected, for example 250 Hz, recommencing at an SPL level of 0 dB. Control then passes back to loop 124 and the 250 Hz tone is stepped through each of the desired SPL levels.
  • when all frequencies have been stepped through, control drops to decision box 118 wherein a user will be prompted to move (if required) or switch (in the case of a bilateral meter) the acoustic coupler and reference SPL meter 70 to the other of the acoustic transducers, e.g. speaker 21 b, in step 102. Subsequently control returns to step 104 to repeat the test tone process for the other transducer at each selected frequency and SPL level.
  • the mapping associated with the interface application appropriate to the headset model would make the appropriate adjustment during 1 kHz tone production by processor 11 . See the example results table depicted in FIG. 17 wherein the mapping may be derived from the difference or offset between the requested and measured SPL results for the left transducer 21 a . It will be appreciated that there will be a similar table portion generated for the right transducer 21 b , across the full range of desired frequencies and SPLs.
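  • The calibration loop of FIG. 18 and the offset mapping of FIG. 17 might be orchestrated as in the following Python sketch. The send_command() and read_spl_meter() functions are hypothetical placeholders for the Bluetooth command interface and the reference SPL meter; the 0.5 s delay, frequency list and 10 dB steps follow the values given in the text, and the sign convention of the recorded offset is an assumption.

```python
# Sketch only: the calibration loop of FIG. 18 producing the offset mapping
# of FIG. 17. send_command() and read_spl_meter() are hypothetical
# placeholders for the Bluetooth command interface and the reference meter.

import time

FREQUENCIES_HZ = [100, 250, 500, 1000, 2000, 4000, 8000, 16000]
SPL_STEPS_DB = range(0, 101, 10)          # e.g. 0 dB to 100 dB in 10 dB steps

def send_command(transducer, freq_hz, spl_db):
    """Placeholder: send a command code requesting a test tone and wait for
    the headset's acknowledgement (steps 104 to 108)."""

def read_spl_meter():
    """Placeholder: return the SPL measured by the reference meter (step 110)."""
    return 0.0

def calibrate(transducers=("left", "right"), delay_s=0.5):
    """Build {(transducer, freq, requested_spl): offset_db} for this model."""
    mapping = {}
    for transducer in transducers:        # operator moves or switches the coupler
        for freq in FREQUENCIES_HZ:       # decision box 116: next frequency
            for spl in SPL_STEPS_DB:      # loop 124: next SPL level
                send_command(transducer, freq, spl)
                measured = read_spl_meter()            # steps 110 and 112
                mapping[(transducer, freq, spl)] = measured - spl
                time.sleep(delay_s)       # 0.5 s delay between test tones
    return mapping
```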

Abstract

A method (100) for calibrating a sound delivery system (1) having a processing assembly, a data communications assembly (9) coupled to the processing assembly, and at least one audio transducer (21a, 21b) mounted with at least one processor (11) of the processing assembly and responsive thereto for delivering sound to a user (3), the method including the steps of: transmitting from a remote user interface device (6) for the sound delivery system, a sequence of command codes for specifying predetermined characteristics of test sounds; receiving the command code sequence at the communications assembly of the sound delivery system; providing the command code sequence to the processing assembly of the sound delivery system; reproducing by a selected at least one audio transducer, the predetermined test sounds under control of said at least one processor according to the command code sequence; measuring with a reference SPL meter (70) proximate to the audio transducer, characteristics of test sounds reproduced by the sound delivery system; comparing the measured characteristics of the reproduced sounds with the predetermined characteristics of the test sounds; producing a mapping of specified test sounds to sounds reproduced by said at least one audio transducer; and storing the mapping in an electronic memory (12, 82) associated with the processing assembly or remote interface device (6).

Description

    RELATED APPLICATIONS
  • The present application is a continuation of U.S. patent application Ser. No. 16/043,351, filed 24 Jul. 2018, which is a continuation-in-part of U.S. patent application Ser. No. 15/196,256 filed 29 Jun. 2016 in the name of the present applicant and published as No. US 2017/0046120 A1 on 16 Feb. 2017, all incorporated herein by reference.
  • TECHNICAL FIELD
  • The present invention relates to a calibration method for sound delivery systems of the kind which involve audio transducers such as headphones, ear plugs or in some circumstances bone conduction transducers and which can be customized by a user to take into account the user's auditory response, and to sound delivery systems subject to the calibration method of the invention.
  • BACKGROUND
  • Any references to methods, apparatus or documents of the prior art are not to be taken as constituting any evidence or admission that they formed, or form part of the common general knowledge in Australia or any other country.
  • Different people have different auditory responses. For example, younger people are typically able to hear audio frequencies at higher frequencies than older people. As people age, or as a result of hearing damage due to exposure to loud sounds, their hearing tends to deteriorate and their auditory response across a range of frequencies changes.
  • It is known to provide programmable hearing aids that can, through testing of the user by an audiologist, be set to compensate for the user's deteriorated hearing acuity or partial hearing loss. However, such systems require that the user makes an appointment to have a hearing aid test, typically employing expensive and cumbersome test equipment, and that the hearing aid be set by the technician.
  • It has been further realized that in order to provide effective compensation for hearing loss, it is important that the audio transducers provided in a hearing compensation system be accurately calibrated during manufacture and/or assembly so that the tests undertaken by the user provide consistent and reliable performance, including irrespective of a user-supplied interface device.
  • It is an object of the present invention to provide a method for calibrating a sound delivery system utilized for automatic hearing tests. It is a further object of the invention to provide a customizable personal sound delivery system that is pre-calibrated using the method for convenient use and which can automatically test the user's hearing and subsequently make adjustments to its sound delivery parameters on the basis of the test results.
  • SUMMARY OF THE INVENTION
  • According to a first aspect of the present invention there is provided a method for calibrating a sound delivery system having a processing assembly, a data communications assembly coupled to the processing assembly, and at least one audio transducer mounted with at least one processor of the processing assembly and responsive thereto for delivering sound to a user, the method including the steps of:
      • transmitting from a remote user interface device for the sound delivery system, a sequence of command codes for specifying predetermined characteristics of test sounds;
      • receiving the command code sequence at the communications assembly of the sound delivery system;
      • providing the command code sequence to said processing assembly of the sound delivery system;
      • reproducing by a selected at least one audio transducer, the predetermined test sounds under control of said at least one processor according to the command code sequence;
      • measuring with a reference meter proximate to the audio transducer, characteristics of test sounds reproduced by the sound delivery system;
      • comparing the measured characteristics of the reproduced sounds with the predetermined characteristics of the test sounds;
      • producing a mapping of specified test sounds to sounds reproduced by said at least one audio transducer; and
      • storing the mapping in an electronic memory associated with the remote user interface.
  • Preferably, the transmitting step involves use of wireless transmission employing a local or near field communications standard, such as Wi-Fi or Bluetooth™.
  • The user interface device suitably comprises a portable computational device, such as a smartwatch, smartphone, tablet or laptop computer.
  • In preference, the test sounds or tones include a sequence of discrete sounds of different frequencies and sound pressure levels (SPL) within each frequency, suitably covering a typical range of human hearing.
  • Preferably, the test sounds are in the range of frequencies from 10 Hz to 30 kHz, suitably 20 Hz to 20 kHz, most preferably including 100 Hz, 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz, 8 kHz and 16 kHz, and of sound pressure level (SPL) ranging from −10 dB to 120 dB, suitably 0 dB to 110 dB, within each discrete sound frequency.
  • Each of the discrete sounds in the sequence is desirably of equal duration and suitably spaced apart from adjacent sounds by periods of silence. Suitably the sound duration is in a range from 0.1 milliseconds to 5 seconds, suitably 100 milliseconds to 1 second and the intervening silence period is in a range from 0.1 milliseconds to 5 seconds, suitably 100 milliseconds to 1 second.
  • Suitably the storing step involves storing the test sound mapping in a code base utilized by an audio application interface of the sound delivery system. Desirably, the sound delivery system includes a non-volatile electronic memory arranged to store the code base. Most preferably, the code base is stored remotely in a database and associated with an interface application for the sound delivery system, for down-loading with the interface application on request.
  • The sound delivery system may be an audiological testing apparatus, such as a hearing aid, set of headphones, or other head-mountable hearing apparatus incorporating an audio transducer.
  • In another form, there is also provided a sound delivery system including: a processing assembly including at least one processor and an electronic memory; a user interface coupled to the at least one processing assembly; at least one audio transducer responsive to the processing assembly for delivering sound to a user; and the electronic memory accessible by the at least one processor and storing: instructions for the processor to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via said audio transducer; a code base utilised by an audio application interface of the sound delivery system; wherein the sounds delivered via the transducer for determining the compensatory weights are generated by a transducer processor mounted within a transducer portion which includes the at least one audio transducer; and wherein the sound delivery system is calibrated in accordance with the method set out above.
  • Preferably, the processing assembly is mounted with the at least one audio transducer; suitably in the form of a set of headphones including a pair of speakers.
  • In a second aspect of the invention, there is provided a sound delivery system including: at least one processing assembly; an interface coupled to the at least one processing assembly; and at least one audio transducer responsive to the at least one processing assembly for delivering sound to a user; wherein the at least one processing assembly is arranged to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via said audio transducer.
  • In a preferred embodiment of the invention the sound delivery system comprises an interface portion which includes the user interface and a transducer portion which includes the at least one audio transducer, wherein the first interface portion and the transducer portion include corresponding data communication assemblies for data communication there between.
  • Preferably the at least one processing assembly includes: at least one interface processor that is mounted within the interface portion and coupled to the user interface; and at least one transducer processor that is mounted within the transducer portion and arranged to process sound signals for delivery as sound by said audio transducer.
  • Preferably the data communication assemblies are arranged for wireless data communication. For example the data communication assemblies may be arranged to implement data communication according to the Bluetooth standard.
  • In a preferred embodiment of the invention the interface portion comprises a smartphone though it could alternatively be a tablet, laptop or desktop computer, for example.
  • According to another aspect of the present invention there is provided a sound delivery system including: at least one processing assembly; an interface coupled to the at least one processing assembly; at least one audio transducer responsive to the at least one processing assembly for delivering sound to a user; and an electronic memory accessible by the at least one processing assembly storing: instructions for the processor to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via said audio transducer.
  • In a further aspect of the invention there is provided an automatic audiological testing apparatus including: a processing assembly having at least one processor; an electronic memory in communication with the processing assembly and containing instructions for execution by said at least one processor; a user interface in communication with the processing assembly; and at least one audio transducer mounted with the processing assembly and responsive to the at least one processor for delivering sound to a user; wherein the electronic memory stores instructions for the processor to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds at a number of different frequencies wherein the sounds delivered via the transducer for determining the compensatory weights are generated by a transducer processor mounted within a transducer portion which includes the at least one audio transducer; and wherein the audiological testing apparatus is calibrated in accordance with the method set out above.
  • According to a still further aspect of the present invention there is provided a set of headphones including right and left loudspeakers for delivery of sounds to a user: at least one processor configured to receive gain adjustment weights for the user for each of a number of predetermined frequencies; wherein the processor is arranged to convert an audio signal into the frequency domain, apply the gain adjustment weights to the audio signal in the frequency domain and convert the adjusted audio signal back into the time domain for delivery of an adjusted audio signal to the user via the loudspeakers.
  • According to another aspect of the present invention there is provided a method for sound delivery to a user including: presenting sounds of different frequencies and prompts to a user in order to determine an audiological model of the user comprising a set of gain adjustment weights for each of the different frequencies; and adjusting audio signals according to the adjustment weights to thereby deliver adjusted audio signals to the user to compensate for hearing deficiencies of the user.
  • Preferably the method includes facilitating adjustment of the weights by the user to introduce frequency equalization parameters selected by the user for each of a number of frequency bands.
  • It will therefore be realized that in one embodiment of the invention there is provided a sound delivery system that includes a processing assembly with a user interface coupled thereto. At least one audio transducer is provided for delivering sound to a user, which is responsive to the processing assembly. Typically the audio transducer is a loudspeaker of a pair of headphones or earbuds, though it may also be a bone conduction transducer. The at least one processing assembly is arranged to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via the audio transducer.
  • Additional features and advantages of the present invention are described in, and will be apparent from, the detailed description of the presently preferred embodiments and from the drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Preferred features, embodiments and variations of the invention may be discerned from the following Detailed Description which provides sufficient information for those skilled in the art to perform the invention. The Detailed Description is not to be regarded as limiting the scope of the preceding Summary of the Invention in any way. The Detailed Description will make reference to a number of drawings as follows:
  • FIG. 1 is a high level diagram of a sound delivery system according to a preferred embodiment of a first aspect of the present invention, in use;
  • FIG. 2 is a block diagram of electronic circuitry of a transducer portion of the sound delivery system;
  • FIGS. 3A-3D are a first portion of a circuit schematic generally corresponding to the block diagram of FIG. 2;
  • FIGS. 4A-4B are a second portion of the circuit schematic of FIGS. 3A-3D;
  • FIG. 5 is a high level block diagram of a user interface portion, in the form of a smartphone, of the sound delivery system.
  • FIGS. 6 to 10 are screen shots of screens presented to a user by the smartphone;
  • FIG. 11 is a block diagram illustrating a modelling method in accordance with an embodiment of the present invention;
  • FIGS. 12 and 13 are screen shots of screens presented to a user by the smartphone;
  • FIG. 14 comprises three frequency domain spectrograms. At left is the hearing response spectrogram of a person with normal hearing to a test audio signal. In the middle there is the hearing response spectrogram of a person with deteriorated hearing response in high frequency bands to the test audio signal. At right is the perceived audio response to the test signal subsequent to the test signal being gain adjusted to compensate for the high frequency loss;
  • FIG. 15 is a flowchart of the steps performed by the sound delivery system during delivery of audio to a user;
  • FIG. 16 is a schematic diagram showing the equipment employed in a calibration method of another aspect of the present invention;
  • FIG. 17 is a table illustrating an example of results obtained from the calibration method of the embodiment; and
  • FIG. 18 is a flowchart of steps in a method for carrying out the calibration method employing the components illustrated in FIG. 16 to produce the results tabulated in FIG. 17.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • Referring now to FIG. 1, there is depicted a sound delivery system 1 in use and being applied to a user 3. In the presently described preferred embodiment the sound delivery system 1 is comprised of two major portions. A first portion comprises a smartphone 5, or other computational device such as a laptop, desktop or tablet computer. The smartphone 5 is in data communication with a second portion of the sound delivery system being a transducer portion, which in the present embodiment comprises headphones 7 though it might equally be a set of earbuds or some other sound delivery apparatus. The data communication between the smartphone 5 and the headphones 7 is by Bluetooth wireless in the presently described embodiment though of course it could be established otherwise, for example through a wired connection or by other wireless protocols.
  • FIG. 2 is a high level block diagram of the electronic circuitry that is contained within the headphones 7. The circuitry includes a communications port 9 in the form of a Bluetooth port for communicating with the smartphone 5. A processor in the form of a field programmable gate array 11 is coupled to the Bluetooth port 9. As will be explained, the FPGA 11 is configured by uploading data from the smartphone 5 to apply "weights", i.e. gain adjustment parameters, for different frequencies to an audio signal that it receives from the smartphone. An output side of the FPGA 11 is coupled to a digital to analogue converter (DAC) 13. The DAC converts the digital audio signal from the FPGA into right and left stereo analogue signals which are applied via pre-amplifiers 15 a, 15 b, through noise cancelling modules 17 a, 17 b to output amplifiers 19 a, 19 b. The output amplifiers 19 a, 19 b drive electrical signals to the vibration transducers 21 a, 21 b. The transducers 21 a, 21 b are typically loudspeakers though they could alternatively be bone conduction transducers.
  • FIGS. 3A-3D are a first part of a circuit schematic corresponding to the block diagram of FIG. 2 and showing the Bluetooth port 9, FPGA 11 and DAC 13. FIGS. 3A-3D also show programmable flash components 23 and 25 which are used to configure the FPGA, for example to set the frequency gain adjustment weights that the FPGA will apply to an audio signal in use. The FPGA 11 is a Cyclone IV EP4CE40F integrated circuit that is manufactured by Altera Corporation and which is configured to perform Fast Fourier Transforms on audio signals received via the Bluetooth port, apply the gain weights in the frequency domain and then perform an Inverse Fast Fourier Transform to convert the digital signal back to the time domain.
  • FIGS. 3A-3D also show a clock module 27 and a power supply chip 29 for applying power to the various components. All of these components are readily available commercially.
  • FIGS. 4A-4B are a second part of the circuit schematic (the first part being depicted in FIGS. 3A-3D). FIGS. 4A-4B show the component level integrated circuit 15 that implements the left and right channel pre-amplifiers 15 a, 15 b. They also show the component level integrated circuit 19 that implements the left and right output amplifiers 19 a and 19 b.
  • The output transducers 21 a, 21 b shown in the block diagram of FIG. 2 comprise loudspeakers. However, they could instead comprise bone conduction transducers 31 a, 31 b as shown in FIGS. 4A-4B. In that case a demux chip 33 is provided to switch the output signal from the power amplifier 19 between the loudspeaker and bone conduction transducers as desired. The demux chip 33 is controlled by the FPGA via the DEMUX interface 32 on that chip.
  • Turning now to FIG. 5 there is depicted a block diagram of the smartphone 5. The smartphone 5 comprises a number of modules which are able to exchange data and commands via a data bus 47. The various modules comprise:
      • a processor 35;
      • an electronic memory 37;
      • a communications module in the form of Bluetooth port 43 for communicating with corresponding Bluetooth port 9 of the transducer portion 7;
      • a telecoms module 45 which allows the smartphone 5 to establish voice and data communications with a telecommunications network; and
      • a touchscreen drive module 39, which drives touchscreen 41 and processes user data inputs received via the touchscreen and passes them to the processor 35.
  • It will be realised that the architecture shown in FIG. 5 is highly simplified and omits many components that are found in a smartphone. However, it will be sufficient for those skilled in the art to understand the preferred embodiment of the invention.
  • The memory 37 stores instructions that comprise a custom application, i.e. “App” 38 which the processor 35 executes in use in order to perform a method according to a preferred embodiment of an aspect of the present invention which will now be described. The programming of the App 38 is straightforward once the method, which will become apparent from the following discussion, is understood.
  • Referring again to FIG. 1, in use the user 3 dons the headphones 7 and switches the headphones and the smartphone 5 on so that Bluetooth communication is established therebetween.
  • The user then operates the smartphone to initiate execution of the App 38, for example by clicking on an icon for the App which is displayed on touchscreen 41. A splash screen 49 shown in FIG. 6 is then displayed to the user 3 and shortly thereafter a control menu screen 51 is displayed as shown in FIG. 7.
  • The control menu screen 51 presents the user 3 with three configuration options, 51 a, 51 b, 51 c. The first option is “My Headphones” 51 a. If the user has never used the App before and wants to quickly upload some equaliser style adjustments then he/she can choose the “My Headphones” option 51 a. In response to that selection the processor 35 presents the equaliser screen 58 shown in FIG. 12. The App is programmed so that the user 3 can quickly set and upload equaliser preferences.
  • Alternatively, if the user 3 has used the app 38 before and has pre-saved hearing profiles, then the user can choose a second option, the “Test History” option 51 b. In response to selecting the “Test History” option 51 b the processor causes the display of a list view of previous models from which the user can select a model and upload it to the headphones 7 with or without an equaliser overlay.
  • Finally, if the user 3 is using the app 38 for the first time then the user can select the “My Profile” option 51 c. Selecting the “My Profile” option 51 c causes the processor to call up an audio modelling routine and set a personalised model to be uploaded with or without an equaliser overlay to the headphones 7. If an equalizer overlay is applied then the gain adjustment weights that have been determined based on the audiological testing are varied to take into account the user's equalization preferences. For example if the user prefers a bassier sound then the weights corresponding to lower frequency bands are increased.
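  • By way of illustration only, the combination of the audiological correction weights with an equaliser overlay could be sketched as follows. The band centres, the simple additive combination in decibels and the gain ceiling are assumptions made for this example, not a definitive description of the App's implementation.

```python
# Illustrative sketch (assumed band centres and additive dB combination):
# merge the audiological correction weights with a user equaliser overlay.
AUDIO_BANDS_HZ = [100, 250, 500, 1000, 2000, 4000, 8000, 16000]

def combine_weights(correction_db, equaliser_db, max_gain_db=30.0):
    """Add the equaliser preference to the audiological correction, band by
    band, clamping to an assumed maximum gain the hardware can apply."""
    return [min(c + e, max_gain_db) for c, e in zip(correction_db, equaliser_db)]

# Example: a "bassier" preference boosts the two lowest bands by 6 dB.
correction = [0, 0, 2, 4, 8, 12, 18, 24]   # dB, from the audiological model
bass_overlay = [6, 6, 0, 0, 0, 0, 0, 0]    # dB, user equaliser preference
print(combine_weights(correction, bass_overlay))
```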
  • On selecting the “My Profile” option 51 c the processor 35 causes screens to be displayed prompting the user to help optimise the acoustic model, as shown in prompt screens 53 (FIG. 8) and 55 (FIG. 9).
  • Once the user 3 operates touchscreen 41 to indicate that he/she is ready to undergo the audio modelling routine, the app 38 displays the interface screen 57 and the user 3 is directed to respond to the software by pressing the “left” 57 a or “right” 57 b buttons. Upon doing so the processor communicates with the headphones 7 via the Bluetooth link to cause the loudspeakers in the headset to present beeps in the user's left or right ear respectively.
  • The App then presents screens to step the user through a modelling method 59 that is shown in FIG. 11 and which is in accordance with a preferred embodiment of the invention. The modelling method 59 includes steps to identify the user's minimal perceived decibel threshold, specific to the headset, at each of a number of frequency assessment points. The determined decibel thresholds for the user are then saved into the audio model. The dB threshold variable for each frequency is set in the boxes titled “Stop Threshold”; at this step the assessment is stopped and the calculated dB value is saved. Alternatively, in the box “Stop No Threshold”, the procedure is stopped and the correction dB is set to the maximum, since the user has profound hearing loss at this frequency and has exhausted the capabilities of the hardware. In the flowchart the box labelled “2 for 3 at this level ?” comprises part of an error check loop: the user is presented with the same dB level up to three times before the program exits, and is deemed to be able to hear it if he/she responds on two of the three presentations. The purpose of this is to avoid input error and the like. The audio model is a set of parameters for the user, including the decibel thresholds, that is saved in the digital memory 37.
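  • A minimal sketch of the kind of per-frequency threshold search described by the flowchart, including the “2 for 3” error check, is given below. The step size, the level limits and the helper functions present_tone() and user_heard() are assumptions introduced for illustration only; the actual method is defined by FIG. 11.

```python
# Minimal sketch of a per-frequency threshold search with a "2 for 3" error
# check. Step size, level limits and the helpers present_tone()/user_heard()
# are assumptions for illustration.
MAX_LEVEL_DB = 90   # assumed hardware limit: the "Stop No Threshold" case
STEP_DB = 5

def confirmed_audible(frequency_hz, level_db, present_tone, user_heard):
    """Present the same level up to three times; deem it heard if the user
    responds on at least two of the three presentations."""
    heard = 0
    for _ in range(3):
        present_tone(frequency_hz, level_db)
        if user_heard():
            heard += 1
            if heard == 2:      # 2 of 3 reached: no need for a third beep
                return True
    return False

def find_threshold(frequency_hz, present_tone, user_heard):
    """Ascend in fixed steps until a level is confirmed audible ("Stop
    Threshold"); if the hardware maximum is reached without a confirmed
    response, record the maximum instead ("Stop No Threshold")."""
    level = 0
    while level <= MAX_LEVEL_DB:
        if confirmed_audible(frequency_hz, level, present_tone, user_heard):
            return level
        level += STEP_DB
    return MAX_LEVEL_DB
```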
  • On completion of the method 59 for each of the frequency assessment points, the App has successfully modelled the way in which the user perceives sound through the headset 7. The app 38 then converts the perceived model into a graphical depiction 60 as shown in FIG. 13 for review. The graph in FIG. 13 shows the user's left ear and right ear hearing response as assessed at each of a number of frequencies.
  • With reference to the frequency spectrum graphs of FIG. 14, the method 59 finds the user's perceived gain deficiency 63 in each specific frequency band in the audio spectrum. It then calculates weights (i.e. gain correction factors) by which future audio waveforms need to be gain adjusted in the frequency domain to restore the user's perceived waveform to unity, as intended. These weighting coefficients are then uploaded from the smartphone 5 into the FPGA 11 of the headset 7. The FPGA then uses the uploaded weights at run time for dynamic, real time processing of uncompensated audio from the smartphone.
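  • One way the per-band deficiencies might be turned into per-bin weights for upload to the FPGA is sketched below; the band centres, the interpolation across FFT bins and the dB-to-linear conversion are assumptions for the example rather than the specific coefficients used by the App.

```python
import numpy as np

# Illustrative sketch (assumed interpolation and dB-to-linear conversion):
# turn a per-band perceived deficiency in dB into linear gain factors, one
# per FFT bin, suitable for frequency-domain correction.
def weights_for_bins(band_centres_hz, deficiency_db, n_fft, sample_rate_hz):
    bin_freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate_hz)
    # Interpolate the per-band deficiency across every FFT bin frequency.
    bin_gain_db = np.interp(bin_freqs, band_centres_hz, deficiency_db)
    return 10.0 ** (bin_gain_db / 20.0)   # dB -> linear amplitude gain

# Example: mild high-frequency loss produces gains rising towards 8 kHz.
gains = weights_for_bins([250, 1000, 4000, 8000], [0, 2, 10, 18],
                         n_fft=1024, sample_rate_hz=44100)
```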
  • The App 38 then displays the equaliser screen 58 (FIG. 12) to the user 3. Here the user has the option to remain with a unity correction as per the audiological model, or to use this base level correction with an equaliser overlay to allow additional personalisation.
  • Once the user completes this customisation and selects “Upload”, the model, with or without the equaliser overlay as per the user's preference, is converted into frequency based coefficient corrections which are uploaded to the paired headset for configuration of the on-board signal processing corrections.
  • FIG. 15 is a flowchart showing how the FPGA is programmed to use the determined gain adjustment weights, i.e. the audiological model, to process a WAV file (or other audio file) to thereby apply the weightings produced in the audiological assessment with or without an equalizer overlay.
  • From left to right of FIG. 15, a digital audio signal 61 is transmitted to the headset 7 via a Bluetooth link 63. The received signal is passed to the on board FPGA (item 11 of FIG. 2) for signal processing. Inside the FPGA the time domain audio signal 61 is converted to the frequency domain 65 by means of an FFT. This frequency domain signal is then gain adjusted against the user's personal correction weightings 67 for each corresponding frequency bin to create a gain adjusted frequency domain representation of the signal. The FPGA 11 then undertakes an IFFT to render the signal into a user specific time domain digital audio waveform 71. The digital audio waveform is then processed by the DAC (item 13 of FIG. 2), the analogue amplifiers and possibly the noise cancellation modules to drive the transducers of the headphones.
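  • The run-time correction of FIG. 15 can be illustrated with the following block-based sketch. The block length, the absence of windowing or overlap, and the use of Python rather than FPGA logic are simplifications; in the embodiment this processing is performed by the FPGA 11.

```python
import numpy as np

# Block-based sketch of the run-time correction: FFT the time-domain block,
# scale each frequency bin by the user's gain weight, then IFFT back to the
# time domain. Windowing/overlap are omitted as a simplification.
def correct_block(samples, bin_gains):
    spectrum = np.fft.rfft(samples)                 # time -> frequency domain
    spectrum *= bin_gains                           # apply per-bin correction weights
    return np.fft.irfft(spectrum, n=len(samples))   # frequency -> time domain

block = np.random.randn(1024)    # stand-in for a received audio block
gains = np.ones(1024 // 2 + 1)   # unity weights (513 bins for a 1024-point FFT)
corrected = correct_block(block, gains)
```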
  • It will therefore be understood that in a preferred embodiment of the invention a sound delivery system 1 (FIG. 1) is provided. The sound delivery system 1 includes at least one processing assembly which in the presently described embodiment includes the smartphone processor 35 and the headphone FPGA 11. A user interface is provided in the form of smartphone touchscreen 41 and touchscreen display driver unit 39. The touchscreen display driver unit 39 is coupled to the processor via a data bus 47. The sound delivery system 1 also includes at least one audio transducer. For example, either or both of loudspeakers 21 a, 21 b (FIGS. 4A-4B) and bone conduction transducers 31 a, 31 b (FIGS. 4A-4B) are provided. The bone conduction transducers are responsive to signals from the FPGA via suitable digital to analogue converters and analogue amplifiers.
  • The at least one processing assembly includes the processor 35 of smartphone 5. That processor is arranged, by virtue of executing the instructions comprising app 38 that are stored in digital memory 37, to determine compensatory weights at each of a number of audio frequencies for the user. The processor determines the weights on the basis of user responses via the interface (e.g. touchscreen 41) to sounds delivered via the audio transducer (for example the loudspeakers of headset 7). The at least one processing assembly also includes the FPGA 11, which is configured with the determined weights and which is therefore able to deliver audio signals to the user by modifying the audio signals in accordance with the determined weights.
  • In the presently described embodiment of the invention the user interface portion and the transducer portion of the sound delivery system are physically separate, though in data communication via a Bluetooth connection. It will be realised that in other embodiments of the invention the two portions need not be separate. For example, the headset could have a user interface, for example one or more buttons, mounted to its side and coupled to an internal processor, so that a user may initiate the automatic audiological assessment and then press one or other of the buttons to indicate a hearing threshold for a presented audio signal. Such an arrangement would not require separation of the user interface portion and the transducer (i.e. headset) portion. In such an embodiment the processing assembly might comprise a single, suitably programmed, high frequency processor that is capable of both running the audiological assessment method and performing the FFT and IFFT functions with gain adjustment according to the determined weights for the user.
  • Turning to FIG. 16 there is shown a schematic diagram of an embodiment of calibration equipment employed for the factory calibration of a sound delivery system, here in the form of the customisable sound delivery system (or “SDS”) 1 described above. The SDS of the present embodiment includes a set of headphones 7 having a pair of audio transducers 21 and a remote computational device, here in the form of a laptop computer or tablet 6. Note that the laptop computer or tablet has many components (for example touch screen 41′) in common, at least functionally, with the smartphone 5 described hereinabove. The headphones include a processor 11 and associated memory 12 that communicates with remote devices, such as laptop computer 6, via a communications module 9. The laptop computer 6 may also communicate with remote storage, such as database 82 held in a remote storage facility (sometimes referred to as “cloud storage”) accessible via a network (not shown), whether a public network or a virtual private network.
  • The equipment necessary for calibration of an individual headset 7 includes a reference SPL meter 70 which is attached to a selected acoustic transducer, here left speaker 21 a, by an acoustic coupler 72 in order to exclude external noise during calibration testing. Suitable reference SPL meters include the DigiTech QM1592 Pro Sound Level Meter supplied by Jaycar Electronics of Australia or, particularly for headphone sets, the bilateral EARS stand supplied by miniDSP of Hong Kong (see www.minidsp.com). It will be appreciated that it is not always economically viable to calibrate every headset produced. Instead, headsets of a particular design or “model” may be manufactured to quality standards of, for example, +/−2 dBA, and a representative headset from a given production run is subjected to the calibration procedures described in relation to the present embodiment. By way of example, should the transducer pair be re-specified or redesigned, a fresh calibration would be conducted for the model variant. It will be appreciated that, in other embodiments for particular medical applications, it may be desirable to calibrate each and every headset individually to achieve higher accuracy.
  • FIG. 18 is a top level flow diagram of the sequence of steps in an embodiment of the calibration method 100 of the second aspect of the invention, here employing the equipment and optional infrastructure illustrated in FIG. 16. In step 102, the headphone body or headset 7 of a representative SDS containing a first audio transducer in the form of speaker 21 a is coupled to the reference sound pressure meter 70 by acoustic coupler 72, directing produced sounds to a microphone 74 of the SPL meter. The first of a sequence of command codes, targeted at speaker 21 a, is then sent to the headphone assembly 7 in step 104, requesting output of a discrete test tone of specific frequency and sound pressure level, for example a command code requesting 100 Hz at 0 dB. In step 106 the command code is acknowledged by the communications interface 9 of the headphone assembly 7. In step 108, the processor 11 operates in response to the command code to cause the speaker 21 a to reproduce a test tone, which in turn is measured by the reference sound level meter 70 in step 110.
  • In step 112 the SPL reading obtained by the reference SPL meter 70 is recorded for transfer to a database associated with a user interface application for the SDS 1. Desirably, a mapping of command codes input to the processor 11 and SPL readings obtained from a transducer 21 via the microphone 74 of reference SPL meter 70 is built up to produce a mapping table in the database. The SPL mapping resulting from the calibration may be stored, at least temporarily, in a database held locally in the memory 12 associated with processor 11, in memory of the device 6 controlling the calibration, or, most desirably and eventually, in a remote database 82 held in a cloud storage facility 80. The remote database 82 suitably also contains an interface application for selective downloading to any compatible user interface device, and incorporates the SPL mapping for the particular model and/or production run of the headset 7 that has undergone calibration. This effectively provides a single point of calibration, thus obviating the need for “paired” interface devices and transducer hardware, which typically adds cost and/or inconvenience to achieve a similar level of accuracy.
  • In step 114, control is passed back to step 104 where, after a delay of 0.5 s, the command code for a subsequent test tone having the same frequency but a different SPL level, for example 10 dB, is produced. Return loop 124 is then repeated through each of the desired SPL levels (for example in 10 dB steps up to 100 dB).
  • At the conclusion of the desired SPL level range, control drops from the “Next SPL” decision box 114 to decision box 116 wherein a subsequent frequency step is selected, for example 250 Hz, recommencing at an SPL level of 0 dB. Control then passes back to loop 124 and the 250 Hz tone is stepped through each of the desired SPL levels.
  • At the conclusion of each of the desired frequencies represented by loop 126 (and the SPL levels within each frequency), for example 500 Hz, 1 kHz, 2 kHz, 4 kHz, 8 kHz and 16 kHz, control drops to decision box 118 wherein the user is prompted to move (if required) or switch (in the case of a bilateral meter) the acoustic coupler and reference SPL meter 70 to the other of the acoustic transducers, e.g. speaker 21 b, as in step 102. Control then returns to step 104 to repeat the test tone process for the other transducer at each selected frequency and SPL level.
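  • The calibration sweep of FIG. 18 can be summarised, under stated assumptions, by the sketch below. The command-code format is abstracted into a send_command() helper, and read_spl_meter() and prompt_move_coupler() stand in for the reference SPL meter reading and the operator prompt of decision box 118; all three helpers are assumptions introduced for illustration.

```python
import time

# Illustrative sketch of the calibration sweep: for each transducer, each
# test frequency and each requested SPL, send a command code, wait, measure
# the reproduced tone with the reference SPL meter and record the result.
FREQUENCIES_HZ = [100, 250, 500, 1000, 2000, 4000, 8000, 16000]
SPL_STEPS_DB = range(0, 101, 10)

def run_calibration(send_command, read_spl_meter, prompt_move_coupler):
    results = {}   # (side, frequency_hz, requested_db) -> measured_db
    for side in ("left", "right"):
        prompt_move_coupler(side)                     # decision box 118 / step 102
        for freq in FREQUENCIES_HZ:                   # frequency loop 126
            for requested in SPL_STEPS_DB:            # SPL loop 124
                send_command(side, freq, requested)   # steps 104-108
                time.sleep(0.5)                       # settle before measuring
                measured = read_spl_meter()           # step 110
                results[(side, freq, requested)] = measured   # step 112
    return results
```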
  • In use, by way of example, if the command code “025-50” issued by an interface device 6, requesting an SPL of 50 dB at 1 kHz, were reproduced as a test tone of 45 dB by the left transducer 21 a, the mapping associated with the interface application appropriate to the headset model would make the appropriate adjustment during 1 kHz tone production by processor 11. See the example results table depicted in FIG. 17, wherein the mapping may be derived from the difference or offset between the requested and measured SPL results for the left transducer 21 a. It will be appreciated that a similar table portion is generated for the right transducer 21 b, across the full range of desired frequencies and SPLs.
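  • A possible use of the stored mapping at playback time is sketched below; the single-point additive offset and the produce_tone() helper are assumptions made for illustration, and in practice interpolation between calibration points may be preferred.

```python
# Illustrative sketch (assumed additive offset and produce_tone() helper):
# adjust a requested tone level by the offset between the requested and
# measured SPL recorded for that calibration point.
def calibrated_level(mapping, side, freq, requested_db):
    measured = mapping[(side, freq, requested_db)]
    offset = requested_db - measured   # e.g. 50 dB requested, 45 dB measured -> +5 dB
    return requested_db + offset

def play_calibrated_tone(mapping, produce_tone, side, freq, requested_db):
    produce_tone(side, freq, calibrated_level(mapping, side, freq, requested_db))
```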
  • In compliance with the statute, the invention has been described in language more or less specific to structural features or methodical steps. The term “comprises” and its variations, such as “comprising” and “comprised of” is used throughout in an inclusive sense and not to the exclusion of any additional features. It is to be understood that the invention is not limited to specific features shown or described since the means herein described comprises preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted by those skilled in the art.
  • Throughout the specification and claims (if present), unless the context requires otherwise, the term “substantially” or “about” will be understood to not be limited to the value for the range qualified by the terms.
  • Any embodiment of the invention is meant to be illustrative only and is not meant to be limiting to the invention. Therefore, it should be appreciated that various other changes and modifications can be made to any embodiment described without departing from the spirit and scope of the invention.

Claims (15)

We claim:
1. A method for calibrating a sound delivery system having a processing assembly, a data communications assembly coupled to the processing assembly, and at least one audio transducer mounted with at least one processor of the processing assembly and responsive thereto for delivering sound to a user, the method including the steps of:
transmitting from a remote user interface device for the sound delivery system, a sequence of command codes for specifying predetermined characteristics of test sounds;
receiving the command code sequence at the communications assembly of the sound delivery system;
providing the command code sequence to the processing assembly of the sound delivery system;
reproducing by a selected at least one audio transducer, the predetermined test sounds under control of said at least one processor according to the command code sequence;
measuring with a reference meter proximate to the audio transducer, characteristics of test sounds reproduced by the sound delivery system;
comparing the measured characteristics of the reproduced sounds with the predetermined characteristics of the test sounds;
producing a mapping of specified test sounds to sounds reproduced by said at least one audio transducer; and
storing the mapping in an electronic memory associated with the processing assembly.
2. The calibration method of claim 1 wherein the transmitting step involves use of wireless transmission employing a local or near field communications standard.
3. The calibration method of claim 1 wherein the test sounds include a sequence of discrete sounds of different frequencies and different sound pressure levels (SPL) within each frequency, suitably covering a typical range of human hearing.
4. The calibration method of claim 3 wherein the frequencies of the test sounds are in a range of frequencies from 10 Hz to 30 kHz, suitably 20 Hz to 20 kHz.
5. The calibration method of claim 3 wherein the sound pressure levels of the test sounds are in a range from −10 dB to 120 dB, suitably 0 dB to 110 dB, within each discrete sound frequency.
6. The calibration method of claim 3 wherein each of the discrete test sounds in the sequence is of equal duration and spaced apart from adjacent sounds by a period of silence.
7. The calibration method of claim 6 wherein the discrete sound duration is in a range from 0.1 milliseconds to 5 seconds, suitably 100 milliseconds to 1 second.
8. The calibration method of claim 6 wherein the silence period is in a range from 0.1 milliseconds to 5 seconds, suitably 100 milliseconds to 1 second.
9. The calibration method of claim 1 wherein the storing step involves storing the test sound mapping in a code base utilised by an audio application interface of the sound delivery system.
10. The calibration method of claim 9 wherein the code base is stored in a non-volatile portion of the electronic memory.
11. The calibration method of either claim 9 or claim 10 wherein the code base is also stored remotely in a database and associated with an interface application for the sound delivery system, for down-loading with the interface application on request.
12. A sound delivery system comprising:
a processing assembly including at least one processor and an electronic memory;
an interface for a user coupled to the at least one processing assembly;
at least one audio transducer responsive to the processing assembly for delivering sound to the user; and
wherein the electronic memory is accessible by the at least one processor and stores:
instructions for the processor to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via said audio transducer;
a code base utilised by an audio application interface of the sound delivery system;
wherein the sounds delivered via the transducer for determining the compensatory weights are generated by a transducer processor mounted within a transducer portion which includes the at least one audio transducer; and
wherein the sound delivery system is calibrated in accordance with the method of claim 1.
13. The sound delivery system of claim 12 wherein a processor of the processing assembly is mounted with said at least one audio transducer.
14. The sound delivery system of claim 13 wherein the audio transducers comprise a pair of speakers mounted in a set of headphones.
15. An automatic audiological testing apparatus comprising:
a processing assembly having at least one processor;
an electronic memory in communication with the processor and containing instructions for execution by said at least one processor;
a user interface in communication with the processor; and
at least one audio transducer mounted with the processing assembly and responsive to the at least one processor for delivering sound to a user;
wherein the electronic memory stores instructions for the processor to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds at a number of different frequencies;
wherein the sounds delivered via the transducer for determining the compensatory weights are generated by a transducer processor mounted within a transducer portion which includes the at least one audio transducer; and
wherein the audiological testing apparatus is calibrated in accordance with the method of claim 1.
US17/155,465 2015-06-29 2021-01-22 Calibration Method for Customizable Personal Sound Delivery Systems Pending US20210141595A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/155,465 US20210141595A1 (en) 2015-06-29 2021-01-22 Calibration Method for Customizable Personal Sound Delivery Systems

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
AU2015902532A AU2015902532A0 (en) 2015-06-29 A customisable personal sound delivery system
AUAU2015902532 2015-06-29
US15/196,256 US20170046120A1 (en) 2015-06-29 2016-06-29 Customizable Personal Sound Delivery System
US16/043,351 US10936277B2 (en) 2015-06-29 2018-07-24 Calibration method for customizable personal sound delivery system
US17/155,465 US20210141595A1 (en) 2015-06-29 2021-01-22 Calibration Method for Customizable Personal Sound Delivery Systems

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/043,351 Continuation US10936277B2 (en) 2015-06-29 2018-07-24 Calibration method for customizable personal sound delivery system

Publications (1)

Publication Number Publication Date
US20210141595A1 true US20210141595A1 (en) 2021-05-13

Family

ID=64658146

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/043,351 Active 2036-07-08 US10936277B2 (en) 2015-06-29 2018-07-24 Calibration method for customizable personal sound delivery system
US17/155,465 Pending US20210141595A1 (en) 2015-06-29 2021-01-22 Calibration Method for Customizable Personal Sound Delivery Systems

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/043,351 Active 2036-07-08 US10936277B2 (en) 2015-06-29 2018-07-24 Calibration method for customizable personal sound delivery system

Country Status (1)

Country Link
US (2) US10936277B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449427A (en) * 2020-11-02 2022-05-06 原相科技股份有限公司 Hearing assistance device and method for adjusting output sound of hearing assistance device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6350243B1 (en) * 1999-12-29 2002-02-26 Bruel-Bertrand & Johnson Acoustics, Inc. Portable hearing threshold tester
US20020183648A1 (en) * 2001-05-03 2002-12-05 Audia Technology, Inc. Method for customizing audio systems for hearing impaired
US20030065276A1 (en) * 2001-09-28 2003-04-03 Nidek Co., Ltd. Audiometer
US20050078838A1 (en) * 2003-10-08 2005-04-14 Henry Simon Hearing ajustment appliance for electronic audio equipment
US20050094822A1 (en) * 2005-01-08 2005-05-05 Robert Swartz Listener specific audio reproduction system
US20070204694A1 (en) * 2006-03-01 2007-09-06 Davis David M Portable audiometer enclosed within a patient response mechanism housing
US20110009770A1 (en) * 2009-07-13 2011-01-13 Margolis Robert H Audiometric Testing and Calibration Devices and Methods
US20120230501A1 (en) * 2009-09-03 2012-09-13 National Digital Research Centre auditory test and compensation method
US20120288199A1 (en) * 2011-05-09 2012-11-15 Olympus Corporation Image processing apparatus, image processing method, and computer-readable recording device
US20140194775A1 (en) * 2010-08-05 2014-07-10 Ace Communications Limited Method and System for Self-Managed Sound Enhancement
US20170231535A1 (en) * 2014-08-17 2017-08-17 Audyx Systems Ltd. Audiometer

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1196006A3 (en) 2000-10-03 2008-08-27 FreeSystems Pte Ltd On-demand audio entertainment device that allows wireless download content
US20040131206A1 (en) 2003-01-08 2004-07-08 James Cao User selectable sound enhancement feature
KR100636213B1 (en) 2004-12-28 2006-10-19 삼성전자주식회사 Method for compensating audio frequency characteristic in real-time and sound system thereof
US20070110256A1 (en) 2005-11-17 2007-05-17 Odi Audio equalizer headset
WO2007112918A1 (en) * 2006-04-04 2007-10-11 Cleartone Technologies Limited Calibrated digital headset and audiometric test methods therewith
US8060149B1 (en) 2006-07-31 2011-11-15 Cynthia Louis Wireless radio and headphones system and associated method
US8379871B2 (en) 2010-05-12 2013-02-19 Sound Id Personalized hearing profile generation with real-time feedback
US8923928B2 (en) 2010-06-04 2014-12-30 Sony Corporation Audio playback apparatus, control and usage method for audio playback apparatus, and mobile phone terminal with storage device
US9172345B2 (en) * 2010-07-27 2015-10-27 Bitwave Pte Ltd Personalized adjustment of an audio device
US8965756B2 (en) 2011-03-14 2015-02-24 Adobe Systems Incorporated Automatic equalization of coloration in speech recordings
US20130223661A1 (en) 2012-02-27 2013-08-29 Michael Uzuanis Customized hearing assistance device system
US9020161B2 (en) * 2012-03-08 2015-04-28 Harman International Industries, Incorporated System for headphone equalization
KR102006734B1 (en) 2012-09-21 2019-08-02 삼성전자 주식회사 Method for processing audio signal and wireless communication device
WO2014124449A1 (en) 2013-02-11 2014-08-14 Symphonic Audio Technologies Corp. Methods for testing hearing
US9344815B2 (en) 2013-02-11 2016-05-17 Symphonic Audio Technologies Corp. Method for augmenting hearing
WO2015124598A1 (en) 2014-02-18 2015-08-27 Dolby International Ab Device and method for tuning a frequency-dependent attenuation stage
US9943253B2 (en) 2015-03-20 2018-04-17 Innovo IP, LLC System and method for improved audio perception
AU2016100861A4 (en) 2015-06-29 2016-07-07 Audeara Pty. Ltd. A customisable personal sound delivery system
TWI629906B (en) 2017-07-26 2018-07-11 統音電子股份有限公司 Headphone system

Also Published As

Publication number Publication date
US20180364971A1 (en) 2018-12-20
US10936277B2 (en) 2021-03-02

Similar Documents

Publication Publication Date Title
CN107615651B (en) System and method for improved audio perception
AU2016100861A4 (en) A customisable personal sound delivery system
US20230111715A1 (en) Fitting method and apparatus for hearing earphone
US7564979B2 (en) Listener specific audio reproduction system
US8948425B2 (en) Method and apparatus for in-situ testing, fitting and verification of hearing and hearing aids
US20140193008A1 (en) System and method for fitting of a hearing device
JP2016525315A (en) Hearing aid fitting system and method using speech segments representing appropriate soundscape
US20180098720A1 (en) A Method and Device for Conducting a Self-Administered Hearing Test
US10341790B2 (en) Self-fitting of a hearing device
US11818545B2 (en) Method to acquire preferred dynamic range function for speech enhancement
WO2020019020A1 (en) Calibration method for customizable personal sound delivery systems
KR100643311B1 (en) Apparatus and method for providing stereophonic sound
WO2005125280A2 (en) Hearing aid demonstration unit and method of using
US20190141462A1 (en) System and method for performing an audiometric test and calibrating a hearing aid
US20210141595A1 (en) Calibration Method for Customizable Personal Sound Delivery Systems
WO2004004414A1 (en) Method of calibrating an intelligent earphone
US20230179934A1 (en) System and method for personalized fitting of hearing aids
US20170251310A1 (en) Method and device for the configuration of a user specific auditory system
CN207304895U (en) Audio playing system
KR102393176B1 (en) Optimal sound setting device and method therefor
WO2021249611A1 (en) A control device for performing an acoustic calibration of an audio device

Legal Events

Date Code Title Description
AS Assignment

Owner name: AUDEARA PTY. LTD., AUSTRALIA

Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNORS:JEFFERY, CHRISTOPHER ARNOLD;FIELDING, JAMES ALEXANDER;AFFLICK, ALEX JOHN;SIGNING DATES FROM 20160816 TO 20161121;REEL/FRAME:055027/0829

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: AMENDMENT AFTER NOTICE OF APPEAL

STCV Information on status: appeal procedure

Free format text: NOTICE OF APPEAL FILED