CN115460526B - Method for determining hearing model, electronic equipment and system

Publication number: CN115460526B
Authority: CN (China)
Prior art keywords: loss, amplitude, vertical, horizontal, test
Legal status: Active
Application number: CN202211414959.0A
Other languages: Chinese (zh)
Other versions: CN115460526A
Inventors: 杨昭, 韩荣
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority to CN202211414959.0A
Publication of CN115460526A
Application granted
Publication of CN115460526B

Classifications

    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; electric tinnitus maskers providing an auditory perception (under H: Electricity; H04: Electric communication technique; H04R: Loudspeakers, microphones, gramophone pick-ups or like acoustic electromechanical transducers; deaf-aid sets; public address systems)
    • H04R25/30 Monitoring or testing of hearing aids, e.g. functioning, settings, battery power
    • H04R25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R25/48 Using constructional means for obtaining a desired frequency response
    • H04R25/70 Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Neurosurgery (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A method, an electronic device, and a system for determining a hearing model relate to the technical field of hearing assistance, and can determine a user's hearing condition based on orientation tests to obtain a hearing model that assists in implementing hearing compensation. The method comprises: playing horizontal test audio comprising a plurality of first sound signals that differ in frequency and in the horizontal orientation at which they appear but have the same amplitude, and receiving a horizontal test result comprising the user's horizontal perceived orientations of the plurality of first sound signals; playing vertical test audio comprising a plurality of second sound signals that differ in amplitude and in the vertical orientation at which they appear but have the same spectral signal, and receiving a vertical test result comprising the user's vertical perceived orientations of the plurality of second sound signals; and deriving a hearing model of the user based on the horizontal and vertical test results, the hearing model comprising the first lowest amplitude at which the user can perceive sound signals of respective frequencies.

Description

Method for determining hearing model, electronic equipment and system
Technical Field
The present application relates to the technical field of hearing assistance functions, and in particular, to a method, an electronic device, and a system for determining a hearing model.
Background
If the user stays in a noisy environment for a long time or has poor ear-use habits, the user's hearing will be damaged, and hearing loss is irreversible. For a hearing-impaired user and an unimpaired user, the same audio played by an electronic device such as a mobile phone or a tablet may sound completely different. That is, for a hearing-impaired user, the audio heard may not achieve the desired effect. To achieve the desired effect, the electronic device needs to compensate the played audio, for example by increasing the amplitude of the frequency bands in which the user's hearing is impaired.
Clearly, the electronic device first needs to determine the user's hearing ability, that is, the lowest amplitude the user can hear in each frequency band, before it can compensate accurately based on the determined amplitudes.
However, the inventors found, in the course of implementing the embodiments of the present application, that there is at present no simple solution for determining the hearing ability of a user, which prevents an electronic device from implementing compensation accurately.
Disclosure of Invention
In view of this, the present application provides a method, an electronic device, and a system for determining a hearing model, which can determine a hearing condition of a user based on an orientation test, that is, obtain the hearing model, so as to assist in implementing hearing compensation.
In a first aspect, an embodiment of the present application provides a method for determining a hearing model, applied to an electronic device. Horizontal test audio is played, comprising a plurality of first sound signals that differ in frequency and in the horizontal orientation at which they appear but have the same amplitude. A horizontal test result fed back by the user is received, comprising the user's horizontal perceived orientations of the plurality of first sound signals. The user relies on the interaural level difference (ILD) to perceive horizontal orientation, that is, amplitude affects the user's perception of horizontal orientation. Setting the horizontal test audio to include a plurality of sound signals with the same amplitude therefore prevents amplitude differences from influencing the user's judgment of horizontal orientation.
Vertical test audio is played, comprising a plurality of second sound signals that differ in amplitude and in the vertical orientation at which they appear but have the same spectral signal. A vertical test result fed back by the user is received, comprising the user's vertical perceived orientations of the plurality of second sound signals. The user relies on changes in the spectral signal to perceive changes in vertical orientation, that is, the spectral signal affects the user's perception of vertical orientation. Setting the vertical test audio to include sound signals with the same spectral signal therefore prevents spectral differences from influencing the user's judgment of vertical orientation.
Finally, a hearing model of the user is derived based on the horizontal and vertical test results, the hearing model comprising the first lowest amplitude at which the user can perceive sound signals of respective frequencies. That is, the hearing model reflects the user's hearing ability for sound signals of various frequencies and can therefore be used for hearing compensation. For example, if the lowest amplitude corresponding to 1.1 kHz in the tested user's hearing model is 40 dB while that in a normal user's hearing model is 30 dB, the tested user's audible threshold at 1.1 kHz is 10 dB higher than a normal user's, so the electronic device can raise the amplitude by 10 dB when playing sound at 1.1 kHz. In this way, audio can be played in a personalized manner for a hearing-impaired user, achieving the same audio effect as for a normal user.
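Expressed as a formula (the notation is ours, not the patent's), the compensation gain at a frequency $f$ is the difference between the tested user's lowest audible amplitude and that of a population without hearing loss:

$$G(f) = A_{\mathrm{user}}(f) - A_{\mathrm{normal}}(f), \qquad \text{e.g. } G(1.1\,\mathrm{kHz}) = 40\,\mathrm{dB} - 30\,\mathrm{dB} = 10\,\mathrm{dB}.$$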
In a possible design manner of the first aspect, playing the horizontal test audio includes: playing the horizontal test audio through an earphone; and playing the vertical test audio includes: playing the vertical test audio through the earphone.
With this embodiment, the test audio is played through the earphone, which reduces the influence of the surrounding environment on the hearing test.
In a possible design manner of the first aspect, the method further includes: acquiring a first feedforward signal collected by the feedforward microphone in the left earplug of the earphone, and acquiring a second feedforward signal collected by the feedforward microphone in the right earplug of the earphone. Playing the horizontal test audio through the earphone and playing the vertical test audio through the earphone includes: playing the horizontal test audio and the vertical test audio through the earphone when the first feedforward signal is smaller than a first threshold and the second feedforward signal is smaller than the first threshold.
With this embodiment, the test audio is played only when the feedforward signals of both earplugs are small, that is, when the ambient noise around the two earplugs is low. This prevents ambient noise from affecting the user's perception of orientation.
In a possible design manner of the first aspect, the method further includes: acquiring a first feedback signal collected by the feedback microphone in the left earplug and a second feedback signal collected by the feedback microphone in the right earplug. Playing the horizontal test audio through the earphone and playing the vertical test audio through the earphone includes: playing the horizontal test audio and the vertical test audio through the earphone when the difference between the first feedforward signal and the first feedback signal is larger than a second threshold and the difference between the second feedforward signal and the second feedback signal is larger than the second threshold.
With this embodiment, the test audio is played only when the difference between the feedforward signal and the feedback signal is large in both earplugs, that is, when the left and right earplugs both fit the ear closely and isolate external environmental sound well. This avoids the situation where poor sound insulation of the earphone affects the user's perception of orientation.
In a possible design manner of the first aspect, the method further includes: sending the same preset audio to the left earplug and the right earplug of the earphone, where the preset audio is used to test the hardware difference between the speaker of the left earplug and the speaker of the right earplug. A third feedback signal collected by the feedback microphone in the left earplug and a fourth feedback signal collected by the feedback microphone in the right earplug are acquired, where the third feedback signal includes the result of the left earplug's speaker playing the preset audio, and the fourth feedback signal includes the result of the right earplug's speaker playing the preset audio. A calibration coefficient that reconciles the third feedback signal and the fourth feedback signal is determined and sent to a first earplug, which is either the left earplug or the right earplug. That is, the calibration coefficient can mask the hardware difference between the speaker in the left earplug and the speaker in the right earplug.
Correspondingly, playing the horizontal test audio through the earphone and playing the vertical test audio through the earphone includes: the first earplug calibrates the horizontal test audio using the calibration coefficient and plays the calibrated horizontal test audio, while the second earplug plays the horizontal test audio as-is; likewise, the first earplug calibrates the vertical test audio using the calibration coefficient and plays the calibrated vertical test audio, while the second earplug plays the vertical test audio as-is. The second earplug is whichever of the left and right earplugs is not the first earplug. For example, if the first earplug is the left earplug, the second earplug is the right earplug, and vice versa.
With this embodiment, the calibration coefficient masks the hardware difference between the left and right earplugs, preventing that difference from affecting the test results.
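The patent does not disclose how the calibration coefficient is computed. The following is a minimal sketch under the assumption that it is a per-frequency-bin gain ratio between the two feedback recordings of the same preset audio (all function and variable names are ours):

```python
import numpy as np

def calibration_coefficient(fb_left: np.ndarray, fb_right: np.ndarray,
                            n_fft: int = 1024) -> np.ndarray:
    """Per-frequency-bin gain that reconciles the two speakers.

    fb_left / fb_right: feedback-microphone recordings (the third and
    fourth feedback signals) of the SAME preset audio, time-aligned.
    """
    mag_left = np.abs(np.fft.rfft(fb_left, n_fft))    # what the left speaker produced
    mag_right = np.abs(np.fft.rfft(fb_right, n_fft))  # what the right speaker produced
    eps = 1e-12                                       # avoid division by zero
    return mag_right / (mag_left + eps)               # gain for the first (left) earplug

def apply_calibration(test_audio: np.ndarray, coeff: np.ndarray,
                      n_fft: int = 1024) -> np.ndarray:
    """Calibrate one n_fft-sample block of test audio on the first earplug."""
    spectrum = np.fft.rfft(test_audio, n_fft)
    return np.fft.irfft(spectrum * coeff, n_fft)
```

In practice the coefficient would be applied block by block to the test audio stream; the single-FFT version here only illustrates the reconciliation step.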
In a possible design manner of the first aspect, obtaining the hearing model of the user based on the horizontal test result and the vertical test result includes: determining a horizontal angular loss of the user's hearing based on the horizontal test result, the horizontal angular loss comprising the angular deviation of the user's horizontal perceived orientation for sound signals of respective frequencies; determining a vertical angular loss of the user's hearing based on the vertical test result, the vertical angular loss comprising the angular deviation of the user's vertical perceived orientation for sound signals of respective amplitudes; and calculating the hearing model of the user based on the horizontal angular loss and the vertical angular loss.
In a possible design manner of the first aspect, the electronic device stores a first correspondence and a second correspondence. The first correspondence relates changes in amplitude to changes in the horizontal perceived orientation of a sound signal for a population without hearing loss, and the second correspondence relates changes in frequency to changes in the vertical perceived orientation of a sound signal for that population. Calculating the hearing model of the user based on the horizontal angular loss and the vertical angular loss includes: converting, based on the first correspondence, the horizontal angular loss into a relative amplitude loss of the user's two ears, the relative amplitude loss comprising the difference in amplitude perceived by the user's left ear and right ear for the sound signal at each frequency; converting, based on the second correspondence, the vertical angular loss into a frequency loss model of the user's two ears, the frequency loss model comprising the second lowest amplitude at which the user can perceive the sound signal at each frequency; and correcting the frequency loss model based on the relative amplitude loss to obtain the hearing model.
With this embodiment, the angular loss in the horizontal orientation can be converted into a relative amplitude loss using the pre-obtained first correspondence, and the angular loss in the vertical orientation can be converted into a frequency loss model using the pre-obtained second correspondence. That is, the frequency loss model gives the lowest amplitude the user can hear for sound at each frequency, derived from the vertical angular loss alone, without taking the horizontal orientation into account. By further correcting this base model with the relative amplitude loss, a hearing model that integrates the angular losses in both the horizontal and vertical orientations is obtained, so the resulting hearing model is more reasonable.
In a possible design of the first aspect, correcting the frequency loss model based on the relative amplitude loss to obtain the hearing model includes: adding the amplitude difference corresponding to a first frequency in the relative amplitude loss to the second lowest amplitude corresponding to the first frequency in the frequency loss model, to obtain the first lowest amplitude corresponding to the first frequency in the hearing model. The first frequency is any frequency included in the frequency loss model.
With this embodiment, the lowest amplitude corresponding to each frequency in the hearing model can be obtained by correcting the frequency loss model.
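As an illustration only, assuming both quantities are stored as frequency-to-dB tables (the data layout and names are our assumption, not the patent's), the correction step described above reduces to a per-frequency addition:

```python
def correct_frequency_loss_model(freq_loss_model: dict[float, float],
                                 relative_amp_loss: dict[float, float]) -> dict[float, float]:
    """Hearing model = frequency loss model corrected by relative amplitude loss.

    freq_loss_model: frequency (Hz) -> second lowest audible amplitude (dB),
                     derived from the vertical angular loss.
    relative_amp_loss: frequency (Hz) -> perceived left/right amplitude
                       difference (dB), derived from the horizontal angular loss.
    """
    return {f: freq_loss_model[f] + relative_amp_loss.get(f, 0.0)
            for f in freq_loss_model}
```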
In a possible design of the first aspect, considering that the user may be hearing-impaired, the hearing loss itself may interfere with the measurement of vertical orientation. Therefore, before testing the angular loss in the vertical orientation, the absolute amplitude losses of the user's two ears may be tested, including a first absolute amplitude loss of the left ear and a second absolute amplitude loss of the right ear. Then, during the vertical-orientation test, the first and second absolute amplitude losses are smoothed out, so that the vertical-orientation test is not disturbed by the amplitude loss.
Based on this, the method further includes: subtracting the first absolute amplitude loss from the amplitudes of the plurality of second sound signals in the vertical test audio to obtain a plurality of left-ear test signals, and subtracting the second absolute amplitude loss from the amplitudes of the plurality of second sound signals to obtain a plurality of right-ear test signals; the left-ear test signals and the right-ear test signals together form the updated vertical test audio. The first absolute amplitude loss is the difference, within a preset frequency range, between the lowest amplitude audible to a population without hearing loss and the lowest amplitude audible to the user's left ear; the second absolute amplitude loss is the corresponding difference for the user's right ear. Correspondingly, playing the vertical test audio through the earphone includes: playing the plurality of left-ear test signals through the left earplug of the earphone and the plurality of right-ear test signals through the right earplug.
With this embodiment, interference from amplitude loss during the vertical-orientation test can be avoided, improving the accuracy of the vertical-orientation test.
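A minimal sketch of constructing the updated vertical test audio, assuming the second sound signals are described by their amplitudes in dB (names and layout are ours):

```python
def build_updated_vertical_test_audio(second_signal_amps: list[float],
                                      abs_loss_left: float,
                                      abs_loss_right: float) -> tuple[list[float], list[float]]:
    """Smooth out each ear's absolute amplitude loss before the vertical test.

    second_signal_amps: amplitudes (dB) of the second sound signals in the
    vertical test audio. The absolute losses follow the patent's definition
    (lowest audible amplitude of a no-loss population minus the user's),
    so subtracting them raises the level for an impaired ear.
    """
    left_ear = [a - abs_loss_left for a in second_signal_amps]    # left earplug
    right_ear = [a - abs_loss_right for a in second_signal_amps]  # right earplug
    return left_ear, right_ear
```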
The electronic device likewise stores the first correspondence and the second correspondence: the first correspondence relates changes in amplitude to changes in the horizontal perceived orientation of a sound signal for a population without hearing loss, and the second correspondence relates changes in frequency to changes in the vertical perceived orientation of a sound signal for that population.
Correspondingly, when calculating the hearing loss, the first and second absolute amplitude losses that were smoothed out earlier need to be added back. Thus, calculating the hearing model of the user based on the horizontal angular loss and the vertical angular loss includes: converting, based on the first correspondence, the horizontal angular loss into a relative amplitude loss of the user's two ears, the relative amplitude loss comprising the difference in amplitude perceived by the user's left ear and right ear for the sound signal at each frequency; correcting the relative amplitude loss based on the first and second absolute amplitude losses to obtain a binaural amplitude loss; converting, based on the second correspondence, the vertical angular loss into a frequency loss model of the user's two ears, the frequency loss model comprising the second lowest amplitude at which the user can perceive the sound signal at each frequency; and correcting the frequency loss model based on the binaural amplitude loss to obtain the hearing model.
With this embodiment, the first and second absolute amplitude losses that were smoothed out earlier are compensated back on top of the relative amplitude loss, so an accurate amplitude loss can be obtained.
In a possible design manner of the first aspect, correcting the relative amplitude loss based on the first and second absolute amplitude losses to obtain the binaural amplitude loss includes: adding the difference between the first absolute amplitude loss and the second absolute amplitude loss to the first amplitude difference corresponding to a second frequency in the relative amplitude loss, to obtain the second amplitude difference corresponding to the second frequency in the binaural amplitude loss.
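Written out (symbols ours): with $L_1$ and $L_2$ the first and second absolute amplitude losses, and $\Delta A(f_2)$ the first amplitude difference at a second frequency $f_2$ in the relative amplitude loss, the second amplitude difference in the binaural amplitude loss is

$$\Delta A_{\mathrm{binaural}}(f_2) = \Delta A(f_2) + (L_1 - L_2).$$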
In a possible design of the first aspect, after the hearing model of the user is obtained based on the horizontal and vertical test results, the method further includes: increasing the sound signal of a third frequency in the audio to be played by a preset amplitude before playing, where the third frequency is any frequency of the sound signals included in the audio to be played, and the preset amplitude is the difference between the first lowest amplitude corresponding to the third frequency in the hearing model and the third lowest amplitude at which a population without hearing loss can hear the sound signal of the third frequency. For example, if the lowest amplitude corresponding to 1.1 kHz in the hearing model is 40 dB while that in a normal user's hearing model is 30 dB, the tested user's audible threshold at 1.1 kHz is 10 dB higher than a normal user's, so the electronic device can raise the amplitude by 10 dB when playing sound at 1.1 kHz. In this way, audio can be played in a personalized manner for a hearing-impaired user, achieving the same audio effect as for a normal user.
In a possible design manner of the first aspect, the horizontal test audio includes a first horizontal test audio and a second horizontal test audio, and the order of the frequencies of the first sound signals appearing in sequence in the first horizontal test audio is not completely the same as the order of the frequencies of the first sound signals appearing in sequence in the second horizontal test audio.
With this embodiment, the user's hearing can be tested under different frequency orders, so the tested user's hearing of sound signals of various frequencies can be determined more accurately.
In a possible design manner of the first aspect, the vertical test audio includes a first vertical test audio and a second vertical test audio, and the order of the amplitudes of the second sound signals appearing in sequence in the first vertical test audio is not completely the same as the order of the amplitudes of the second sound signals appearing in sequence in the second vertical test audio.
With this embodiment, the user's hearing can be tested under different amplitude orders, so the tested user's hearing of sound signals of various amplitudes can be determined more accurately.
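The patent does not specify how the differing orders are produced; one plausible sketch, assuming the stimuli are simply reshuffled (the helper name and parameters are ours), is:

```python
import random

def make_test_orders(stimuli: list, seed: int = 0) -> tuple[list, list]:
    """Two presentation orders of the same stimuli that are not completely
    the same (assumes at least two distinct stimuli)."""
    rng = random.Random(seed)
    first = stimuli[:]       # first test audio: the given order
    second = stimuli[:]
    while second == first:   # reshuffle until the orders differ
        rng.shuffle(second)
    return first, second

# e.g. frequencies (Hz) of the first sound signals for the horizontal test;
# for the vertical test the list would hold amplitudes instead.
first_seq, second_seq = make_test_orders([250, 500, 1000, 2000, 4000, 8000])
```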
In a second aspect, embodiments of the present application further provide an electronic device, which includes a memory and one or more processors; the memory and the processor are coupled; the memory is for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method as described in the first aspect and any of its possible designs.
In a third aspect, an embodiment of the present application provides a chip system, where the chip system is applied to an electronic device that includes a display screen and a memory; the chip system includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a line; the interface circuit is to receive a signal from a memory of the electronic device and to send the signal to the processor, the signal comprising computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device performs the method as described in the first aspect and any one of its possible designs.
In a fourth aspect, the present application provides a computer-readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method according to the first aspect and any one of its possible design approaches.
In a fifth aspect, the present application provides a computer program product for causing a computer to perform the method according to the first aspect and any one of its possible designs when the computer program product runs on the computer.
In a sixth aspect, the present application provides a communication system comprising an electronic device for performing the method according to the first aspect and any one of its possible designs, and comprising a headset for playing test audio.
It should be understood that, for the advantageous effects achievable by the electronic device according to the second aspect, the chip system according to the third aspect, the computer-readable storage medium according to the fourth aspect, the computer program product according to the fifth aspect, and the communication system according to the sixth aspect, reference may be made to the advantageous effects of the first aspect and any one of its possible designs, which are not repeated here.
Drawings
FIG. 1 is a simplified diagram of a process for determining a hearing model according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a communication system according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a mobile phone according to an embodiment of the present application;
FIG. 4 is a schematic diagram of types of earphones provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an earphone according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a speaker and microphones in an earphone according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating the principle of implementing audio compensation according to an embodiment of the present application;
FIG. 8 is a first schematic diagram of a mobile phone interface provided in an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating the principle of implementing environment detection according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a test sequence provided by an embodiment of the present application;
FIG. 11 is an interactive flowchart for determining angular loss according to an embodiment of the present application;
FIG. 12 is a second schematic diagram of a mobile phone interface provided in an embodiment of the present application;
FIG. 13 is a graphical illustration of angular loss provided by an embodiment of the present application;
FIG. 14 is a schematic diagram illustrating hearing model determination provided by an embodiment of the present application;
FIG. 15 is an interactive flowchart for determining absolute amplitude loss provided by an embodiment of the present application;
FIG. 16 is a third schematic diagram of a mobile phone interface provided in an embodiment of the present application;
FIG. 17 is a graph illustrating the result of an absolute amplitude loss provided by an embodiment of the present application;
FIG. 18 is an interactive flowchart for determining angular loss in combination with absolute amplitude loss according to an embodiment of the present application;
FIG. 19 is a schematic diagram of hearing model determination with absolute amplitude loss provided by an embodiment of the present application;
FIG. 20 is a structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
In the conventional art, there are two main ways of determining a user's hearing impairment. One is to make a coarse frequency-response judgment for the left and right ears by playing sounds of different frequencies; this method usually yields results at only sparse frequencies and cannot determine the impairment accurately. The other uses complex test signals and intensive subjective listening to draw a hearing curve; the diagnosis takes a long time and requires medical equipment and a suitable environment in addition to professional medical personnel, so the professional requirements are high.
Due to these defects, the conventional techniques cannot be used by an electronic device to accurately determine a user's hearing impairment and to compensate the audio before playing it, so that a hearing-impaired user obtains a better listening effect.
Based on this, an embodiment of the present application provides an audio playing method. Referring to FIG. 1, an electronic device may test the angular loss of the tested user's two ears in the horizontal orientation and in the vertical orientation. The angular loss in the horizontal orientation refers to the angular deviation between the horizontal orientation of a sound source as perceived by the tested user and as perceived by a normal user. The angular loss in the vertical orientation refers to the angular deviation between the vertical orientation of a sound source as perceived by the tested user and as perceived by a normal user. The electronic device then determines the hearing model of the tested user based on the angular losses in the horizontal and vertical orientations. The hearing model comprises the lowest amplitude (e.g., the ordinate a of the hearing model in FIG. 1) at which the tested user can hear sounds of different frequencies (e.g., the abscissa f of the hearing model in FIG. 1). Finally, before playing audio, the electronic device can perform compensation based on the hearing models of the tested user and of a normal user. For example, if the lowest amplitude corresponding to 1.1 kHz in the tested user's hearing model is 40 dB while that in a normal user's hearing model is 30 dB, the tested user's audible threshold at 1.1 kHz is 10 dB higher than that of a normal user (i.e., a population without hearing loss), so the electronic device can raise the amplitude by 10 dB when playing sound at 1.1 kHz. In this way, audio can be played in a personalized manner for a hearing-impaired tested user, achieving the same audio effect as for a normal user.
In summary, according to the embodiments of the present application, the hearing model of the tested user is ultimately obtained by testing the angular loss in the horizontal orientation and the angular loss in the vertical orientation; these angular losses are clearly easier to measure, so the scheme is simple to implement. Moreover, if the tested user's perception deviates in the horizontal and vertical orientations, it can essentially be concluded that hearing loss exists, so a hearing model obtained from the horizontal and vertical angular losses is generally accurate. The scheme of the embodiments of the present application can therefore be used to compensate audio so that a hearing-impaired user obtains a better listening effect.
An embodiment of the present application further provides a communication system for implementing the audio playing method provided herein. Referring to FIG. 2, the communication system includes an electronic device, such as a mobile phone 210, and a headset, such as headset 220. The electronic device and the headset are communicatively connected, for example by a wired or wireless connection, for transmitting data related to the test audio. The headset can play test audio that appears at various orientations in space. It should be understood that the headset needs to support stereo playback, e.g., have left and right channels, to play test audio appearing at various positions in space. After hearing the test audio, the tested user can feed back the test result, such as the perceived horizontal and vertical orientations. Based on the test result, the electronic device can determine the hearing loss of the tested user's two ears (including the angular losses in the horizontal and vertical orientations), obtain a hearing model, and complete the audio compensation.
With this communication system, the headset plays the test audio, which reduces the influence of the surrounding environment on the hearing test. Of course, in other embodiments, the audio playing method provided in the embodiments of the present application may also be implemented by the electronic device alone; that is, the test audio is also played by the electronic device, for example through its speaker. In still other embodiments, the test audio may be played by another audio playing device disposed around the electronic device. Hereinafter, the description mainly takes the above communication system as the example implementation.
For example, the electronic device in the embodiment of the present application may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or another electronic device supporting audio playing, such as a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, or a virtual reality (VR) device; the embodiment of the present application does not limit the specific form of the electronic device. Hereinafter, the description mainly takes a mobile phone as the example electronic device.
Please refer to fig. 3, which is a hardware structure diagram of a mobile phone 210 according to an embodiment of the present disclosure. As shown in fig. 3, the mobile phone 210 may include a processor 310, an external memory interface 320, an internal memory 321, a Universal Serial Bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an earphone interface 370D, a sensor module 380, keys 390, a motor 391, an indicator 392, a camera 393, a display 394, and a Subscriber Identity Module (SIM) card interface 395.
It is to be understood that the illustrated structure of the present embodiment does not specifically limit the mobile phone 210. In other embodiments, the handset 210 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 310 may include one or more processing units. For example, the processor 310 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only an exemplary illustration, and does not constitute a limitation to the structure of the mobile phone 210. In other embodiments, the mobile phone 210 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 340 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 340 may receive charging input from a wired charger via the USB interface 330. In some wireless charging embodiments, the charging management module 340 may receive a wireless charging input through a wireless charging coil of the cell phone 210. The charging management module 340 may also supply power to the mobile phone 210 through the power management module 341 while charging the battery 342.
The power management module 341 is configured to connect the battery 342, the charging management module 340 and the processor 310. The power management module 341 receives input from the battery 342 and/or the charging management module 340 and provides power to the processor 310, the internal memory 321, the external memory, the display 394, the camera 393, and the wireless communication module 360. The power management module 341 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In other embodiments, the power management module 341 may also be disposed in the processor 310. In other embodiments, the power management module 341 and the charging management module 340 may be disposed in the same device.
The wireless communication function of the mobile phone 210 can be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modem processor, the baseband processor, and the like.
The wireless communication module 360 may provide solutions for wireless communication applied to the mobile phone 210, including wireless local area networks (WLAN) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 360 may be one or more devices integrating at least one communication processing module. The wireless communication module 360 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on the electromagnetic wave signals, and transmits the processed signals to the processor 310. The wireless communication module 360 may also receive a signal to be transmitted from the processor 310, frequency-modulate and amplify it, and convert it into electromagnetic waves via the antenna 2 for radiation.
The mobile phone 210 implements a display function through the GPU, the display screen 394, and the application processor, etc. The GPU is an image processing microprocessor coupled to a display 394 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 310 may include one or more GPUs that execute program instructions to generate or change display information.
The mobile phone 210 may implement a shooting function through the ISP, the camera 393, the video codec, the GPU, the display 394, the application processor, and the like. The ISP is used to process the data fed back by the camera 393. The camera 393 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. In some embodiments, cell phone 210 may include 1 or N cameras 393, N being a positive integer greater than 1.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone 210. The external memory card communicates with the processor 310 through the external memory interface 320 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 321 may be used to store computer-executable program code, which includes instructions. The processor 310 executes various functional applications of the handset 210 and data processing by executing instructions stored in the internal memory 321. For example, the processor 310 may display different content on the display 394 in response to a user's operation to expand the display 394 by executing instructions stored in the internal memory 321. The internal memory 321 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area may store data (such as audio data, phone book, etc.) created during use of the mobile phone 210. In addition, the internal memory 321 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The mobile phone 210 can implement audio functions through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the earphone interface 370D, and the application processor. Such as music playing, recording, etc.
The keys 390 include a power-on key, a volume key, and the like. The keys 390 may be mechanical keys. Or may be touch keys. The cellular phone 210 may receive a key input, and generate a key signal input related to user setting and function control of the cellular phone 210. Motor 391 may generate a vibration cue. The motor 391 may be used for both incoming call vibration prompting and touch vibration feedback. Indicator 392 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The SIM card interface 395 is for connecting a SIM card. The SIM card can be brought into and out of contact with the cellular phone 210 by being inserted into and pulled out of the SIM card interface 395. The handset 210 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
The earphone 220 in the embodiment of the present application may be, for example, a wired earphone or a wireless earphone. Further, referring to FIG. 4, the wireless earphone may be a true wireless stereo (TWS) earphone 401, a neck-worn wireless earphone 402, or a head-worn wireless earphone 403. It should be understood that earphones typically come in pairs; as shown in FIG. 4, the TWS earphone 401 includes a left earphone 401a and a right earphone 401b. To distinguish a pair of earphones from a single earphone, hereinafter a pair is referred to as an earphone (or headset) and a single one as an earplug; corresponding to the left and right earphones are the left earplug and the right earplug.
In the following, the hardware structure of the earphone will be mainly described by taking a single earplug as an example.
Referring to fig. 5, a schematic structural diagram of an earphone 220 provided in an embodiment of the present application is shown. As shown in fig. 5, the ear bud may include a processor 510, a memory 521, a charging management module 540, a power management module 541, a battery 542, a wireless communication module 560, an audio module 570, a speaker 570A, a microphone 570C, a motor 591, an indicator 592, and the like.
Processor 510 may include one or more processing units. For example, processor 510 may include an audio signal processor (ISP), a controller, a memory, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. The different processing units may be independent devices or may be integrated into one or more processors.
The memory 521 may be used to store computer-executable program code, including instructions. The processor 510 executes various functional applications of the headset 220 and data processing by executing instructions stored in the memory 521. In the embodiment of the present application, the processor 510 may perform the test audio playback, calibration before the hearing test, and the like by executing the instructions stored in the memory 521.
The memory 521 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function) required by at least one function, and the like. The data storage area may store data (such as audio data) created during use of the headset 220, and the like. Further, the memory 521 may include a high speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a Universal Flash Storage (UFS), and the like.
The charging management module 540 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 540 may receive charging input from a wired charger via a charging interface. In some wireless charging embodiments, the charging management module 540 may receive wireless charging input through the electrode sensor 580K of the headset 220. The charging management module 540 may also provide power to the electronic device through the power management module 541 while charging the battery 542.
The power management module 541 is used to connect the battery 542, the charging management module 540 and the processor 510. The power management module 541 receives input from the battery 542 and/or the charging management module 540, and provides power to the processor 510, the internal memory 521, and the like. The power management module 541 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some embodiments, the power management module 541 may also be disposed in the processor 510. In other embodiments, the power management module 541 and the charging management module 540 may be disposed in the same device.
The wireless communication function of the wireless headset may be implemented by the wireless communication module 560, such as a Bluetooth (BT) module. Through the wireless communication module 560, the earphone 220 can establish a communication connection with an electronic device such as a mobile phone or a tablet, so that the earphone 220 can be used to listen to music, answer calls, and the like.
The headset 220 may implement audio functions through an audio module 570, speaker 570A, microphone 570C, and application processor, among other things.
The audio module 570 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 570 may also be used to encode and decode audio signals. In some embodiments, the audio module 570 may be disposed in the processor 510, or some functional modules of the audio module 570 may be disposed in the processor 510.
Speaker 570A is used to play audio. The microphone 570C, also known as a "microphone," is used to collect ambient sound signals. One or more microphones 570C may be disposed in the headset 220. For example, the plurality of microphones 570C disposed in the earphone 220 may be used for not only collecting sound signals, but also reducing noise, identifying sound sources, and directionally recording sound.
Referring to FIG. 6, the earphone 220 includes a speaker 570A. In this embodiment, speaker 570A may be used to play the test audio. The earphone 220 further includes two microphones 570C: a feedback microphone (Feed Back MIC, FB MIC) 570C1 and a feedforward microphone (Feed Forward MIC, FF MIC) 570C2. When the earphone 220 is worn, the FB MIC 570C1 is disposed near the ear canal and collects the sound signal inside the ear canal; while the speaker 570A is working, the sound signal inside the ear canal includes the sound signal corresponding to the audio being played by the speaker 570A. The FF MIC 570C2 is disposed at a position exposed to the external environment when the earphone 220 is worn, and collects the sound signal of the environment around the ear. For example, if the tested user wears the earphone while sitting in a subway, the sound signal of the environment around the ear may include the ambient noise of the subway.
The motor 591 may generate a vibration cue and thus may be used for both incoming-call vibration cues and touch/tap vibration feedback.
Indicator 592 can be an indicator light that can be used to indicate a charge status, a charge change, a message, a missed call, a notification, etc.
It is understood that the structure of the handset 210 and the headset 220 illustrated in the present embodiment is merely exemplary. In other embodiments, both the handset 210 and the headset 220 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The audio playing method provided by the embodiment of the present application can be executed in the communication system formed by the mobile phone 210 and the earphone 220.
In general, the tested user relies on the interaural level difference (ILD) and the interaural time difference (ITD) to perceive the horizontal orientation of a sound source, and relies on changes in the spectral signal to perceive the vertical orientation of the sound source. Then, for a hearing-impaired tested user, the spatial orientation (including horizontal and vertical orientations) at which the sound source is perceived may differ from the spatial orientation at which a normal user perceives the sound source.
Based on this, in the embodiment of the present application, the mobile phone may first test the angular losses of the tested user's two ears in the horizontal orientation and in the vertical orientation. Referring to FIG. 7, in the plot of horizontal angular loss, the horizontal axis is frequency f and the vertical axis is the angular deviation Δdh between the horizontal orientation of the sound source perceived by the tested user's ears and that perceived by a normal user. That is, the horizontal angular loss reflects the tested user's perception deviation in the horizontal orientation for sounds of different frequencies; for example, for a sound of frequency f1, the tested user's horizontal perception deviation is Δdh1. In the plot of vertical angular loss, the horizontal axis is amplitude a and the vertical axis is the angular deviation Δdv between the vertical orientation of the sound source perceived by the tested user's ears and that perceived by a normal user. That is, the vertical angular loss reflects the tested user's perception deviation in the vertical orientation for sounds of different amplitudes; for example, for a sound of amplitude a1, the tested user's vertical perception deviation is Δdv1.
After obtaining the angular losses in the horizontal and vertical orientations, the mobile phone can determine the tested user's hearing model based on them. For example, the mobile phone may input the horizontal and vertical angular losses into a preset artificial intelligence (AI) model that has the function of converting angular losses into a hearing model.
With continued reference to FIG. 7, in the hearing model, the dotted line represents the tested user's hearing model and the solid line represents a normal user's hearing model. The horizontal axis is frequency f, and the vertical axis is the lowest amplitude a (also referred to as the first lowest amplitude) of sound that the user (tested or normal) can hear at each frequency. The amplitude difference between the two hearing models at each frequency is the tested user's amplitude loss at that frequency. It can be understood that a larger amplitude loss indicates more severe hearing loss, and a smaller amplitude loss indicates milder hearing loss.
After obtaining the hearing model, the mobile phone can perform compensation based on the tested user's and the normal user's hearing models before playing audio, that is, compensation based on the amplitude loss. In a specific implementation, the sound signal of a third frequency in the audio to be played is increased by a preset amplitude before playing, where the third frequency is any frequency of the sound signals included in the audio to be played, and the preset amplitude is the difference between the first lowest amplitude corresponding to the third frequency in the hearing model and the third lowest amplitude at which a population without hearing loss can hear the sound signal of the third frequency. Illustratively, in the compensation example shown in FIG. 7, the amplitude loss corresponding to frequency f2 (i.e., third frequency f2) is Δa2, so the amplitude can be increased by Δa2 when playing sound at frequency f2; the amplitude loss corresponding to frequency f3 (i.e., third frequency f3) is Δa3, so the amplitude can be increased by Δa3 when playing sound at frequency f3.
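A minimal sketch of this compensation step, assuming the hearing models are per-frequency threshold tables and the audio is equalized bin by bin with an FFT (the data layout and the equalizer are our illustration, not the patent's implementation):

```python
import numpy as np

def compensation_gains(user_model: dict[float, float],
                       normal_model: dict[float, float]) -> dict[float, float]:
    """Preset amplitude per frequency: the tested user's lowest audible
    amplitude minus a no-loss population's lowest audible amplitude (dB)."""
    return {f: user_model[f] - normal_model[f] for f in user_model}

def compensate(audio: np.ndarray, sample_rate: int,
               gains: dict[float, float]) -> np.ndarray:
    """Boost each FFT bin by the gain of the nearest model frequency."""
    spectrum = np.fft.rfft(audio)
    bin_freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    model_freqs = np.array(sorted(gains))
    for i, bf in enumerate(bin_freqs):
        nearest = float(model_freqs[np.abs(model_freqs - bf).argmin()])
        spectrum[i] *= 10 ** (gains[nearest] / 20)  # dB -> linear amplitude
    return np.fft.irfft(spectrum, len(audio))

# The 1.1 kHz example above: a 10 dB boost where the user's threshold is higher.
gains = compensation_gains({1100.0: 40.0}, {1100.0: 30.0})  # {1100.0: 10.0}
```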
In the embodiment of the present application, the test audio needs to be played using an earphone. However, if the environmental noise is large, for example if the tested user is in a workshop where large machines are operating, the environmental noise may enter the ear canal and affect the spatial orientation at which the tested user perceives the sound in the test audio. Likewise, if the earphone does not fit the ear closely, for example if the ear plug of the earphone is too small, ambient noise may enter the ear canal and affect the orientation at which the tested user perceives the sound in the test audio.
Based on this, in some embodiments, after detecting that the tested user starts the audio compensation function, the mobile phone may first detect whether the environmental condition is satisfied. If the environmental condition is satisfied, the earphone can then be used to play the test audio so as to test the angle loss in the horizontal orientation and the angle loss in the vertical orientation. If the environmental condition is not satisfied, the tested user is prompted to move to a quiet place and retry, or to re-wear the earphone.
The audio compensation function is used to compensate audio based on the hearing loss of the tested user before the mobile phone plays the audio; the compensation process is shown in fig. 7. A switch for the audio compensation function may be provided in the mobile phone settings, or in the settings of a music player or video player, for turning the audio compensation function on or off.
For example, the mobile phone may display the interface 801 shown in fig. 8, where the interface 801 is a setting interface in the mobile phone settings and includes an audio compensation switch 802. After detecting that the tested user turns the audio compensation switch 802 on or off, the mobile phone turns the audio compensation function on or off accordingly. In response to the tested user turning on the audio compensation switch 802, that is, detecting the operation of the tested user for turning on the audio compensation function, the mobile phone may display the interface 803 shown in fig. 8. The interface 803 includes the prompt "please wear the headset and connect the mobile phone and the headset", prompting the user to wear the headset and establish a connection between the mobile phone and the headset in preparation for playing the test audio.
It should be understood that after the headset is worn and the connection between the mobile phone and the headset is established, the headset may send indication information to the mobile phone to indicate that the headset is worn. After receiving the indication information, the mobile phone can determine that the headset is worn successfully and connected, and environment detection can then begin.
For example, after the headset is worn and connected successfully, the mobile phone may display an interface 804 shown in fig. 8, where the interface 804 includes a control 805 for starting environment detection, and the control is used to trigger the mobile phone to start environment detection.
During environment detection, the mobile phone can collect the FF MIC (feedforward microphone) signals and FB MIC (feedback microphone) signals of the left earplug and of the right earplug. The FF MIC signal reflects the sound condition of the external environment, and the FB MIC signal reflects the sound condition within the ear canal. Based on the FF MIC signals and the FB MIC signals, the mobile phone can detect whether the environmental condition is satisfied. For example, if the FF MIC signals of the left and right earplugs are both less than the first threshold λ1, the ambient noise is low and the environmental condition is satisfied. For another example, if the FB MIC signals of the left and right earplugs are both less than the third threshold λ2, the interfering sound in the ear canal is small and the environmental condition is satisfied. For another example, if the difference between the FF MIC signal and the FB MIC signal of the left earplug is greater than the second threshold λ3, and the difference between the FF MIC signal and the FB MIC signal of the right earplug is also greater than the second threshold λ3, then both earplugs fit the ears closely and isolate external environmental sounds well, so the environmental condition is satisfied. The embodiment of the present application is not particularly limited in this respect.
For ease of illustration, the FB MIC signal of the left earpiece may be referred to as a first feedback signal, the FB MIC signal of the right earpiece may be referred to as a second feedback signal, the FF MIC signal of the left earpiece may be referred to as a first feedforward signal, and the FF MIC signal of the right earpiece may be referred to as a second feedforward signal.
Further, in order to avoid accidental errors, the handset may collect the FF MIC signal and the FB MIC signal for a certain period of time (e.g., 5s,10s, etc.), and then detect whether the environmental condition is satisfied based on the FF MIC signal and the FB MIC signal for the certain period of time.
For example, after detecting that the tested user clicks or long-presses the control 805 for starting environment detection in the interface 804 shown in fig. 8, the mobile phone may display the interface 806 shown in fig. 8, where the interface 806 includes a detection detail prompt 807 for indicating the signal content being collected. The interface 806 also includes a progress prompt 808 for indicating the progress of signal acquisition. For example, the progress prompt 808 indicates that the current 5-second acquisition is 24% complete.
In a specific implementation, after acquiring the FF MIC signal and the FB MIC signal over a period of time, the handset may calculate a Root Mean Square (RMS) of the FF MIC signal for the left ear plug and the right ear plug, respectively, and calculate an RMS of the FB MIC signal for the left ear plug and the right ear plug, respectively. The calculated RMS is then used to determine whether the environmental conditions are met.
Illustratively, the environmental condition is determined to be satisfied if the calculated RMS satisfies the following relation (1):

RMS(REC-L FF (s)) < λ1 and RMS(REC-R FF (s)) < λ1    relation (1)

where REC-L FF (s) is the FF MIC signal of the left earplug and REC-R FF (s) is the FF MIC signal of the right earplug. That is, if the RMS of the FF MIC signal of the left earplug and the RMS of the FF MIC signal of the right earplug are both less than the first threshold λ1, it is determined that the environmental condition is satisfied.
Conversely, if relation (1) is not satisfied, the environment where the left and right earplugs are located is too loud, and it is determined that the environmental condition is not satisfied.
Further illustratively, the environmental condition is determined to be satisfied if, on the basis of satisfying relation (1), the calculated RMS also satisfies the following relation (2):

RMS(REC-L FF (s)) − RMS(REC-L FB (s)) > λ3 and RMS(REC-R FF (s)) − RMS(REC-R FB (s)) > λ3    relation (2)

where REC-L FB (s) is the FB MIC signal of the left earplug and REC-R FB (s) is the FB MIC signal of the right earplug. That is, if the difference between the RMS of the FF MIC signal and the RMS of the FB MIC signal of the left earplug is greater than the second threshold λ3, and the difference between the RMS of the FF MIC signal and the RMS of the FB MIC signal of the right earplug is greater than the second threshold λ3, it is determined that the environmental condition is satisfied.
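As a concrete reading of relations (1) and (2), the following sketch checks the environmental condition from the four microphone recordings. It is a minimal sketch under assumed names (`environment_ok`, the threshold parameters), not the patent's code.

```python
import numpy as np

def rms(signal: np.ndarray) -> float:
    """Root mean square of a recorded signal segment."""
    return float(np.sqrt(np.mean(signal ** 2)))

def environment_ok(ff_left, ff_right, fb_left, fb_right,
                   lambda1: float, lambda3: float) -> bool:
    # Relation (1): ambient noise at both feedforward MICs is below λ1.
    quiet = rms(ff_left) < lambda1 and rms(ff_right) < lambda1
    # Relation (2): the FF-minus-FB difference exceeds λ3 for both
    # earplugs, i.e. each earplug seals the ear well.
    sealed = (rms(ff_left) - rms(fb_left) > lambda3 and
              rms(ff_right) - rms(fb_right) > lambda3)
    return quiet and sealed
```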
In addition, for some earphones, the acoustic structures of the left and right earplugs are subject to tolerances due to hardware differences. For example, the same audio signal may be input to the left earplug and the right earplug, yet the audio signals output by the speaker of the left earplug and the speaker of the right earplug differ. That is, even if the left and right earplugs receive the same audio signal, the sound heard in the tested user's left ear canal and right ear canal will differ because of the structural difference between the earplugs. For example, the sounds heard by the two ears at the respective eardrum reference points (eDRP) are different. This may compromise the accuracy of subsequent testing.
Based on this, in some embodiments, the mobile phone may also calibrate the left and right earplugs, so that hardware differences between them do not affect subsequent testing. After the headset is worn and connected successfully, referring to fig. 9, the mobile phone can control the left and right earplugs to play the same audio signal (which may also be referred to as preset audio), such as the audio signal a in fig. 9, and collect the FB MIC signal of the left earplug (which may also be referred to as a third feedback signal) and the FB MIC signal of the right earplug (which may also be referred to as a fourth feedback signal). Here, physiological structural differences within the ear canals are ignored; therefore, the FB MIC signal of the left earplug can be approximated as the sound signal heard at the eDRP of the left ear, and the FB MIC signal of the right earplug as the sound signal heard at the eDRP of the right ear. After the audio signal a is played, the mobile phone can obtain curves, as shown in fig. 9, for the audio signal a, the FB MIC signal of the left earplug, and the FB MIC signal of the right earplug, where the horizontal axis is frequency and the vertical axis is amplitude.
The mobile phone analyzes the curves of the FB MIC signal of the left earplug and the FB MIC signal of the right earplug to determine a calibration coefficient p that makes REC-L FB (s)/REC-R FB (s) = 1, where REC-L FB (s) is the FB MIC signal of the left earplug and REC-R FB (s) is the FB MIC signal of the right earplug. That is, the calibration coefficient p makes the two FB MIC signals consistent in both frequency and amplitude. The calibration coefficient p is used to adjust the input signal to the speaker in the left earplug or the input signal to the speaker in the right earplug, thereby masking the hardware difference between the two speakers. For convenience of explanation, the earplug whose input is calibrated with the calibration coefficient p may be referred to as the first earplug, and the other earplug as the second earplug. For example, the first earplug is the left earplug and the second earplug is the right earplug; or the first earplug is the right earplug and the second earplug is the left earplug.
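One way to realize this, sketched below under the assumption that p is estimated per frequency bin from the two FB MIC spectra and applied to the left channel, is a simple spectral ratio; the function names are illustrative, and this is not asserted to be the patent's exact procedure.

```python
import numpy as np

def calibration_coefficient(fb_left_spec: np.ndarray,
                            fb_right_spec: np.ndarray) -> np.ndarray:
    """Per-frequency ratio p such that, applied to the left channel,
    REC-L FB (s) / REC-R FB (s) becomes 1. Both inputs are rfft spectra
    of the two FB MIC recordings of the same preset audio."""
    return fb_right_spec / fb_left_spec

def calibrate(test_audio: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Apply p in the frequency domain to the audio sent to the first earplug."""
    spec = np.fft.rfft(test_audio)   # p must match this spectrum's length
    return np.fft.irfft(spec * p, n=len(test_audio))
```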
Subsequently, in the process of testing the angle loss in the horizontal direction and the angle loss in the vertical direction, before the mobile phone sends the test audio to the earphone, the test audio can be calibrated based on the calibration coefficient p. Wherein the calibration comprises amplitude calibration and/or frequency calibration.
In a specific implementation, the handset may calibrate the test audio input to the left earpiece based on the calibration factor p, obtain a calibrated test audio, and then send the calibrated test audio to the left earpiece. In this implementation, the handset may not calibrate the test audio sent to the right earpiece. It will be appreciated that after calibration of the test audio sent to the left earpiece, the input signal to the speaker in the left earpiece changes accordingly. In this way, it is possible to adjust the input signal to the speaker in the left ear plug, and to shield the hardware difference between the speaker in the left ear plug and the speaker in the right ear plug.
In another specific implementation, the mobile phone may calibrate the test audio input to the right earplug based on the calibration coefficient p, obtain the calibrated test audio, and then send the calibrated test audio to the right earplug. In this implementation, the mobile phone may not calibrate the test audio sent to the left earplug. It will be appreciated that after the test audio sent to the right earplug is calibrated, the input signal to the speaker in the right earplug changes accordingly. In this way, the input signal to the speaker in the right earplug is adjusted, masking the hardware difference between the speakers in the left and right earplugs.
In the foregoing specific implementation regarding calibration, the test audio is calibrated by the handset based on the calibration coefficient p. Of course, in other implementations, after the mobile phone obtains the calibration coefficient through analysis, the calibration coefficient p may also be sent to the left earplug or the right earplug. The test audio is then calibrated by either the left or right ear plug based on the calibration factor p. The embodiment of the present application is not particularly limited to this.
After the environment detection and calibration are completed, the mobile phone can test the angle loss of the horizontal direction and the angle loss of the vertical direction. Of course, in other embodiments, the above-described environment detection process or calibration process may be omitted.
Specific implementations for testing the angular loss in the horizontal orientation and the angular loss in the vertical orientation are described below:
Before testing the angle loss in the horizontal orientation, the mobile phone needs to determine m groups of horizontal test sequences for the test, where a group of horizontal test sequences includes n horizontal test samples. Since the user relies on the ILD (interaural level difference) to perceive the horizontal orientation, the amplitude affects the user's perception of the horizontal orientation. Based on this, the amplitudes of the sounds in the n horizontal test samples of a group can be set to be the same, so that differing amplitudes do not influence the user's judgment of the horizontal orientation. Within a group of horizontal test sequences, however, the n horizontal test samples differ in the frequency of the sound and in the preset horizontal orientation at which it appears. Different frequencies can be simulated using sounds emitted by different objects; for example, different musical instruments emit sounds of different frequencies, so multiple instrument sounds may be used to simulate sounds of multiple different frequencies.
For example, referring to the horizontal orientation diagram and the group of horizontal test sequences shown in fig. 10, viewed from the top of the head, the direction straight ahead of and perpendicular to the face is horizontal 0°, and the horizontal angle increases counterclockwise, e.g., 15°, 30°, 45°, and so on. The group of horizontal test sequences includes 3 horizontal test samples, namely the horizontal test sample (1) shown in fig. 10: a drum sound with amplitude A1 appearing at the preset horizontal orientation of 23°; the horizontal test sample (2): a piano sound with amplitude A1 appearing at the preset horizontal orientation of 83°; and the horizontal test sample (3): a guitar sound with amplitude A1 appearing at the preset horizontal orientation of 233°. Clearly, the amplitudes of the sounds in the three samples are all A1, so the amplitudes are the same; the sounds are a drum, a piano, and a guitar, respectively, so the frequencies are different; and the preset horizontal orientations at which the sounds appear are 23°, 83°, and 233°, respectively, so the preset horizontal orientations are different.
The two sets of horizontal test sequences may comprise frequencies that are completely different, partially identical, or completely identical.
Illustratively, the 3 horizontal test samples in one group of horizontal test sequences are a drum sound, a piano sound, and a guitar sound, while the 3 horizontal test samples in another group are a violin sound, a harmonica sound, and a suona sound; that is, the frequencies included in the two groups are completely different.
Further illustratively, the 3 horizontal test samples in one group are a drum sound, a piano sound, and a guitar sound, while the 3 horizontal test samples in another group are a piano sound, a guitar sound, and a violin sound; that is, the two groups partially share frequencies, the shared frequencies being those of the piano sound and the guitar sound.
As another example, the 3 horizontal test samples in one group are a drum sound, a piano sound, and a guitar sound, and the 3 horizontal test samples in another group are also a drum sound, a piano sound, and a guitar sound; that is, the two groups include completely the same frequencies.
In a specific implementation, if the frequencies included in two groups of horizontal test sequences are completely the same, the order in which the frequencies appear can be made different. In this way, the two groups include n horizontal test samples with the same frequencies but different frequency orders, and can be used to test the tested user's hearing under different frequency orders. This allows the tested user's hearing for sound of each frequency to be obtained more accurately.
For example, two sets of horizontal test sequences are shown in table 1 below:
TABLE 1
Sequence | 1st sample | 2nd sample | 3rd sample
Group 1 horizontal test sequence | drum sound | piano sound | guitar sound
Group 2 horizontal test sequence | drum sound | guitar sound | piano sound
It should be understood that table 1 above omits the preset orientation and amplitude parameters.
Clearly, both groups of horizontal test sequences in table 1 include a drum sound, a piano sound, and a guitar sound, i.e., they include the same frequencies. However, the time order of the 3 horizontal test samples in the group 1 horizontal test sequence is drum sound, piano sound, guitar sound, while in the group 2 horizontal test sequence it is drum sound, guitar sound, piano sound. Both the group 1 and group 2 horizontal test sequences can then be used to test the tested user's hearing for drum, piano, and guitar sounds; in addition, the group 1 sequence tests the tested user's hearing for the guitar sound when it appears after the piano sound, and the group 2 sequence tests it when the guitar sound appears after the drum sound.
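The construction of such sequence groups can be sketched as follows; this is a minimal illustration in which the instruments, the 15-degree orientation grid, and the function name are assumptions, not the patent's specification.

```python
import itertools
import random

# The instruments stand in for distinct frequencies, as in fig. 10;
# all samples in a group share the same amplitude.
INSTRUMENTS = ["drum", "piano", "guitar"]

def horizontal_test_sequences(num_groups: int, amplitude: float):
    """Build groups that share the same frequencies but differ in
    playback order; each sample gets a preset horizontal orientation."""
    orders = list(itertools.permutations(INSTRUMENTS))  # 6 distinct orders
    groups = []
    for order in orders[:num_groups]:
        groups.append([{"instrument": inst,
                        "amplitude": amplitude,
                        "preset_horizontal_deg": random.randrange(0, 360, 15)}
                       for inst in order])
    return groups
```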
Similarly, before testing the angle loss in the vertical orientation, the mobile phone needs to determine q groups of vertical test sequences for the test, where a group of vertical test sequences includes w vertical test samples. Since the user relies on changes in the spectrum signal to perceive changes in the vertical orientation, the spectrum signal affects the user's perception of the vertical orientation. Based on this, the spectrum signals of the sounds in the w vertical test samples of a group can be set to be the same, so that differing spectrum signals do not influence the user's judgment of the vertical orientation. Within a group of vertical test sequences, however, the w vertical test samples differ in the amplitude of the sound and in the preset vertical orientation at which it appears. The same spectrum signal can be simulated using the same initial curve; for example, the spectrum signals in the w vertical test samples of a group may all be the curve a.
Illustratively, referring to the vertical orientation diagram and the group of vertical test sequences in fig. 10, viewed from the right side of the head, the direction straight ahead of and perpendicular to the face is vertical 0°, and the vertical angle increases counterclockwise, e.g., 15°, 30°, 45°, and so on. The group of vertical test sequences includes 3 vertical test samples, namely the vertical test sample 1 shown in fig. 10: the curve a with amplitude A1 appearing at the preset vertical orientation of 53°; the vertical test sample 2: the curve a with amplitude A2 appearing at the preset vertical orientation of 188°; and the vertical test sample 3: the curve a with amplitude A3 appearing at the preset vertical orientation of 278°. Clearly, the curve a is present in all three samples, so the spectrum signals are the same; the amplitudes of the curve a in the three samples are A1, A2, and A3, respectively, so the amplitudes are different; and the preset vertical orientations at which the curve a appears are 53°, 188°, and 278°, respectively, so the preset vertical orientations are different.
The two sets of vertical test sequences may comprise amplitudes that are completely different, partially identical, or completely identical.
Illustratively, the amplitudes of the sounds in the 3 vertical test samples included in one set of vertical test sequences are A1, A2, and A3, respectively, and the amplitudes of the sounds in the 3 vertical test samples included in the other set of vertical test sequences are A4, A5, and A6, respectively, i.e., the amplitudes included in the two sets of vertical test sequences are completely different.
Further illustratively, the amplitudes of the sounds in the 3 vertical test samples of one group of vertical test sequences are A1, A2, and A3, respectively, and the amplitudes in the 3 vertical test samples of another group are A2, A3, and A6, respectively; that is, the amplitudes included in the two groups are partially the same, the shared amplitudes being A2 and A3.
As another example, the amplitudes of the sounds in the 3 vertical test samples included in one set of vertical test sequences are A1, A2, and A3, respectively, and the amplitudes of the sounds in the 3 vertical test samples included in the other set of vertical test sequences are A1, A2, and A3, respectively, that is, the amplitudes included in the two sets of vertical test sequences are identical.
In a specific implementation, if the amplitudes included in two groups of vertical test sequences are completely the same, the order in which the amplitudes appear can be made different. In this way, the two groups include w vertical test samples with the same amplitudes but different amplitude orders, and can be used to test the tested user's hearing under different amplitude orders. This allows the tested user's hearing for sounds of various amplitudes to be obtained more accurately. The principle is similar to that of horizontal test sequences with the same frequencies but different frequency orders; reference may be made to the relevant description above (such as the contents before and after table 1), which is not repeated here.
At this point, m groups of horizontal test sequences for testing the angle loss in the horizontal orientation have been determined, i.e., there are multiple groups of horizontal test sequences, which can simulate a large number of possible frequency combinations. Likewise, q groups of vertical test sequences for testing the angle loss in the vertical orientation have been determined, i.e., there are multiple groups of vertical test sequences, which can simulate a large number of possible amplitude combinations. In actual implementation, however, there may be only one group of horizontal test sequences and one group of vertical test sequences, i.e., one group of horizontal test sequences determines the angle loss in the horizontal orientation and one group of vertical test sequences determines the angle loss in the vertical orientation. The embodiment of the present application is not particularly limited in this respect. The following description mainly takes m groups of horizontal test sequences and q groups of vertical test sequences as an example.
After m groups of horizontal test sequences and q groups of vertical test sequences are determined, the mobile phone can test the angle loss of the horizontal direction and the angle loss of the vertical direction. Referring to fig. 11, the process of testing the angle loss of the horizontal orientation and the angle loss of the vertical orientation includes:
S1101: the mobile phone sends the kth group of target test sequences to the earphone, where the kth group of target test sequences is any one of the m groups of horizontal test sequences and the q groups of vertical test sequences, and k takes the values 1, 2, 3, …, m + q in sequence.
Taking m = q =10 as an example, the handset may sequentially send test audios corresponding to 10 groups of horizontal test sequences to the headset, and then sequentially send test audios corresponding to 10 groups of vertical test sequences to the headset.
If the kth group of target test sequences is a group of horizontal test sequences, the kth group of target test sequences includes multiple sounds (which may be referred to as multiple first sound signals) with the same amplitude but different frequencies (such as different musical instruments) appearing at multiple preset horizontal orientations. Taking the kth group of target test sequences as the group of horizontal test sequences shown in fig. 10 as an example, the kth group of target test sequences includes, in order, the drum sound appearing at the horizontal orientation of 23°, the piano sound appearing at 83°, and the guitar sound appearing at 233°.
If the kth group of target test sequences is a group of vertical test sequences, the kth group of target test sequences includes sounds (which may be referred to as multiple second sound signals) with the same spectrum signal (for example, the curve a) but different amplitudes appearing at multiple preset vertical orientations. Taking the kth group of target test sequences as the group of vertical test sequences shown in fig. 10 as an example, the kth group of target test sequences includes, in order, the curve a with amplitude A1 appearing at the vertical orientation of 53°, the curve a with amplitude A2 appearing at 188°, and the curve a with amplitude A3 appearing at 278°.
For example, after completing the environment detection and calibration, the mobile phone may display the interface 1201 shown in fig. 12, where the interface 1201 includes a prompt asking whether the earphone is ready and the hearing loss test should begin. The interface 1201 also includes two options, "yes" and "no", for the tested user to choose whether to test the hearing loss. After detecting that the tested user selects the "yes" option in the interface 1201, the mobile phone may start to send the 1st group of target test sequences, such as the 1st group of horizontal test sequences, to the earphone.
It should be noted that, the sending, by the mobile phone, the kth group of target test sequences to the headset may specifically be: respectively sending kth group of target test sequences to a left earplug and a right earplug of the earphone by the mobile phone; alternatively, the handset sends the kth set of target test sequences to one earpiece of the headset, which then synchronizes the kth set of target test sequences to the other earpiece. The embodiment of the present application is not particularly limited to this.
S1102: the earphone plays the test audio k corresponding to the kth group of target test sequences.
For convenience of explanation, the test audio k corresponding to the horizontal test sequence may be referred to as a horizontal test audio, and the test audio k corresponding to the vertical test sequence may be referred to as a vertical test audio.
After receiving the kth group of target test sequences, the two earplugs of the earphone can play the test audio corresponding to the kth group of target test sequences through their speakers. In this way, the sound signals can be transmitted to both ears of the tested user. It should be understood that as long as the earphone is a stereo earphone, the effect of sound appearing at the preset horizontal orientation and the preset vertical orientation can be achieved, i.e., stereoscopic playback is realized.
Based on what the two ears hear, the tested user can feed back the perceived orientations of the sounds appearing at the multiple preset horizontal orientations or multiple preset vertical orientations in the kth group of target test sequences, i.e., feed back the test result k.
Taking the kth group of target test sequences as a group of horizontal test sequences as an example, after detecting that the tested user selects the "yes" option in the interface 1201 shown in fig. 12, the mobile phone may display the interface 1202 shown in fig. 12, where the interface 1202 includes a horizontal orientation map for selecting the perceived orientation. The interface 1202 also includes the prompt "please select the direction in which the sound is located in the horizontal orientation map after the earphone starts playing the test audio". For example, after the earphone plays the drum sound that appears at the horizontal orientation of 23° in the kth group of target test sequences, the tested user clicks a point in the 30°-45° region of the horizontal orientation map in the interface 1203 shown in fig. 12, say 38°; the test result k then includes: the tested user's perceived orientation for the drum sound appearing at the horizontal orientation of 23° in the kth group of target test sequences is 38°.
S1103: the mobile phone receives the test result k, where the test result k includes the tested user's perceived orientations for the multiple sounds in the kth group of target test sequences.
For convenience of explanation, the test result k fed back for the test audio k corresponding to the horizontal test sequence may be referred to as a horizontal test result, and the test result k fed back for the test audio k corresponding to the vertical test sequence may be referred to as a vertical test result.
If the kth group of target test sequences is a group of horizontal test sequences, the multiple sounds are sounds of multiple frequencies; if it is a group of vertical test sequences, the multiple sounds are sounds of multiple amplitudes.
For example, taking the kth group of target test sequences as the group of horizontal test sequences shown in fig. 10 (denoted as the group 1 horizontal test sequence), a complete test result k can be as shown in table 2 below:
TABLE 2
Sound | Preset horizontal orientation | Perceived orientation
Drum sound | 23° | 38°
Piano sound | 83° | 83°
Guitar sound | 233° | 263°
As can be seen from table 2 above, the test result k corresponding to the group 1 horizontal test sequence includes: the perception orientation of the tested user to the drum sound in the 1 st group of horizontal test sequences is 38 degrees; the perception orientation of piano sound in the 1 st group of horizontal test sequence is 83 degrees; and the perceived orientation of guitar sounds in the group 1 horizontal test sequence was 263 °.
S1104: the mobile phone calculates, according to the test result k, the angular deviations of the tested user's perceived orientations for the multiple sounds in the kth group of target test sequences.
In some embodiments, the mobile phone may also test a normal user, for example using the above S1101-S1103, to obtain the normal user's test result k' corresponding to the kth group of target test sequences.
Taking the kth set of target test sequences as an example of a set of horizontal test sequences (denoted as set 1 horizontal test sequences) shown in fig. 10, a complete test result k' can be shown in the following table 3:
TABLE 3
Sound | Preset horizontal orientation | Perceived orientation
Drum sound | 23° | 30°
Piano sound | 83° | 85°
Guitar sound | 233° | 250°
As can be seen from table 3 above, the test result k' for the group 1 horizontal test sequence includes: the normal user's perceived orientation for the drum sound in the group 1 horizontal test sequence is 30°; the perceived orientation for the piano sound is 85°; and the perceived orientation for the guitar sound is 250°.
In this embodiment, for any sound in the kth group of target test sequences, the perceived orientation of the sound in the test result k' is subtracted from the perceived orientation of the sound in the test result k, so as to obtain the angular deviation of the tested user's perceived orientation for that sound.
Illustratively, with the test result k and the test result k' as shown in table 2 and table 3 respectively, it can be calculated that, for the group of horizontal test sequences shown in fig. 10 (denoted as the group 1 horizontal test sequence), the angular deviation of the tested user's perceived orientation for the drum sound is 38° − 30° = 8°, for the piano sound 83° − 85° = −2°, and for the guitar sound 263° − 250° = 13°. The drum sound, piano sound, and guitar sound correspond to different frequencies, such as f1, f2, and f3, respectively. If f1 < f2 < f3, then based on the tested user's angular deviations of 8°, −2°, and 13° for the drum sound, piano sound, and guitar sound, the angle loss of a group of horizontal orientations corresponding to the group 1 horizontal test sequence shown in fig. 13 (such as the group 1 horizontal-orientation angle loss) can be obtained.
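The subtraction in S1104 can be made concrete with a short sketch; the dictionary representation and function name are assumptions for illustration only.

```python
def angular_deviation(tested: dict, normal: dict) -> dict:
    """Subtract the normal user's perceived orientation (test result k')
    from the tested user's (test result k), per sound."""
    return {sound: tested[sound] - normal[sound] for sound in tested}

# With the values of tables 2 and 3:
loss = angular_deviation({"drum": 38, "piano": 83, "guitar": 263},
                         {"drum": 30, "piano": 85, "guitar": 250})
# -> {'drum': 8, 'piano': -2, 'guitar': 13}
```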
In this embodiment, the normal user's perceived orientation is taken as the standard orientation for calculating the angular deviation of the tested user's perceived orientation.
Of course, in other embodiments, the mobile phone may instead calculate the angular deviation of the tested user's perceived orientation by taking the multiple preset horizontal orientations or multiple preset vertical orientations in the kth group of target test sequences as the standard orientations. In this embodiment, the preset horizontal orientation or preset vertical orientation simply replaces the normal user's perceived orientation in the foregoing embodiment, and details are not repeated here.
It should be noted that the above description of S1104 mainly uses the horizontal test sequence. In practice, the same applies to the vertical test sequence, which is described by example below.
For example, if the kth set of target test sequences is a set of vertical test sequences (denoted as the 1 st set of vertical test sequences) shown in fig. 10, a complete test result k can be shown in table 4 below:
TABLE 4
Sound | Preset vertical orientation | Perceived orientation
Curve a, amplitude A1 | 53° | 300°
Curve a, amplitude A2 | 188° | 32°
Curve a, amplitude A3 | 278° | 229°
As can be seen from table 4, the test result k corresponding to the 1 st set of vertical test sequences includes: the perception direction of the tested user to the sound with the amplitude A1 in the 1 st group of vertical test sequences is 300 degrees; the perceived orientation of the sound of amplitude A2 in the set 1 vertical test sequence is 32 °; and the perceived orientation for sounds of amplitude A3 in the set 1 vertical test sequence is 229 °.
Similarly, the kth set of target test sequences is a set of vertical test sequences (denoted as the 1 st set of vertical test sequences) shown in fig. 10, and a corresponding complete test result k' can be shown in table 5 below:
TABLE 5
Sound | Preset vertical orientation | Perceived orientation
Curve a, amplitude A1 | 53° | 280°
Curve a, amplitude A2 | 188° | 50°
Curve a, amplitude A3 | 278° | 200°
As can be seen from table 5, the test result k' for the group 1 vertical test sequence includes: the normal user's perceived orientation for the sound with amplitude A1 in the group 1 vertical test sequence is 280°; the perceived orientation for the sound with amplitude A2 is 50°; and the perceived orientation for the sound with amplitude A3 is 200°.
Based on the above table 4 and table 5, it can be calculated that, for the group of vertical test sequences shown in fig. 10 (denoted as the group 1 vertical test sequence), the angular deviation of the tested user's perceived orientation for the sound with amplitude A1 is 300° − 280° = 20°, for the sound with amplitude A2 is 32° − 50° = −18°, and for the sound with amplitude A3 is 229° − 200° = 29°. If A1 < A2 < A3, then based on the tested user's angular deviations of 20°, −18°, and 29° for the sounds with amplitudes A1, A2, and A3, the angle loss of a group of vertical orientations corresponding to the group 1 vertical test sequence shown in fig. 13 (such as the group 1 vertical-orientation angle loss) can be obtained.
Of course, the angle loss in the horizontal direction and the angle loss in the vertical direction may also be expressed in the form of a table or a character, and this is not particularly limited in the embodiment of the present application.
In the above description of S1101-S1104, only the specific implementation of completing the test for a set of target test sequences is described. In practice, after the m groups of horizontal test sequences and the q groups of vertical test sequences are tested, the angle loss of the m groups of horizontal orientations corresponding to the m groups of horizontal test sequences and the angle loss of the q groups of vertical orientations corresponding to the q groups of vertical test sequences can be obtained.
In this way, by playing the test audio through the earphone and having the mobile phone obtain, analyze, and calculate the test results, the angle losses of the m groups of horizontal orientations and the q groups of vertical orientations are finally obtained.
Subsequently, the handset can use the m sets of angular losses for horizontal orientations and the q sets of angular losses for vertical orientations to determine the hearing model of the user.
In some embodiments, referring to fig. 14, the mobile phone may take the angle losses of the m groups of horizontal orientations as input and run a standard orientation model to obtain the relative amplitude loss of the tested user's hearing, where the relative amplitude loss is the difference in the amplitudes perceived by the tested user's left ear and right ear (also referred to as the amplitude variation) for sound of each frequency. The mobile phone may also take the angle losses of the q groups of vertical orientations as input and run the standard orientation model to obtain the base model of the tested user's hearing, where the base model gives the lowest amplitude the tested user can hear for sound of each frequency, without considering the relative amplitude loss. The mobile phone can then correct the base model based on the relative amplitude loss to obtain the hearing model of the tested user, i.e., the lowest amplitude the tested user can hear for sound of each frequency when the relative amplitude loss is taken into account. For example, the amplitude corresponding to each frequency in the base model (i.e., the first lowest amplitude, such as a1 at the frequency f1 in the base model shown in fig. 14; the frequency may also be referred to as the first frequency) is added to the amplitude deviation corresponding to that frequency in the relative amplitude loss (Δa1 at f1 in the relative amplitude loss shown in fig. 14), to obtain the amplitude corresponding to that frequency in the hearing model (which may also be referred to as the second lowest amplitude, a1 + Δa1 at f1 in the hearing model shown in fig. 14).
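The correction step itself reduces to a per-frequency addition; a minimal sketch, with dictionaries standing in for the curves of fig. 14, is:

```python
def hearing_model(base_model: dict, relative_loss: dict) -> dict:
    """base_model: frequency -> first lowest audible amplitude (a1 at f1);
    relative_loss: frequency -> amplitude deviation (Δa1 at f1).
    Returns frequency -> second lowest amplitude (a1 + Δa1)."""
    return {f: a + relative_loss.get(f, 0.0) for f, a in base_model.items()}
```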
In a specific implementation, the mobile phone fuses the angle losses of the m groups of horizontal orientations to obtain a correspondence 1 describing how the angular deviation of the horizontal orientation changes with frequency. As shown in fig. 13, the angle loss of a group of horizontal orientations includes the angular deviation at each frequency. For example, the standard orientation model may average the multiple angular deviations corresponding to the same frequency across the angle losses of the m groups of horizontal orientations, to obtain the mean angular deviation for sound of that frequency. For example, if in the angle losses of the group 1, group 2, and group 3 horizontal orientations the tested user's angular deviations for sound of frequency f1 are Δdh11, Δdh12, and Δdh13 in turn, the mobile phone can obtain the mean angular deviation Δdh1 = (Δdh11 + Δdh12 + Δdh13)/3 of the tested user for sound of frequency f1. After obtaining the correspondence 1, the mobile phone may input the correspondence 1 into the standard orientation model, and the standard orientation model may output the relative amplitude loss.
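The fusion step can be sketched as a per-frequency mean over the m groups; the data layout below is an assumption for illustration.

```python
from collections import defaultdict

def fuse_horizontal_losses(groups):
    """groups: one dict per horizontal test sequence, mapping each
    frequency to the angular deviation measured for it. Returns the
    correspondence 1: mean angular deviation per frequency."""
    sums, counts = defaultdict(float), defaultdict(int)
    for group in groups:
        for freq, dev in group.items():
            sums[freq] += dev
            counts[freq] += 1
    return {freq: sums[freq] / counts[freq] for freq in sums}

# fuse_horizontal_losses([{"f1": 8}, {"f1": -2}, {"f1": 12}])
# -> {'f1': 6.0}, i.e. (Δdh11 + Δdh12 + Δdh13) / 3
```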
It should be noted that, if only one set of horizontal test sequences is used to determine the angle loss of the horizontal azimuth, only one set of horizontal azimuth angle losses is obtained, and the set of horizontal azimuth angle losses can be regarded as the corresponding relation 1. That is, if the angle loss of one set of horizontal orientations is obtained using only one set of horizontal test sequences, the step of performing the fusion processing on the angle losses of the m sets of horizontal orientations can be omitted.
Illustratively, the standard orientation model may be trained by:
Multiple groups of horizontal training samples for hearing tests of normal users are collected to obtain a fitted model of how a normal user's perceived horizontal orientation changes with the amplitude of sound. Illustratively, the multiple groups of horizontal training samples are like the aforementioned m groups of horizontal test sequences; reference is made to the above description of the m groups of horizontal test sequences, which is not repeated here.
For any group of horizontal training samples, the normal user's initial horizontal perceived orientation for an initial sound of a certain frequency and amplitude is determined by playing the horizontal training audio corresponding to the group and then receiving the horizontal perceived orientation fed back by the normal user. The amplitude in the group of horizontal training samples can then be adjusted multiple times; after each adjustment, the adjusted horizontal training audio is played and the horizontal perceived orientation fed back by the normal user is received, so as to determine the normal user's horizontal perceived orientation for the sound of that frequency at the adjusted amplitude. In this way, through extensive training on multiple groups of horizontal training samples, a correspondence 2 (which may also be referred to as a first correspondence) between the change in sound amplitude and the change in the normal user's horizontal perceived orientation can be fitted. For example, the correspondence 2 records that an amplitude change Δa0 corresponds to an angular deviation Δdh1 of the horizontal perceived orientation.
With the correspondence 2 obtained, when the correspondence 1 is input into the standard orientation model, the mobile phone can determine, for the angular deviation corresponding to each frequency in the correspondence 1, the corresponding amplitude variation in the correspondence 2. In this way, the amplitude variation corresponding to each frequency in the correspondence 1, i.e., the relative amplitude loss, can be obtained.
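As a rough sketch of that lookup, assuming the fitted correspondence 2 is discretized into a table (the real model would be a fitted, continuous mapping):

```python
def relative_amplitude_loss(corr1: dict, corr2: dict) -> dict:
    """corr1: frequency -> angular deviation (the fused horizontal losses);
    corr2: angular deviation -> amplitude change (fitted on normal users).
    For each frequency, look up the amplitude change whose fitted
    deviation is nearest to the measured one."""
    def nearest_key(dev):
        return min(corr2, key=lambda d: abs(d - dev))
    return {freq: corr2[nearest_key(dev)] for freq, dev in corr1.items()}
```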
In a specific implementation, the mobile phone fuses the angle losses of the q groups of vertical orientations to obtain a correspondence 3 describing how the angular deviation of the vertical orientation changes with amplitude. As shown in fig. 13, the angle loss of a group of vertical orientations includes the angular deviation at each amplitude. For example, the standard orientation model may compute a weighted average of the multiple angular deviations corresponding to the same amplitude across the angle losses of the q groups of vertical orientations, to obtain the mean angular deviation for sound of that amplitude. In calculating the weighted average, the weights may be determined based on the frequencies of the spectrum signals used in each group of vertical test sequences.
For example, in the group 1, group 2, and group 3 vertical test sequences, spectrum signals with frequencies f1, f2, and f3 are used in turn, where f2 and f3 are in the middle frequency range of 1.1 kHz-3.8 kHz to which human hearing is sensitive, while f1 is below 1.1 kHz. Then the angle losses of the group 2 and group 3 vertical orientations may be given higher weights, and the angle loss of the group 1 vertical orientation a lower weight. If the tested user's angular deviations for the sound with amplitude A1 in the group 1, group 2, and group 3 vertical test sequences are Δdv11, Δdv12, and Δdv13 respectively, the mobile phone can obtain the weighted mean angular deviation Δdv1 = Δdv11 × k1 + Δdv12 × k2 + Δdv13 × k3 for the sound with amplitude A1, where k2 and k3 are greater than k1. After obtaining the correspondence 3, the mobile phone may input the correspondence 3 into the standard orientation model, and the standard orientation model may output the base model (which may also be referred to as a frequency loss model).
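The weighted fusion can be sketched as follows; normalizing by the weight sum is an assumption here (equivalently, the weights k1, k2, k3 can be chosen to sum to 1).

```python
def fuse_vertical_losses(groups, weights):
    """groups[i]: dict mapping each amplitude to the angular deviation
    measured in the i-th vertical test sequence; weights[i]: its weight
    (higher when the group's spectrum lies in 1.1 kHz-3.8 kHz).
    Returns the correspondence 3 as a normalized weighted mean."""
    total = sum(weights)
    amplitudes = groups[0].keys()
    return {a: sum(g[a] * w for g, w in zip(groups, weights)) / total
            for a in amplitudes}
```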
It should be noted that, if only one set of vertical test sequences is used to determine the angle loss of the vertical orientation, only one set of vertical orientation angle loss is obtained, and the set of vertical orientation angle loss can be regarded as the corresponding relation 3. That is, if the angle loss of one set of vertical orientations is obtained using only one set of vertical test sequences, the step of performing the fusion process on the angle loss of q sets of vertical orientations can be omitted. Illustratively, the standard orientation model may be trained by:
Multiple groups of vertical training samples for hearing tests of normal users are collected to obtain a fitted model of how a normal user's perceived vertical orientation changes with the frequency of sound. Illustratively, the multiple groups of vertical training samples are like the aforementioned q groups of vertical test sequences; reference is made to the related description of the q groups of vertical test sequences, which is not repeated here.
For any group of vertical training samples, the normal user's initial vertical perceived orientation for an initial sound of a certain frequency and amplitude is determined by playing the vertical training audio corresponding to the group and then receiving the vertical perceived orientation fed back by the normal user. The frequency of the spectrum signal in the group of vertical training samples can then be adjusted multiple times; after each adjustment, the adjusted training audio is played and the vertical perceived orientation fed back by the normal user is received, so as to determine the normal user's vertical perceived orientation for the sound of that amplitude at the adjusted frequency. In this way, through extensive training on multiple groups of vertical training samples, a correspondence 4 (which may also be referred to as a second correspondence) between the change in sound frequency and the change in the normal user's vertical perceived orientation can be fitted. For example, the correspondence 4 records that a frequency change Δf, such as a change to f0 + Δf, corresponds to an angular deviation Δdv1 of the vertical perceived orientation.
With the correspondence 4 obtained, when the correspondence 3 is input into the standard orientation model, the mobile phone can determine, for the angular deviation corresponding to each amplitude in the correspondence 3, the corresponding frequency in the correspondence 4. In this way, the frequency corresponding to each amplitude in the correspondence 3 can be obtained, i.e., a base model is derived comprising the lowest amplitude of sound that the tested user can perceive at each frequency.
The currently tested user may be a hearing-impaired user, and a loss of hearing amplitude may interfere with the measurement of the angle loss in the vertical orientation. Based on this, in some embodiments, before testing the angle loss in the vertical orientation, such as before S1101 shown in fig. 11, the absolute amplitude losses of the tested user's two ears may be tested, including a first absolute amplitude loss of the left ear and a second absolute amplitude loss of the right ear. Then, in the process of testing the angle loss in the vertical orientation, the first absolute amplitude loss and the second absolute amplitude loss can be smoothed out, so that the vertical-orientation angle test is not disturbed by the amplitude loss.
Referring to fig. 15, the process of testing for first absolute amplitude loss includes:
S1501: the mobile phone determines a reference amplitude.
The reference amplitude is the starting amplitude for testing the first absolute amplitude loss.
The hearing of normal users for sounds of different amplitudes within a preset frequency range can be tested in advance, and the lowest amplitude audible to a normal user in that range determined. The mobile phone can use this lowest audible amplitude of a normal user in the preset frequency range as the reference amplitude.
Illustratively, since the human ear is most sensitive to intermediate-frequency signals of 1.1 kHz-3.8 kHz, the preset frequency range may be set within 1.1 kHz-3.8 kHz.
S1502: the mobile phone sends a test signal of amplitude j within the preset frequency range to the left earplug, where j increases gradually from the reference amplitude.
Illustratively, with reference amplitude A0, the mobile phone first sends a test signal with amplitude A0 within the preset frequency range to the left earplug; the second time, it sends a test signal with amplitude A0 + 1.
S1503, the left earplug plays audio corresponding to the test signal.
S1504: the mobile phone receives the hearing feedback of the tested user, where the feedback is either "can hear" or "cannot hear".
After the left earplug plays the audio corresponding to the test signal, the tested user feeds back "can hear" if the audio is audible and "cannot hear" if it is not. For example, after detecting that the tested user selects the "yes" option in the interface 1601 shown in fig. 16, the mobile phone may start testing the absolute loss, and after sending a test signal once, may display the interface 1602 shown in fig. 16, where the interface 1602 includes the prompt "can you hear the sound played by the left earplug" for prompting the tested user to input hearing feedback. The interface 1602 also includes the two options "can hear" and "cannot hear" for selecting the hearing feedback result.
S1505: the mobile phone determines, based on the hearing feedback, whether the left ear can hear the sound of amplitude j. If not, S1502 continues to be executed; if so, S1506 is executed.
If the sound of the current amplitude j cannot be heard, the tested user has hearing loss for sound of that amplitude, and S1502 and the subsequent steps need to be executed again with the amplitude increased to j + 1, until an amplitude the tested user can hear is found. If the sound of the current amplitude j can be heard, the tested user has no hearing loss at that amplitude, and S1506 is executed to determine the first absolute amplitude loss.
S1506: the mobile phone determines the difference between the reference amplitude and j as the first absolute amplitude loss.
Illustratively, with the reference amplitude A0 and the current j = A0 + 2, the first absolute amplitude loss is determined to be −2 dB.
The foregoing describes only a specific implementation of determining the first absolute amplitude loss; in practice, the process of determining the second absolute amplitude loss is the same. The only difference to note is that, in determining the second absolute amplitude loss, the mobile phone sends the test signal to the right earplug, which plays it.
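The ascending-amplitude loop of S1502-S1506 can be sketched as follows; `can_hear` is an assumed stand-in for playing the signal and reading back the user's feedback, and the 1 dB step matches the A0, A0 + 1, A0 + 2 example above.

```python
def absolute_amplitude_loss(reference_amplitude: float, can_hear,
                            step: float = 1.0) -> float:
    """Raise the test amplitude j from the reference until the user
    reports hearing it; the loss is the reference minus the first
    audible amplitude. `can_hear(j)` stands in for playing amplitude j
    through one earplug and reading the feedback (S1502-S1505)."""
    j = reference_amplitude
    while not can_hear(j):
        j += step
    return reference_amplitude - j   # e.g. A0 - (A0 + 2) = -2 dB
```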
Illustratively, when testing the first absolute amplitude loss in the manner described above with reference to fig. 15, the hearing feedback of the tested user is as shown in the "test left ear" column in fig. 17. That is, the lowest amplitude the left ear can hear is 2 dB higher than the reference amplitude; in other words, the left ear's hearing is 2 dB worse than a normal user's, and the first absolute amplitude loss is −2 dB. Similarly, when testing the second absolute amplitude loss in a manner similar to fig. 15, the hearing feedback of the tested user is as shown in the "test right ear" column in fig. 17. That is, the lowest amplitude the right ear can hear is 3 dB higher than the reference amplitude; in other words, the right ear's hearing is 3 dB worse than a normal user's, and the second absolute amplitude loss is −3 dB.
After obtaining the first absolute amplitude loss and the second absolute amplitude loss, referring to fig. 18, before S1101 shown in fig. 11, the method further includes:
S1801, the mobile phone updates the q sets of vertical test sequences based on the first absolute amplitude loss and the second absolute amplitude loss to obtain q updated sets of vertical test sequences, where each updated set of vertical test sequences includes a left ear test sequence and a right ear test sequence.
For any set of vertical test sequences, the mobile phone may subtract the first absolute amplitude loss from the amplitudes of the plurality of vertical test samples included in that set to obtain a set of vertical test sequences (including a plurality of left ear test signals) for testing the left ear; and subtract the second absolute amplitude loss from the amplitudes of the same vertical test samples to obtain another set of vertical test sequences (including a plurality of right ear test signals) for testing the right ear. That is, each set of vertical test sequences may be updated into two sets of test sequences (which may be referred to as a left ear test sequence and a right ear test sequence) for testing the left ear and the right ear, respectively. In other words, each updated set of vertical test sequences includes two sub-test sequences.
Illustratively, the 1st group of vertical test sequences sequentially includes vertical test samples with amplitudes A1, A2 and A3, the first absolute amplitude loss is -2dB, and the second absolute amplitude loss is -3dB, so the updated 1st group of vertical test sequences includes the following two sub-test sequences:
the left ear test sequence, which sequentially includes vertical test samples with amplitudes A1+2dB, A2+2dB and A3+2dB;
the right ear test sequence, which sequentially includes vertical test samples with amplitudes A1+3dB, A2+3dB and A3+3dB.
Through S1801, the q sets of vertical test sequences are updated to obtain q updated sets of vertical test sequences, each of which includes two sub-test sequences.
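As a non-authoritative illustration of the S1801 update, the Python sketch below assumes each vertical test sequence is simply a list of sample amplitudes in dB; the function name update_vertical_sequences is hypothetical.

def update_vertical_sequences(vertical_sequences, first_abs_loss_db,
                              second_abs_loss_db):
    # Map each vertical test sequence to a (left ear, right ear) pair of
    # sub-test sequences by subtracting the absolute amplitude losses.
    # Subtracting a negative loss raises the amplitudes, as in the
    # A1+2dB / A1+3dB example above.
    updated = []
    for seq in vertical_sequences:
        left = [a - first_abs_loss_db for a in seq]    # left ear test sequence
        right = [a - second_abs_loss_db for a in seq]  # right ear test sequence
        updated.append((left, right))
    return updated

For the 1st group above, update_vertical_sequences([[A1, A2, A3]], -2, -3) would yield the pair ([A1+2, A2+2, A3+2], [A1+3, A2+3, A3+3]), matching the two sub-test sequences listed earlier.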
With continued reference to fig. 18, S1101 shown in fig. 11 is specifically:
S1101', the mobile phone sends the kth group of target test sequences to the earphone, where the kth group of target test sequences is any one of the m groups of horizontal test sequences and the q updated groups of vertical test sequences, and k takes the values 1, 2, 3, …, m+q in sequence.
Unlike the foregoing S1101, if the kth group of target test sequences is an updated group of vertical test sequences, it includes a left ear test sequence and a right ear test sequence. In that case, the left ear test sequence included in the kth group of target test sequences needs to be sent to the left earplug, and the right ear test sequence needs to be sent to the right earplug.
With continued reference to fig. 18, S1102 shown in fig. 11 specifically includes:
S1102', the earphone plays a test audio k corresponding to the kth group of target test sequences; if the kth group of target test sequences is an updated group of vertical test sequences, the left earplug plays a test audio k1 corresponding to the left ear test sequence, and the right earplug plays a test audio k2 corresponding to the right ear test sequence.
In this way, the mobile phone smooths out the absolute amplitude losses of the two ears, preventing them from interfering with the vertical test.
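As a rough sketch of the dispatch in S1101' and S1102' (not the actual transport protocol), the Python below assumes hypothetical helpers send_to_both, send_to_left and send_to_right, with the m horizontal sequences and the q updated vertical pairs held in plain lists.

def send_target_sequences(horizontal_sequences, updated_vertical_pairs,
                          send_to_both, send_to_left, send_to_right):
    m = len(horizontal_sequences)
    targets = list(horizontal_sequences) + list(updated_vertical_pairs)
    for k, target in enumerate(targets, start=1):  # k = 1, 2, ..., m+q
        if k <= m:
            send_to_both(target)           # horizontal sequence for both earplugs
        else:
            left_seq, right_seq = target   # updated vertical pair from S1801
            send_to_left(left_seq)         # played by the left earplug as audio k1
            send_to_right(right_seq)       # played by the right earplug as audio k2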
Further, in this embodiment, the absolute amplitude losses of both ears are smoothed out during the vertical test. Therefore, in the subsequent process of determining the hearing model, the absolute amplitude loss needs to be compensated for, to improve the accuracy of the determined hearing model. Illustratively, unlike the process of determining a hearing model shown in fig. 14 above, and referring to fig. 19 (the difference lies mainly within the bold rectangle in fig. 19), in this embodiment, after the mobile phone obtains the relative amplitude loss based on the standard orientation model, the mobile phone may correct the relative amplitude loss by combining the first absolute amplitude loss and the second absolute amplitude loss. For example, the difference between the absolute amplitude losses of the left ear and the right ear may be calculated, and this difference may be added to the amplitude difference (also referred to as the first amplitude difference) corresponding to each frequency (also referred to as the second frequency) in the relative amplitude loss, to obtain the amplitude difference (also referred to as the second amplitude difference) corresponding to the same frequency (i.e., the second frequency) in the binaural amplitude loss. For example, if the first absolute amplitude loss is -2dB and the second absolute amplitude loss is -3dB, the absolute amplitude loss difference is -2-(-3)=1dB; if the amplitude difference corresponding to the frequency f1 in the relative amplitude loss is Δa1, then adding 1 to Δa1 gives the amplitude difference corresponding to the frequency f1 in the binaural amplitude loss, i.e., Δa1+1. Subsequently, when the hearing model is obtained by fusion, the binaural amplitude loss, rather than the relative amplitude loss, is fused with the base model. For example, in the hearing model shown in fig. 19, the amplitude corresponding to the frequency f1 is the sum of the amplitude a1 corresponding to the frequency f1 in the base model and the amplitude loss Δa1+1 corresponding to the frequency f1 in the binaural amplitude loss, i.e., a1+Δa1+1.
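The correction and fusion arithmetic above can be illustrated with the following Python sketch, which assumes (hypothetically) that the relative amplitude loss and the base model are dicts keyed by frequency; the function names are illustrative, not the handset's actual interfaces.

def binaural_amplitude_loss(relative_loss, first_abs_loss_db, second_abs_loss_db):
    # Shift every first amplitude difference by the absolute-loss difference,
    # e.g. -2 - (-3) = 1dB, so Δa1 becomes Δa1 + 1.
    diff = first_abs_loss_db - second_abs_loss_db
    return {f: delta + diff for f, delta in relative_loss.items()}

def fuse_hearing_model(base_model, binaural_loss):
    # Fuse the base model with the binaural amplitude loss (fig. 19):
    # amplitude at f1 = a1 + (Δa1 + 1) in the worked example above.
    return {f: base_model[f] + binaural_loss.get(f, 0) for f in base_model}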
An embodiment of the present application further provides an electronic device, which may include: a memory and one or more processors. The memory is coupled to the processor. The memory is for storing computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform various functions or steps performed by the mobile phone in the above-described method embodiments.
The embodiment of the present application further provides a chip system, as shown in fig. 20, where the chip system 2000 includes at least one processor 2001 and at least one interface circuit 2002. The processor 2001 and the interface circuit 2002 may be interconnected by wires. For example, the interface circuit 2002 may be used to receive signals from other devices (e.g., a memory of an electronic device). Also for example, the interface circuit 2002 may be used to send signals to other devices, such as the processor 2001. Illustratively, the interface circuit 2002 may read instructions stored in a memory and send the instructions to the processor 2001. The instructions, when executed by the processor 2001, may cause the electronic device to perform the various steps performed by the handset in the embodiments described above. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
The present embodiment also provides a computer-readable storage medium, in which computer instructions are stored, and when the computer instructions are executed on an electronic device, the electronic device is caused to execute each function or step performed by the mobile phone in the above method embodiments.
The present embodiment also provides a computer program product, which when running on a computer, causes the computer to execute the functions or steps performed by the mobile phone in the above method embodiments.
In addition, embodiments of the present application also provide an apparatus, which may specifically be a chip, a component or a module, and may include a processor and a memory connected to each other; when the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the chip can execute each function or step executed by the mobile phone in the above method embodiments.
In addition, the electronic device, the communication system, the computer-readable storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the communication system, the computer-readable storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the module or unit is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may essentially, or the part contributing to the prior art, or all or part of the technical solutions, be embodied in the form of a software product, where the software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present application and not for limiting, and although the present application is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made on the technical solutions of the present application without departing from the spirit and scope of the technical solutions of the present application.

Claims (17)

1. A method for determining a hearing model, applied to an electronic device, the method comprising:
playing horizontal test audio, wherein the horizontal test audio comprises a plurality of first sound signals with different frequencies and horizontal directions and the same amplitude;
receiving a horizontal test result fed back by a user, wherein the horizontal test result comprises horizontal perception positions of the user on the plurality of first sound signals;
playing vertical test audio, wherein the vertical test audio comprises a plurality of second sound signals which have different amplitudes and vertical directions and the same frequency spectrum;
receiving a vertical test result fed back by a user, wherein the vertical test result comprises a vertical perception position of the user on the plurality of second sound signals;
and obtaining a hearing model of the user based on the horizontal test result and the vertical test result, wherein the hearing model comprises a first lowest amplitude at which the user can perceive sound signals of respective frequencies, and the hearing model is used for hearing compensation.
2. The method of claim 1, wherein the playing horizontal test audio comprises:
playing the horizontal test audio through an earphone connected with the electronic equipment;
the playing of the vertical test audio comprises:
and playing the vertical test audio through the earphone.
3. The method of claim 2, further comprising:
acquiring a first feedforward signal acquired by a feedforward microphone in a left earplug of the earphone and acquiring a second feedforward signal acquired by a feedforward microphone in a right earplug of the earphone;
the playing the horizontal test audio through an earphone connected with the electronic device, and the playing the vertical test audio through the earphone, comprising:
and under the condition that the first feedforward signal is smaller than a first threshold value and the second feedforward signal is smaller than the first threshold value, playing the horizontal test audio through the earphone connected with the electronic equipment, and playing the vertical test audio through the earphone.
4. The method of claim 3, further comprising:
acquiring a first feedback signal acquired by a feedback microphone in the left earplug and acquiring a second feedback signal acquired by a feedback microphone in the right earplug;
the playing the horizontal test audio through an earphone connected with the electronic device, and the playing the vertical test audio through the earphone, comprising:
playing the horizontal test audio through the earphone connected with the electronic device and playing the vertical test audio through the earphone if a difference between the first feedforward signal and the first feedback signal is greater than a second threshold and a difference between the second feedforward signal and the second feedback signal is greater than the second threshold.
5. The method of claim 2, further comprising:
sending the same preset audio to a left earplug and a right earplug of the earphone, wherein the preset audio is used for testing the hardware difference of a loudspeaker of the left earplug and the loudspeaker of the right earplug;
obtaining a third feedback signal collected by a feedback microphone in the left earplug and a fourth feedback signal collected by a feedback microphone in the right earplug, the third feedback signal including a result of the speaker of the left earplug playing the preset audio, the fourth feedback signal including a result of the speaker of the right earplug playing the preset audio;
determining a calibration coefficient that makes the third feedback signal and the fourth feedback signal consistent;
sending the calibration coefficient to a first earplug, the first earplug being either the left earplug or the right earplug;
the playing the horizontal test audio through an earphone connected with the electronic device, and the playing the vertical test audio through the earphone, comprising:
playing the calibrated horizontal test audio after the first earplug calibrates the horizontal test audio using the calibration coefficient, and playing the uncalibrated horizontal test audio through a second earplug; and
playing the calibrated vertical test audio after the first earplug calibrates the vertical test audio using the calibration coefficient, and playing the uncalibrated vertical test audio through the second earplug;
wherein the second earplug is the one of the left and right earplugs other than the first earplug.
6. The method of any of claims 2-5, wherein the obtaining a hearing model of the user based on the horizontal test result and the vertical test result comprises:
determining a horizontal angular loss of hearing of the user based on the horizontal test result, the horizontal angular loss comprising an angular deviation of a horizontal perceptual orientation of the user to the sound signals of the respective frequencies;
determining a vertical angle loss of the user's hearing based on the vertical test results, the vertical angle loss comprising an angular deviation of a user's vertical perceptual orientation of sound signals of respective magnitudes;
calculating a hearing model of the user based on the horizontal angle loss and the vertical angle loss.
7. The method according to claim 6, wherein the electronic device comprises a first corresponding relationship and a second corresponding relationship, the first corresponding relationship comprises a corresponding relationship between a change in amplitude and a change in horizontal perception orientation of the sound signal for the population without hearing loss, and the second corresponding relationship comprises a corresponding relationship between a change in frequency and a change in vertical perception orientation of the sound signal for the population without hearing loss;
said computing a hearing model of said user based on said horizontal angle loss and said vertical angle loss, comprising:
converting the horizontal angle loss into a relative amplitude loss of both ears of the user based on the first corresponding relationship, wherein the relative amplitude loss comprises the amplitude differences perceived between the user's left ear and right ear for the sound signals of the respective frequencies;
converting the vertical angle loss into a frequency loss model for both ears of the user based on the second correspondence, the frequency loss model including a second lowest amplitude at which the user can perceive sound signals of respective frequencies;
and correcting the frequency loss model based on the relative amplitude loss to obtain the hearing model.
8. The method of claim 7, wherein said modifying the frequency loss model based on the relative amplitude loss to derive the hearing model comprises:
adding the second lowest amplitude corresponding to a first frequency in the frequency loss model to the amplitude difference corresponding to the first frequency in the relative amplitude loss, to obtain the first lowest amplitude corresponding to the first frequency in the hearing model;
wherein the first frequency is any frequency included in the frequency loss model.
9. The method of claim 6, further comprising:
subtracting the first absolute amplitude loss from the amplitudes of the plurality of second sound signals in the vertical test audio to obtain a plurality of left ear test signals;
subtracting a second absolute amplitude loss from the amplitudes of the plurality of second sound signals to obtain a plurality of right ear test signals, wherein the plurality of left ear test signals and the plurality of right ear test signals form the updated vertical test audio;
the first absolute amplitude loss is the difference value between the lowest audible amplitude of the crowd without hearing loss and the lowest audible amplitude of the left ear of the user within a preset frequency range, and the second absolute amplitude loss is the difference value between the lowest audible amplitude of the crowd without hearing loss and the lowest audible amplitude of the right ear of the user within the preset frequency range;
the playing the vertical test audio through the headphones comprises:
playing the plurality of left ear test signals through a left ear plug of the headset and playing the plurality of right ear test signals through a right ear plug of the headset.
10. The method according to claim 9, wherein the electronic device comprises a first corresponding relationship and a second corresponding relationship, the first corresponding relationship comprises a corresponding relationship between a change in amplitude and a change in horizontal perception orientation of the sound signal for the population without hearing loss, and the second corresponding relationship comprises a corresponding relationship between a change in frequency and a change in vertical perception orientation of the sound signal for the population without hearing loss;
said computing a hearing model of said user based on said horizontal angle loss and said vertical angle loss, comprising:
converting the horizontal angle loss into a relative amplitude loss of both ears of the user based on the first corresponding relationship, wherein the relative amplitude loss comprises the amplitude differences perceived between the user's left ear and right ear for the sound signals of the respective frequencies;
correcting the relative amplitude loss based on the first absolute amplitude loss and the second absolute amplitude loss to obtain a binaural amplitude loss;
converting the vertical angle loss into a frequency loss model for both ears of the user based on the second correspondence, the frequency loss model including a second lowest amplitude at which the user can perceive sound signals of respective frequencies;
and correcting the frequency loss model based on the binaural amplitude loss to obtain the hearing model.
11. The method of claim 10, wherein said modifying the relative amplitude loss based on the first absolute amplitude loss and the second absolute amplitude loss to obtain a binaural amplitude loss comprises:
and adding a difference value between the first absolute amplitude loss and the second absolute amplitude loss to a first amplitude difference value corresponding to a second frequency in the relative amplitude loss to obtain a second amplitude difference value corresponding to the second frequency in the binaural amplitude loss.
12. The method of claim 1, wherein after the obtaining a hearing model of the user based on the horizontal test result and the vertical test result, the method further comprises:
and increasing a sound signal of a third frequency in the audio to be played by a preset amplitude, and then playing the audio, wherein the third frequency is any frequency of the sound signal included in the audio to be played, and the preset amplitude is a difference value between a first lowest amplitude corresponding to the third frequency in the hearing model and a third lowest amplitude of the sound signal of the third frequency, which can be heard by people without hearing loss.
13. The method according to claim 1, wherein the horizontal test audio comprises a first horizontal test audio and a second horizontal test audio, and an order of the frequencies corresponding to the first sound signals sequentially appearing in the first horizontal test audio and an order of the frequencies corresponding to the first sound signals sequentially appearing in the second horizontal test audio are not identical.
14. The method of claim 1, wherein the vertical test audio comprises a first vertical test audio and a second vertical test audio, and wherein an order of the plurality of amplitudes corresponding one-to-one to the plurality of second sound signals sequentially occurring in the first vertical test audio and an order of the plurality of amplitudes corresponding one-to-one to the plurality of second sound signals sequentially occurring in the second vertical test audio are not identical.
15. An electronic device, wherein the electronic device comprises a memory and one or more processors; the memory and the processor are coupled; the memory is for storing computer program code, the computer program code comprising computer instructions that, when executed by the processor, cause the electronic device to perform the method of any of claims 1-14.
16. A computer-readable storage medium having instructions stored therein, which when run on an electronic device, cause the electronic device to perform the method of any of claims 1-14.
17. A communication system, characterized in that the communication system comprises an electronic device for performing the method of any of claims 1-14, and comprises a headset for playing test audio.
CN202211414959.0A 2022-11-11 2022-11-11 Method for determining hearing model, electronic equipment and system Active CN115460526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211414959.0A CN115460526B (en) 2022-11-11 2022-11-11 Method for determining hearing model, electronic equipment and system

Publications (2)

Publication Number Publication Date
CN115460526A CN115460526A (en) 2022-12-09
CN115460526B true CN115460526B (en) 2023-03-28

Family

ID=84295446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211414959.0A Active CN115460526B (en) 2022-11-11 2022-11-11 Method for determining hearing model, electronic equipment and system

Country Status (1)

Country Link
CN (1) CN115460526B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100413A (en) * 2015-05-27 2015-11-25 努比亚技术有限公司 Information processing method, device and terminal
CN112690782A (en) * 2020-12-22 2021-04-23 惠州Tcl移动通信有限公司 Hearing compensation test method, intelligent terminal and computer readable storage medium
CN113827227A (en) * 2021-08-10 2021-12-24 北京塞宾科技有限公司 Hearing test signal generation method, hearing test method, storage medium, and device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100730297B1 (en) * 2005-05-31 2007-06-19 한국과학기술원 Sound source localization method using Head Related Transfer Function database
US9363602B2 (en) * 2012-01-06 2016-06-07 Bit Cauldron Corporation Method and apparatus for providing virtualized audio files via headphones
JP6596896B2 (en) * 2015-04-13 2019-10-30 株式会社Jvcケンウッド Head-related transfer function selection device, head-related transfer function selection method, head-related transfer function selection program, sound reproduction device
DE102018201605A1 (en) * 2018-02-02 2019-08-08 Continental Teves Ag & Co. Ohg Method and device for locating and tracking acoustic active sources
TWI693926B (en) * 2019-03-27 2020-05-21 美律實業股份有限公司 Hearing test system and setting method thereof
CN110544532B (en) * 2019-07-27 2023-07-18 华南理工大学 Sound source space positioning capability detection system based on APP
JP7358010B2 (en) * 2019-07-29 2023-10-10 アルパイン株式会社 Head-related transfer function estimation model generation device, head-related transfer function estimating device, and head-related transfer function estimation program
CN113317781B (en) * 2021-04-12 2023-07-21 中国人民解放军总医院第六医学中心 Audiometric system and method for testing sound source positioning capability
CN113520377B (en) * 2021-06-03 2023-07-04 广州大学 Virtual sound source positioning capability detection method, system, device and storage medium

Also Published As

Publication number Publication date
CN115460526A (en) 2022-12-09

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant