CN118102175A - Audio processing method, device and storage medium - Google Patents
- Publication number
- CN118102175A (application number CN202410397174.XA)
- Authority
- CN
- China
- Prior art keywords
- audio
- loudness
- hearing
- audio signal
- frequency point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
  - H04R3/00—Circuits for transducers, loudspeakers or microphones
  - H04R2430/00—Signal processing covered by H04R, not provided for in its groups
    - H04R2430/01—Aspects of volume control, not necessarily automatic, in sound systems
    - H04R2430/03—Synergistic effects of band splitting and sub-band processing
Abstract
The application provides an audio processing method, device, and storage medium that can protect a user's hearing. The method comprises the following steps: acquiring audio data; when the audio data includes a first audio signal at a first frequency point, acquiring the user's hearing test data, which includes the user's hearing impairment values at a plurality of test frequency points; determining, from the hearing test data, a loudness gain amount corresponding to the first frequency point, where the gain corresponds to the user's hearing impairment value at that frequency point; increasing the loudness of the first audio signal by the loudness gain amount; and outputting the first audio signal at the increased loudness.
Description
Technical Field
The present application relates to the field of audio data, and in particular, to an audio processing method, apparatus, and storage medium.
Background
When using a terminal device such as a mobile phone, the human ear is the main channel through which the user receives the audio experience the device provides.
As people pay increasing attention to hearing health, how to protect a user's hearing while a terminal device plays audio has become an urgent problem.
Disclosure of Invention
To solve this technical problem, the application provides an audio processing method, device, and storage medium that can protect the user's hearing, achieve the purpose of ear protection, and improve the listening experience.
In a first aspect, the present application provides an audio processing method comprising the following steps. Audio data is acquired. When the audio data includes a first audio signal at a first frequency point, hearing test data of the user is acquired; the hearing test data includes the user's hearing impairment values at a plurality of test frequency points. A loudness gain amount corresponding to the first frequency point is determined from the hearing test data; the gain corresponds to the user's hearing impairment value at that frequency point. The loudness of the first audio signal is increased by the loudness gain amount, and the first audio signal is output at the increased loudness.
Thus, when the audio data includes a first audio signal at a first frequency point, the loudness gain amount for that frequency point can be determined from the user's hearing test data. Because the gain corresponds to the user's hearing impairment value at the first frequency point, it compensates for that impairment. After the first audio signal is output at the increased loudness, there is no need to raise the overall output volume: a hearing-impaired ear can hear the first audio signal at its original perceived volume, while ears without impairment are not damaged by excessive volume. Hearing protection for the human ear is thereby achieved.
Illustratively, the audio processing circuit may acquire the audio data in response to a user's audio playback operation, i.e., any operation that plays an audio signal. For example, it may be making or answering a call on the call interface; a tap on the play control of a video or music playback interface; a voice navigation operation; or playing a voice message in a chat interface.
As yet another example, the audio processing circuit may acquire audio data when it detects an audio playback trigger condition. For example, when the terminal device detects an incoming call or text message, it may acquire the audio data of the ringtone or message alert tone.
The first frequency point may be a hearing-impaired frequency point, that is, a frequency point at which the user's ear has hearing impairment. For example, if the user's hearing impairment value at a frequency point is greater than or equal to a preset hearing impairment threshold, that frequency point is a first frequency point.
Illustratively, the hearing test data may comprise a list of hearing test data, a hearing test function or a hearing test curve.
For example, the loudness gain amount of the first frequency point may be derived from its hearing impairment value, with each hearing impairment value mapping to one loudness gain amount. The gain may simply equal the hearing impairment value at the first frequency point, or it may be the sum of that impairment value and other loudness losses that affect the loudness of the audio signal, such as transmission loss or noise loss.
For example, a hearing impairment value for the first frequency point may be determined from the hearing test data, and the corresponding loudness gain amount determined from that value. Alternatively, hearing compensation data may first be derived from the hearing test data, and the loudness gain of the first frequency point determined from the compensation data.
Illustratively, the loudness gain amount may be added to the initial loudness value to yield the target loudness value, and the first audio signal is then output at the target loudness value.
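A minimal sketch of this additive gain rule in Python (the function names and dB units are assumptions for illustration; the description only states that the gain equals the impairment value, optionally plus other losses, and that the target loudness is the initial loudness plus the gain):

```python
def loudness_gain_db(impairment_db, other_losses_db=0.0):
    # The loudness gain amount equals the user's hearing impairment value at
    # the first frequency point, optionally plus other loudness losses such
    # as transmission or noise loss.
    return impairment_db + other_losses_db

def target_loudness_db(initial_db, impairment_db, other_losses_db=0.0):
    # Target loudness value = initial loudness value + loudness gain amount.
    return initial_db + loudness_gain_db(impairment_db, other_losses_db)

print(target_loudness_db(60.0, 15.0))       # 75.0
print(target_loudness_db(60.0, 15.0, 5.0))  # 80.0
```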
According to the first aspect, the audio processing circuit comprises a first processing module. Obtaining the hearing test data of the user comprises: the first processing module judging whether the audio data includes the first audio signal, and obtaining the hearing test data when it does. Determining the loudness gain amount corresponding to the first frequency point according to the hearing test data comprises: the first processing module determining the gain from the hearing test data. Increasing the loudness of the first audio signal by the loudness gain amount comprises: the first processing module adding the loudness gain amount to the initial loudness value to obtain the target loudness value.
In this way the first processing module gains the loudness of the first audio signal in software, which makes the adjustment easier to perform.
For example, in case the first audio signal is an analog signal, the voltage of the first audio signal may be adjusted. In yet another example, where the first audio signal is a digital signal, the duty cycle or amplitude of the first audio signal may be adjusted.
According to the first aspect, or any implementation manner of the first aspect, the method further includes: in the case that the audio data includes a second audio signal of a second frequency point, the first processing module outputs the second audio signal at an initial loudness value.
In this way the second audio signal is output at its initial loudness value, avoiding damage to the user's hearing from an excessively loud second audio signal and further improving the ear-protection effect.
The second frequency point may be a frequency point at which the user has no hearing impairment, and may therefore be called a hearing-normal frequency point. Optionally, the user's hearing impairment at the second frequency point is less than the preset hearing impairment threshold.
According to a first aspect, or any implementation manner of the first aspect, the first processing module is included in a terminal device, and the terminal device is connected to an audio playing device, where the audio playing device is configured to output the first audio signal at an increased loudness under the control of the first processing module.
In this way the ear-protection processing of the audio signal is completed on the terminal-device side, i.e., the source side, rather than on the audio playback device, i.e., the sink side. Ear protection can therefore be achieved for audio playback devices of various circuit structures and models, widening the applicability of the ear-protection scheme provided by the embodiments of the application.
Illustratively, the first processing module may be a processing module such as an SoC or a CODEC. The audio playback device may be a headset such as a wireless or wired earphone, or an external playback device such as a Bluetooth speaker or a smart watch.
According to the first aspect, or any implementation thereof, the audio processing circuit includes a frequency discriminator, a second processing module, and a variable resistor. One end of the frequency discriminator is connected to a power amplifier of the electronic device and the other end to the second processing module; the other end of the second processing module is connected to the control terminal of the variable resistor; a first connection terminal of the variable resistor is connected to the power amplifier and its other terminal to an audio playback module. Increasing the loudness of the first audio signal by the loudness gain amount then includes: the frequency discriminator detects whether the audio data, output by the power amplifier at a preset loudness value, includes the first audio signal; if it does, the frequency discriminator sends a first prompt signal to the second processing module, which acquires the hearing test data in response. The second processing module determines the loudness attenuation amount of the first audio signal from the loudness gain amount, the initial loudness value, and the preset loudness value; determines a second impedance of the variable resistor from the attenuation amount and the first impedance of the audio playback module; and controls the variable resistor to adjust to the second impedance, so that the variable resistor attenuates the loudness of the first audio signal from the preset loudness value to the target loudness value, the sum of the initial loudness value and the loudness gain amount.
In this way the loudness of the first audio signal is adjusted through the variable resistor, improving the adjustment precision of the first audio signal and hence the precision of the ear protection.
For example, the frequency discriminator may check whether the frequency of the current audio signal matches a pre-loaded first-frequency-point value to detect whether the first frequency point is present in the audio data.
Illustratively, the second processing module may be a processor in an example of the present application. The second processing module may be a module with a control function, such as an SOC or a CODEC.
Illustratively, the second impedance of the variable resistor may be determined according to equation (1) in an embodiment of the present application.
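Equation (1) itself is not reproduced in this text. As an illustrative sketch only, assume the variable resistor and the playback load form a simple resistive divider, so that the attenuation in dB is 20·log10(r_load / (r_var + r_load)); solving for r_var gives the impedance the second processing module would set:

```python
def required_attenuation_db(preset_db, initial_db, gain_db):
    # The attenuation brings the power amplifier's preset loudness down to
    # the target loudness (initial loudness value + loudness gain amount).
    return preset_db - (initial_db + gain_db)

def variable_resistance(attenuation_db, r_load):
    # Assumed divider model: solve
    #   20 * log10(r_load / (r_var + r_load)) = -attenuation_db
    # for r_var. r_load plays the role of the first impedance (earpiece).
    ratio = 10.0 ** (-attenuation_db / 20.0)
    return r_load * (1.0 / ratio - 1.0)

att = required_attenuation_db(90.0, 60.0, 10.0)   # 20 dB to shed
print(round(variable_resistance(att, 32.0), 1))   # 288.0 ohm for a 32-ohm earpiece
```

Under the same model, a second audio signal (no gain) needs the full preset-to-initial attenuation, which yields a larger resistance; this matches the claim below that the third impedance exceeds the second.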
According to the first aspect, or any implementation manner of the first aspect, the method further includes: the frequency discriminator sends a second prompt signal to the second processing module under the condition that the audio data comprise second audio data of a second frequency point; the second processing module determines the loudness attenuation of the second audio signal based on the initial loudness value and the preset loudness value; determining a third impedance of the variable resistor based on the loudness attenuation of the second audio signal and the first impedance of the audio playing module; the resistance value of the variable resistor is controlled to be adjusted to be a third impedance, so that the loudness of the second audio signal is attenuated from the preset loudness value to the initial loudness value through the variable resistor of the third impedance, and the third impedance is larger than the second impedance.
In this way the hearing-normal frequency points are attenuated through the third impedance and the hearing-impaired frequency points through the second impedance. Because the third impedance is larger than the second, the attenuated first audio signal is louder than the second audio signal, so a net loudness gain is applied to the first audio signal and the purpose of ear protection is achieved.
Illustratively, the first impedance may be an earpiece resistance in embodiments of the present application. The second impedance may be a first impedance value in the embodiment of the present application, and the third impedance may be a second impedance value in the embodiment of the present application.
Illustratively, the third impedance may be determined according to equation (1) in an embodiment of the present application.
According to the first aspect, or any implementation thereof, the hearing impairment value at the frequency point of the first audio signal is greater than or equal to the preset hearing impairment threshold, while that of the second audio signal is smaller than the threshold.
In this way hearing-impaired and hearing-normal frequency points are accurately distinguished by the preset hearing impairment threshold and output at different loudness values, achieving the purpose of ear protection.
Illustratively, the loudness value of the first audio signal after the increase is greater than the loudness value of the second audio signal.
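The threshold rule above can be sketched as a one-line classifier. The threshold value itself is a hypothetical choice, since the description only says it is preset:

```python
HEARING_IMPAIRMENT_THRESHOLD_DB = 20.0  # assumed value; the text only says "preset"

def classify_frequency_point(impairment_db,
                             threshold_db=HEARING_IMPAIRMENT_THRESHOLD_DB):
    # impairment >= threshold -> "first" (hearing-impaired) frequency point,
    #   whose signal is loudness-gained;
    # impairment <  threshold -> "second" (hearing-normal) frequency point,
    #   whose signal is output at the initial loudness value.
    return "first" if impairment_db >= threshold_db else "second"

print(classify_frequency_point(25.0))  # first
print(classify_frequency_point(5.0))   # second
```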
According to a first aspect, or any implementation manner of the first aspect, obtaining hearing test data of a user includes: acquiring an audio playing mode of audio data; acquiring gesture information of the electronic equipment under the condition that the audio playing mode is a receiver playing mode; determining a target ear of the user receiving the audio data according to the gesture information; hearing test data of the target ear is acquired.
In this way, when the user listens through the earpiece, the target ear receiving the audio signal can be determined from the posture of the electronic device, and the loudness of the first audio signal can then be adjusted accurately from that ear's hearing test data. This gives precise hearing protection for the target ear and further improves the user's listening experience.
Illustratively, the audio playback mode of the audio data may be acquired, and the hearing test data corresponding to that mode obtained, so that the target ear for that mode is protected according to the matching data. For example, in the earpiece playback mode, the hearing test data of the target ear is obtained, the loudness gain of the first audio signal is determined from it, and the gained first audio signal is output to the target ear. In the speaker playback mode, the hearing test data of both ears is obtained, the loudness gain is determined from it, and the gained first audio signal is played. In the headphone playback mode, the left ear's hearing test data is obtained, the corresponding loudness gain determined, and the gained first audio signal output to the left earphone; likewise the right ear's hearing test data is obtained, the corresponding gain determined, and the gained signal output to the right earphone.
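A hypothetical sketch of selecting hearing test data by playback mode, as described above. The mode names, dict shapes, and the `target_ear` parameter are illustrative assumptions, not the patent's API:

```python
def select_test_data(mode, left_ear, right_ear, target_ear=None):
    """left_ear / right_ear map test frequency (Hz) -> hearing impairment (dB)."""
    if mode == "earpiece":
        # Use the ear the device was raised to (detected from device posture).
        return left_ear if target_ear == "left" else right_ear
    if mode == "speaker":
        # Both ears hear the speaker; the per-bin minimum avoids over-amplifying
        # into the less-impaired ear.
        shared = left_ear.keys() & right_ear.keys()
        return {f: min(left_ear[f], right_ear[f]) for f in shared}
    if mode == "headphones":
        # Each channel is compensated with its own ear's data.
        return {"left": left_ear, "right": right_ear}
    raise ValueError(f"unknown playback mode: {mode}")
```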
In the earpiece playing mode, the first frequency point is a frequency point where the target ear of the user has hearing impairment.
For example, whether the target ear is the left or right ear may be determined from the rotation angle of the terminal device about the y-axis: if the device rotates clockwise, the target ear is the right ear; if it rotates anticlockwise, the left ear.
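This rotation rule reduces to a sign check on the y-axis rotation. The sign convention (positive meaning clockwise) is an assumption for illustration:

```python
def target_ear_from_rotation(y_rotation_deg):
    # Clockwise rotation of the terminal device about its y-axis selects
    # the right ear; anticlockwise selects the left ear.
    # Positive angle = clockwise is an assumed convention.
    return "right" if y_rotation_deg > 0 else "left"

print(target_ear_from_rotation(35.0))   # right
print(target_ear_from_rotation(-35.0))  # left
```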
According to a first aspect, or any implementation manner of the first aspect, obtaining hearing test data of a user includes: acquiring an audio playing mode of audio data; judging whether the audio data comprises audio signals of a third frequency point or not under the condition that the audio playing mode is an earphone playing mode, wherein the third frequency point is a first frequency point of the left ear of a user; under the condition that the audio data comprises the audio signal of the third frequency point, acquiring first hearing test data, wherein the first hearing test data is the test data of the left ear in the hearing test data; according to the hearing test data, determining the loudness gain quantity corresponding to the first frequency point comprises:
according to the first hearing test data, determining a loudness gain quantity corresponding to the third frequency point; increasing the loudness of the first audio signal by a loudness gain amount, comprising: based on the loudness gain quantity corresponding to the third frequency point, the loudness of the audio signal of the third frequency point is increased to a first loudness value; outputting the first audio signal at the increased loudness, comprising: and controlling the left earphone to output the audio signal of the third frequency point at the first loudness value.
With this embodiment, in the headphone playback mode the third-frequency-point audio signal is loudness-gained using the left ear's hearing impairment data to generate the left-ear audio signal. The left ear is thus accurately protected in headphone mode, further improving the listening experience.
The third frequency bin may be, for example, a frequency bin at which the left ear of the user has hearing impairment.
According to a first aspect, or any implementation manner of the first aspect, the obtaining hearing test data of a user further comprises: judging whether the audio data comprises audio data of a fourth frequency point, wherein the fourth frequency point is a first frequency point of the right ear of the user under the condition that the audio playing mode is an earphone playing mode; acquiring second hearing test data under the condition that the audio data comprise audio signals of a fourth frequency point, wherein the second hearing test data are test data of a right ear in the hearing test data; according to the hearing test data, determining the loudness gain amount corresponding to the first frequency point, and further comprises: according to the second hearing test data, determining a loudness gain quantity corresponding to the fourth frequency point; increasing the loudness of the first audio signal by a loudness gain amount, further comprising: based on the loudness gain quantity corresponding to the fourth frequency point, the loudness of the audio signal of the fourth frequency point is increased to a second loudness value; outputting the first audio signal at the increased loudness, further comprising: and controlling the right earphone to output the audio signal of the fourth frequency point at the second loudness value.
With this embodiment, in the headphone playback mode the fourth-frequency-point audio signal is loudness-gained using the right ear's hearing impairment data to generate the right-ear audio signal. The right ear is thus accurately protected in headphone mode, further improving the listening experience.
For example, the fourth frequency point may be a frequency point at which the user's right ear has hearing impairment.
According to a first aspect, or any implementation manner of the first aspect, obtaining hearing test data of a user includes: acquiring an audio playing mode of audio data; under the condition that the audio playing mode is a loudspeaker playing mode, acquiring first hearing test data of a user and second hearing test data of the user, wherein the first hearing test data comprise first hearing damage values of the left ear of the user at a plurality of frequency points, and the second hearing test data comprise second hearing damage values of the right ear of the user at a plurality of frequency points; and determining the hearing test data based on the first hearing test data and the second hearing test data, wherein the hearing damage value of the single test frequency point in the hearing test data is the smaller value of the first hearing damage value of the single test frequency point and the second hearing damage value of the single test frequency point.
In this way, in the speaker playback mode, hearing compensation is applied to frequency points at which both ears are impaired: the loudness at such points is raised, while the ear with less impairment is not damaged by excessive loudness, balancing compensation accuracy against hearing health.
According to the first aspect, or any implementation thereof, the audio processing circuit further includes an audio playback module, and outputting the first audio signal at the increased loudness includes: controlling the audio playback module to output the first audio signal at the increased loudness. The audio playback module is one of a speaker of the terminal device, an earpiece of the terminal device, a screen sound-emitting device of the terminal device, a speaker of a headphone, or a speaker of a playback device, the playback device being an audio playback device other than headphones connected to the terminal device.
With this embodiment, the ear-protection function can be realized on various devices with an audio playback function, widening its application range.
The playback device may be a smart watch, a smart speaker, a television, a computer, a projector, or another device to which the terminal device casts its screen.
According to the first aspect, or any implementation thereof, before the hearing test data of the user is acquired the method further comprises: for each test frequency point, playing the user at least one test audio for that point, each test audio corresponding to one loudness value; determining the user's hearing impairment value at that frequency point from the user's feedback on the test audio; and generating the hearing test data from the impairment values of the plurality of test frequency points.
In this way an accurate hearing test can be performed on the user's ears through the terminal device, improving both the convenience and the accuracy of the test.
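The test procedure above can be sketched as an ascending-level sweep per frequency point. The callback signatures and the level ladder are assumptions for illustration; the description only specifies playing test audio per frequency point and using the user's feedback:

```python
def measure_impairment_db(freq_hz, play_tone, user_heard,
                          levels_db=(0, 10, 20, 30, 40)):
    """Return the lowest test level the user reports hearing at freq_hz.

    play_tone(freq_hz, level_db) plays one test audio; user_heard() returns
    the user's feedback for it. Both callbacks are hypothetical.
    """
    for level in levels_db:
        play_tone(freq_hz, level)
        if user_heard():
            return level        # hearing threshold found; impairment tracks it
    return levels_db[-1]        # no response: impairment at least the maximum tested

def build_hearing_test_data(freqs_hz, play_tone, user_heard):
    # Hearing test data: one impairment value per test frequency point.
    return {f: measure_impairment_db(f, play_tone, user_heard) for f in freqs_hz}
```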
In a second aspect, an embodiment of the present application provides an audio processing circuit, including: the audio acquisition module is used for acquiring audio data; the test data acquisition module is used for acquiring hearing test data of a user under the condition that the audio data comprises first audio signals of a first frequency point, wherein the hearing test data comprises hearing impairment values of the user at a plurality of test frequency points; the gain amount determining module is used for determining a loudness gain amount corresponding to the first frequency point according to the hearing test data, wherein the loudness gain amount corresponds to a hearing damage value of a user at the first frequency point; a loudness adjustment module for increasing the loudness of the first audio signal by a loudness gain amount; and the audio output module is used for outputting the first audio signal at the increased loudness.
According to the second aspect, the audio processing circuit includes a first processing module and the audio output module, where the first processing module includes the audio acquisition module, the test data acquisition module, the gain amount determining module, and the loudness adjustment module;
The first processing module is configured to judge whether the audio data includes the first audio signal; to acquire the hearing test data when the audio data includes the first audio signal; to determine, according to the hearing test data, the loudness gain amount corresponding to the first frequency point; to increase the loudness by the loudness gain amount on the basis of the initial loudness value to obtain a target loudness value; and to control the audio output module to output the first audio signal at the target loudness value.
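Illustratively, the control flow of the first processing module can be sketched as follows. The function names, the representation of the audio data as a set of frequency points, and the rule that the gain amount equals the hearing impairment value at the first frequency point are simplifying assumptions for illustration:

```python
# Hypothetical sketch of the first processing module's control flow.

def determine_loudness_gain(hearing_test_data, freq_hz):
    """Gain amount corresponds to the hearing impairment value at freq_hz."""
    return hearing_test_data.get(freq_hz, 0)

def process(audio_frequencies, first_freq_hz, initial_loudness_db, hearing_test_data):
    """Return the loudness at which the first audio signal is output."""
    if first_freq_hz not in audio_frequencies:   # judge whether the audio data
        return initial_loudness_db               # includes the first audio signal
    gain = determine_loudness_gain(hearing_test_data, first_freq_hz)
    target = initial_loudness_db + gain          # target = initial + gain amount
    return target                                # output at the target loudness

hearing_test_data = {125: 26, 8000: 5}           # illustrative impairment values
print(process({125, 1000}, 125, 40, hearing_test_data))  # 66
```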
According to a second aspect, or any implementation manner of the second aspect, the first processing module is included in a terminal device.
According to the second aspect, or any implementation manner of the second aspect above, the audio processing circuit includes: a frequency discriminator, a second processing module, a variable resistor, and an audio playing module;
The frequency discriminator includes the audio acquisition module; one end of the frequency discriminator is connected to a power amplifier of the terminal device, and the other end is connected to one end of the second processing module. The frequency discriminator is configured to acquire the audio signal output by the power amplifier, which has a preset loudness value, and to detect whether the audio data includes the first audio signal; the frequency discriminator is further configured to send a first prompt signal to the second processing module when the audio data includes the first audio signal;
The second processing module includes the test data acquisition module, the gain amount determining module, and the loudness adjustment module; the other end of the second processing module is connected to the control end of the variable resistor. The second processing module is configured to acquire the hearing test data in response to the first prompt signal; to determine the loudness attenuation amount of the first audio signal based on the loudness gain amount, the initial loudness value, and the preset loudness value; to determine a second impedance of the variable resistor based on the loudness attenuation amount and a first impedance of the audio playing module; and to control the resistance of the variable resistor to be adjusted to the second impedance, so that the variable resistor at the second impedance attenuates the loudness of the first audio signal from the preset loudness value to a target loudness value, where the target loudness value is the sum of the initial loudness value and the loudness gain amount.
The first connection end of the variable resistor is connected to the power amplifier, and the second connection end is connected to the audio playing module.
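Illustratively, one way the second processing module could map a loudness attenuation amount to the second impedance is a simple series voltage-divider model of the variable resistor and the audio playing module. This model, the 32-ohm load, and all names below are assumptions for illustration, not the circuit actually claimed:

```python
# Assumed voltage-divider model: the variable resistor sits in series with the
# audio playing module (first impedance), so an attenuation of A dB requires
#   A = 20 * log10((R_var + R_load) / R_load)
#   => R_var = R_load * (10**(A/20) - 1)

def second_impedance(attenuation_db, first_impedance_ohm):
    """Series resistance giving attenuation_db across the load."""
    ratio = 10.0 ** (attenuation_db / 20.0)
    return first_impedance_ohm * (ratio - 1.0)

# Target loudness = initial loudness + loudness gain amount; the attenuation
# brings the preset loudness down to the target loudness.
preset_db, initial_db, gain_db = 90.0, 60.0, 10.0   # illustrative values
target_db = initial_db + gain_db          # 70 dB
attenuation_db = preset_db - target_db    # 20 dB
r2 = second_impedance(attenuation_db, 32.0)  # 32-ohm audio playing module assumed
print(round(r2))  # 288
```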
The second aspect and any implementation manner thereof correspond to the first aspect and any implementation manner thereof, respectively. For the technical effects of the second aspect and any implementation manner thereof, reference may be made to the technical effects of the first aspect and its corresponding implementation manners, which are not repeated here.
In a third aspect, the present application provides a computer-readable medium storing a computer program, the computer program comprising instructions for performing the method of the first aspect or any possible implementation manner of the first aspect.
The third aspect and any implementation manner thereof correspond to the first aspect and any implementation manner thereof, respectively. For the technical effects of the third aspect and any implementation manner thereof, reference may be made to the technical effects of the first aspect and its corresponding implementation manners, which are not repeated here.
In a fourth aspect, the present application provides a computer program comprising instructions for performing the method of the first aspect or any possible implementation of the first aspect.
The fourth aspect and any implementation manner thereof correspond to the first aspect and any implementation manner thereof, respectively. For the technical effects of the fourth aspect and any implementation manner thereof, reference may be made to the technical effects of the first aspect and its corresponding implementation manners, which are not repeated here.
In a fifth aspect, the present application provides a chip including a processing circuit and transceiver pins. The transceiver pins and the processing circuit communicate with each other via an internal connection path; the processing circuit performs the method of the first aspect or any possible implementation manner of the first aspect to control the receive pin to receive signals and the transmit pin to transmit signals.
The fifth aspect and any implementation manner thereof correspond to the first aspect and any implementation manner thereof, respectively. For the technical effects of the fifth aspect and any implementation manner thereof, reference may be made to the technical effects of the first aspect and its corresponding implementation manners, which are not repeated here.
Drawings
FIG. 1 is a schematic diagram of an exemplary application scenario;
FIG. 2 is a schematic diagram of another exemplary application scenario;
FIG. 3 is a schematic illustration of an exemplary audiometric report provided by an embodiment of the present application;
FIG. 4 schematically illustrates audio data provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an audio playback scenario;
FIG. 6 is a schematic diagram of an audio playback scenario according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an audio playback system according to an embodiment of the present application;
FIG. 8 is a schematic diagram of another audio playback system according to an embodiment of the present application;
FIG. 9 is a schematic diagram of yet another audio playback system according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an audio playback scenario according to an embodiment of the present application;
FIG. 11 is a structural example diagram of an exemplary terminal device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of an electronic device;
FIG. 13 is a block diagram of the software architecture of an electronic device 1200 according to an embodiment of the present application;
FIG. 14 is a schematic diagram of an audio circuit of an electronic device according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of an audio circuit of an exemplary electronic device according to an embodiment of the present application;
FIG. 16 is a schematic structural diagram of an audio circuit of another exemplary electronic device according to an embodiment of the present application;
FIG. 17 is a flowchart illustrating a process of generating hearing test data according to an embodiment of the present application;
FIG. 18 is an interface diagram of an exemplary hearing health test provided by an embodiment of the present application;
FIG. 19 is a schematic diagram of an exemplary hearing test interface provided by an embodiment of the present application;
FIG. 20 is a schematic flowchart of a process of identifying an audio playback mode according to an embodiment of the present application;
FIG. 21 is a schematic diagram of a reference coordinate system of a mobile phone according to an embodiment of the present application;
FIG. 22 is a schematic diagram of posture changes of a mobile phone according to an embodiment of the present application;
FIG. 23 is a schematic diagram of a usage scenario of a mobile phone according to an embodiment of the present application;
FIG. 24 is a schematic diagram of an exemplary audio adjustment circuit provided by an embodiment of the present application;
FIG. 25 is a schematic structural diagram of a frequency discriminator according to an embodiment of the present application;
FIG. 26 is a schematic structural diagram of an equivalent circuit of a variable resistor according to an embodiment of the present application;
FIG. 27 is a schematic structural diagram of a variable resistor according to an embodiment of the present application;
FIG. 28 is a schematic structural diagram of another variable resistor according to an embodiment of the present application;
FIG. 29 is a schematic flowchart of an audio adjustment process according to an embodiment of the present application;
FIG. 30 is a schematic flowchart of another audio adjustment process according to an embodiment of the present application;
FIG. 31 is a schematic view of a hearing compensation curve provided by an embodiment of the present application;
FIG. 32 is a schematic block diagram of an apparatus according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone.
The terms "first", "second", and the like in the description and claims of the embodiments of the application are used to distinguish between different objects, not necessarily to describe a particular order of the objects. For example, a first target object and a second target object are different target objects, not target objects in a particular order.
In the embodiments of the application, words such as "exemplary" or "for example" are used to serve as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "for example" should not be construed as being preferred or advantageous over other embodiments or designs; rather, such words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more. For example, a plurality of processing units refers to two or more processing units, and a plurality of systems refers to two or more systems.
Terminal devices such as mobile phones often provide users with both visual and auditory experiences. When listening, a user perceives the auditory experience provided by the terminal device through the left and right ears, and perceives the visual experience through the eyes.
An application scenario is schematically shown in fig. 1. As shown in fig. 1, in a phone call scenario, when a user's ear is close to the phone 10, the phone 10 generates sound through a speaker and propagates the sound to the user's ear through air.
Another application scenario is schematically shown in fig. 2. As shown in fig. 2, in daily life, when a user wears the earphone 20, the mobile phone 10 (not shown in fig. 2) can play sounds to the user's left and right ears through the earphone 20.
Before introducing the technical solution provided by the embodiments of the application, the related technical content involved in the embodiments is described.
(1) Audiometric report, which is one way to measure the extent of hearing impairment of the human ear. Illustratively, fig. 3 is a schematic illustration of an exemplary audiometric report provided by an embodiment of the present application. Specifically, the audiometric report may include a right-ear audiogram 31 and a left-ear audiogram 32, where the abscissa of each audiogram represents frequency in kilohertz (kHz) and the ordinate represents hearing level (dB HL). The hearing level reflects a person's hearing ability and describes the extent of hearing loss. It should be noted that normal human hearing is in the range of 0-30 dB HL; the closer to 0 dB HL, the better the hearing. Here, 0 dB HL represents the reference hearing level, i.e., the weakest sound that a young person with normal hearing can hear at each frequency point. If the test result of a subject at a certain frequency point is 30 dB HL, it indicates that at that frequency point the subject cannot hear sounds quieter than 30 dB HL, which may also be described as the subject having a hearing impairment value of 30 dB.
For example, taking a right-ear hearing test as an example, a doctor may test the subject's right ear at different frequency points, obtaining the dots 311 in fig. 3, where each dot 311 represents the hearing impairment value of the subject at the frequency point (or frequency) corresponding to that dot; for example, the hearing impairment value of the subject's right ear at 1 kHz is 10 dB HL. After the hearing test results of the subject's right ear at a plurality of frequency points are collected, a hearing test curve 312 of the subject's right ear may be generated. It should be noted that the left-ear test process is similar to the right-ear test process and is not repeated here.
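Illustratively, the hearing test data behind such an audiogram can be represented as a mapping from test frequency points to hearing impairment values. The specific dB HL values below, and the use of linear interpolation between tested points, are illustrative assumptions only, not something the audiometric report itself prescribes:

```python
# Illustrative audiogram: test frequency point (Hz) -> impairment (dB HL).
RIGHT_EAR_DB_HL = {125: 26, 250: 20, 500: 15, 1000: 10, 2000: 10, 4000: 8, 8000: 5}

def impairment_at(freq_hz, audiogram):
    """Impairment at freq_hz, linearly interpolated between tested points."""
    points = sorted(audiogram.items())
    if freq_hz <= points[0][0]:
        return points[0][1]
    if freq_hz >= points[-1][0]:
        return points[-1][1]
    for (f0, v0), (f1, v1) in zip(points, points[1:]):
        if f0 <= freq_hz <= f1:
            t = (freq_hz - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

print(impairment_at(1000, RIGHT_EAR_DB_HL))  # 10.0
print(impairment_at(1500, RIGHT_EAR_DB_HL))  # 10.0
```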
(2) Audio data, which in the embodiments of the present application may refer to the data of the sound a user hears from the mobile phone. For example, the audio data may be the audio data of a song, of an audiobook, contained in a video, or of a voice call, which is not limited. Since sounds of different pitches correspond to different frequencies, one piece of audio data may accordingly include audio signals of different frequencies.
Illustratively, fig. 4 schematically shows audio data provided by an embodiment of the present application. As shown in fig. 4, the audio data may include audio signals at a plurality of frequency points, such as an audio signal at frequency 1 (125 Hz) (hereinafter, audio signal 1) and an audio signal at frequency 2 (1 kHz) (hereinafter, audio signal 2). Audio signals of different frequencies represent sounds of different pitches, and the amplitude A of the audio signal shown in fig. 4 represents the loudness of the audio signal (i.e., the volume of the sound), which may be measured in decibels (dB).
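Illustratively, whether audio data contains an audio signal at a given frequency point can be checked in software with the Goertzel algorithm, a standard single-frequency detector. Its use here is an illustrative software analogue only; the embodiments describe a hardware frequency discriminator, and the sample rate and thresholds below are assumptions:

```python
import math

def goertzel_power(samples, sample_rate, target_freq_hz):
    """Power of the target frequency component (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_freq_hz / sample_rate)  # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Synthesize audio data containing a 1 kHz and a 125 Hz component (fig. 4).
sample_rate = 8000
t = [i / sample_rate for i in range(800)]
signal = [math.sin(2 * math.pi * 1000 * ti) + 0.5 * math.sin(2 * math.pi * 125 * ti)
          for ti in t]

p_1000 = goertzel_power(signal, sample_rate, 1000)  # present -> large power
p_3000 = goertzel_power(signal, sample_rate, 3000)  # absent -> near zero
print(p_1000 > 100 * p_3000)  # True
```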
Having introduced the related technical content above, the audio playback scheme of the embodiments of the present application is described next.
During the development of terminal device technology, people have attached increasing importance to hearing health; accordingly, how to protect a user's hearing in audio playback scenarios of terminal devices has become an urgent problem to be solved.
The inventors found through research that various factors, such as pressure and noisy environments in work and life, increasing age, and bad ear habits, can all cause hearing impairment of the human ear. For example, with continued reference to fig. 3 above, the human ear has different degrees of hearing impairment at different frequency points; for example, the subject's left ear has a hearing impairment of 26 dB at 125 Hz and of 5 dB at 8 kHz.
Moreover, existing hearing impairment may be further aggravated during use of the terminal device. Specifically, fig. 5 shows a schematic diagram of an audio playback scenario. As shown in fig. 5 and indicated by the volume bar 11 of the electronic device, after audio signal 1, played at volume a1 (normal volume), enters the human ear, the ear's hearing loss at frequency 1 means the perceived volume of audio signal 1 is lower, so the ear hears audio signal 1 poorly or not at all. However, if the overall volume of the mobile phone is increased from volume a1 to volume a2, the ear can then receive audio signal 1 at normal volume, but audio signal 2, now received at excessive volume, further aggravates the ear's hearing impairment and harms the user's hearing health.
Since hearing impairment of the human ear (also called hearing loss) is irreversible, how to protect the human ear during audio playback is a technical problem to be solved.
In view of this, the application provides an audio playback scheme: when the audio data includes the first audio signal of the first frequency point, the loudness gain amount at the first frequency point can be determined according to the hearing test data of the user. Because the loudness gain amount corresponds to the hearing impairment value at the first frequency point, it can compensate for the user's hearing impairment at that frequency point. Thus, after the first audio signal is output at the increased loudness, the ear with hearing impairment can hear the first audio signal at its original volume without the overall output volume being raised, avoiding further hearing impairment at other frequencies caused by excessive volume and thereby protecting the user's hearing.
Fig. 6 is a schematic diagram of an audio playback scenario according to an embodiment of the present application. As shown in (1) in fig. 6, when the electronic device generates audio signal 1 and audio signal 2 at volume a1 (i.e., normal volume), the audio processing method provided by the embodiment of the application increases the volume of audio signal 1 and keeps the volume of audio signal 2 unchanged, so that after the signals pass through the ear's hearing impairment, the volumes of audio signal 1 and audio signal 2 heard by the ear are both normal.
And, as shown in (2) in fig. 6, when the electronic device generates audio signal 1 and audio signal 2 at volume a2 (i.e., a larger volume), the audio processing method provided by the embodiment of the application can attenuate the volume of audio signal 2 to volume a1 while keeping the volume a2 of audio signal 1 unchanged. Thus, after the signals pass through the ear's hearing impairment, the volumes of audio signal 1 and audio signal 2 received by the ear are both normal.
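Illustratively, the two scenarios of fig. 6 can be sketched with a simple additive decibel model. The volumes, the per-frequency impairment values, and the rule that the output loudness equals the normal loudness plus the impairment at that frequency are assumptions for illustration:

```python
# Assumed additive model: perceived loudness = output loudness - impairment.
NORMAL_DB = 60.0
IMPAIRMENT_DB = {125: 26.0, 1000: 0.0}   # illustrative per-frequency impairment

def target_output_db(freq_hz):
    """Output loudness so the ear perceives NORMAL_DB at this frequency."""
    return NORMAL_DB + IMPAIRMENT_DB.get(freq_hz, 0.0)

def perceived_db(output_db, freq_hz):
    return output_db - IMPAIRMENT_DB.get(freq_hz, 0.0)

# Scenario (1): boost audio signal 1 from 60 to 86 dB; leave signal 2 at 60 dB.
# Scenario (2): both generated at 86 dB; signal 2 is attenuated back to 60 dB.
# Either way, each signal ends up at its per-frequency target output loudness.
for f in (125, 1000):
    print(f, target_output_db(f), perceived_db(target_output_db(f), f))
# 125 86.0 60.0
# 1000 60.0 60.0
```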
The following describes specific technical schemes of the embodiments of the present application with reference to the accompanying drawings. In order to facilitate understanding of the technical solution provided by the embodiments of the present application, an audio playing system related to the embodiments of the present application is first described.
In one embodiment, fig. 7 is a schematic diagram of an audio playback system according to an embodiment of the present application. As shown in fig. 7, the audio playback system may include a terminal device 100 and a wireless earphone 210. The wireless earphone 210 may include a true wireless earphone 211 or a headset-type wireless earphone 212. Alternatively, the wireless earphone 210 may also be a neck-mounted wireless earphone (not shown). It should be noted that, in the implementation of the embodiments of the present application, the terminal device 100 may be any device with an audio processing function, such as a computer, a smartphone, a telephone, a cable television set-top box, or a digital subscriber line router. In practice, the number of devices connected to the terminal device may be one or more, which is not limited in the present application.
With continued reference to fig. 7, the true wireless earphone 211 may include a left earbud 211a and a right earbud 211b. During audio playback, the terminal device 100 may wirelessly transmit the left-ear audio signal to the left earbud 211a and the right-ear audio signal to the right earbud 211b.
And, the headset-type wireless earphone 212 may include a left earpiece 212a and a right earpiece 212b. It should be noted that the audio playback manner of the headset-type wireless earphone 212 is similar to that of the true wireless earphone 211 and is not repeated. For convenience of description, in the embodiments of the present application, the left earbud and the left earpiece are collectively referred to as the left earphone, and the right earbud and the right earpiece are collectively referred to as the right earphone.
In another embodiment, fig. 8 is a schematic diagram of another audio playing system according to an embodiment of the present application. As shown in fig. 8, the audio playback system may include a terminal device 100 and a wired earphone 203.
The terminal device 100 may include an audio transmission interface 101. The audio transmission interface 101 may be an interface conforming to the Universal Serial Bus (USB) standard, such as a Mini USB interface, a Micro USB interface, or a USB Type-C interface; in the embodiments of the present application, the audio transmission interface 101 is described by taking USB Type-C as an example.
And, the wired earphone 203 may include a left earphone 203a, a right earphone 203b, and an audio transmission connector 203c. The audio transmission connector 203c is used to connect to the audio transmission interface 101 of the terminal device 100. For example, the audio transmission connector 203c may be a USB Type-C connector.
During audio playing, the terminal device may transmit the audio signal of the left ear and the audio signal of the right ear to the left earphone 203a and the right earphone 203b respectively in a wired transmission manner.
In yet another embodiment, fig. 9 is a schematic diagram of yet another audio playback system according to an embodiment of the present application. As shown in fig. 9, the audio playback system may include a terminal device 100 and an external connection device 300. The external connection device 300 may be a device connected to the terminal device in a wireless or wired manner. Illustratively, in the embodiments of the present application, the external connection device 300 may be an electronic device with an audio playback function, such as a smart speaker 301 or a smart watch 302. It should be noted that, for the audio playback process of the external connection device 300, reference may be made to the related description above, which is not repeated.
As shown in the above embodiments, the terminal device may perform audio playback through an external device. Fig. 10 shows a schematic diagram of an audio playback scenario provided in an embodiment of the present application; as shown in fig. 10, the terminal device may also perform audio playback itself, which is described below with reference to fig. 11.
In a specific example, fig. 11 shows a structural example diagram of an exemplary terminal device provided in an embodiment of the present application. As shown in fig. 11, the terminal device 100 may include an earpiece 102, a screen sounding device 103, a screen sounding region 104, a volume up key 105, and a volume down key 106.
Wherein the earpiece 102 may be located at an upper edge of the screen of the terminal device 100, and the screen sounding device 103 may be located below the screen of the terminal device 100 and close to the earpiece 102. Specifically, the screen sounding device 103 may vibrate the screen under the control of the current signal, thereby outputting an audio signal through the screen sounding region 104. Illustratively, the screen sounding device may be a vibration source such as a piezoelectric ceramic, a motor vibrator, an exciter, etc., which is not particularly limited in the embodiment of the present application.
In addition, it should be noted that, in the embodiment of the present application, the volume up key 105 and the volume down key 106 may control the increase and decrease of the loudness of the audio signal, which may be disposed under the outer frame or screen of the terminal device 100, without specific limitation.
Having briefly described the audio playback system of the embodiments of the present application, the structure of the terminal device is described next.
Fig. 12 shows a schematic structural diagram of an electronic device 1200. It should be understood that the electronic device 1200 shown in fig. 12 is only one example of an electronic device, and that the electronic device 1200 may be implemented as the terminal device 100 described above, and that the electronic device 1200 may have more or fewer components than shown in the drawings, may combine two or more components, or may have different component configurations. The various components shown in fig. 12 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 1200 may include: processor 1210, external memory interface 1220, internal memory 1221, universal serial bus (Universal Serial Bus, USB) interface 1230, charge management module 1240, power management module 1241, battery 1242, antenna 1, antenna 2, mobile communication module 1250, wireless communication module 1260, audio module 1270, speaker 1270A, receiver 1270B, microphone 1270C, headset interface 1270D, sensor module 1280, keys 1290, motor 1291, indicator 1292, camera 1293, display screen 1294, and subscriber identity module (Subscriber Identification Module, SIM) card interface 1295, among others. The sensor module 1280 may include a pressure sensor 1280A, a gyroscope sensor 1280B, an air pressure sensor 1280C, a magnetic sensor 1280D, an acceleration sensor 1280E, a distance sensor 1280F, a proximity sensor 1280G, a fingerprint sensor 1280H, a temperature sensor 1280J, a touch sensor 1280K, an ambient light sensor 1280L, a bone conduction sensor 1280M, and the like.
Processor 1210 may include one or more processing units. For example, processor 1210 may include an application processor (Application Processor, AP), a modem processor, a graphics processor (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a memory, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor, and/or a neural-network processor (Neural-network Processing Unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors. The controller may be the neural hub and command center of the electronic device 1200. The controller can generate operation control signals according to the instruction operation codes and timing signals to control instruction fetching and instruction execution.
A memory may also be provided in processor 1210 for storing instructions and data. In some embodiments, the memory in processor 1210 is a cache memory. The USB interface 1230 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 1230 may be used to connect a charger to charge the electronic device 1200, or may be used to transfer data between the electronic device 1200 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
The charge management module 1240 is configured to receive charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 1240 may receive a charging input of a wired charger via the USB interface 1230. In some wireless charging embodiments, the charge management module 1240 may receive wireless charge input through a wireless charging coil of the electronic device 1200. The charging management module 1240 may also provide power to the electronic device through the power management module 1241 while charging the battery 1242.
The power management module 1241 is used to connect the battery 1242, the charge management module 1240 and the processor 1210. The power management module 1241 receives input from the battery 1242 and/or the charge management module 1240 to power the processor 1210, the internal memory 1221, the external memory, the display screen 1294, the camera 1293, the wireless communication module 1260, and the like. The wireless communication function of the electronic device 1200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 1250, the wireless communication module 1260, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 1200 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 1250 may provide a solution for wireless communication, including 2G/3G/4G/5G, as applied to the electronic device 1200. The mobile communication module 1250 may include at least one filter, switch, power amplifier, low noise amplifier (Low Noise Amplifier, LNA), or the like.
The wireless communication module 1260 may provide solutions for wireless communication applied to the electronic device 1200, including wireless local area networks (Wireless Local Area Networks, WLAN) (e.g., wireless fidelity (Wireless Fidelity, Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field communication (Near Field Communication, NFC), infrared (IR), etc.
In some embodiments, antenna 1 and mobile communication module 1250 of electronic device 1200 are coupled, and antenna 2 and wireless communication module 1260 are coupled, such that electronic device 1200 may communicate with networks and other devices via wireless communication techniques.
The electronic device 1200 implements display functions through a GPU, a display screen 1294, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 1294 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 1210 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 1294 is used to display images, videos, or the like. The display screen 1294 includes a display panel. The display panel may employ a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like. In some embodiments, the electronic device 1200 may include 1 or M display screens 1294, M being a positive integer greater than 1.
The electronic device 1200 may implement shooting functions through an ISP, a camera 1293, a video codec, a GPU, a display screen 1294, an application processor, and the like.
Camera 1293 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (Charge Coupled Device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 1200 may include 1 or N cameras 1293, N being a positive integer greater than 1.
The external memory interface 1220 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 1200. The external memory card communicates with the processor 1210 through an external memory interface 1220 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 1221 may be used to store computer-executable program code including instructions. The processor 1210 executes various functional applications of the electronic device 1200 and data processing by executing instructions stored in the internal memory 1221. The internal memory 1221 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 1200 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 1221 may include a high-speed random access memory, and may also include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (Universal Flash Storage, UFS), and the like.
The electronic device 1200 may implement audio functions, such as music playing and recording, through an audio module 1270, a speaker 1270A, a receiver 1270B, a microphone 1270C, an earphone interface 1270D, an application processor, and the like.
The audio module 1270 is used to convert digital audio information to analog audio signal output and also to convert analog audio input to digital audio signals. The audio module 1270 may also be used to encode and decode audio signals. In some embodiments, the audio module 1270 may be provided in the processor 1210 or some functional modules of the audio module 1270 may be provided in the processor 1210. In the embodiment of the present application, the audio module 1270 may gain the audio signal of the hearing-impaired frequency point, so as to ensure that the user can normally hear the sound of the hearing-impaired frequency point.
Speaker 1270A, also known as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 1200 may listen to music, or to hands-free conversations, through the speaker 1270A.
A receiver 1270B, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When the electronic device 1200 is answering a telephone call or voice message, voice can be received by placing the receiver 1270B close to the human ear.
The earphone interface 1270D is used to connect a wired earphone. The earphone interface 1270D may be a USB interface 1230, a 3.5 mm open mobile terminal platform (Open Mobile Terminal Platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The gyro sensor 1280B may be used to determine a motion gesture of the electronic device 1200. In some embodiments, the angular velocity of the electronic device 1200 about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 1280B. The gyro sensor 1280B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 1280B detects the shake angle of the electronic device 1200, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 1200 through reverse motion, thereby realizing anti-shake. The gyro sensor 1280B may also be used for navigation and somatosensory game scenes. In an embodiment of the present application, the gyro sensor 1280B may detect the tilt angle of the electronic device 1200 to determine whether the user is listening to the sound signal with the left ear or the right ear.
Touch sensor 1280K, also referred to as a "touch panel". The touch sensor 1280K may be disposed on the display 1294, and the touch sensor 1280K and the display 1294 form a touch screen, which is also referred to as a "touch screen". The touch sensor 1280K is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display screen 1294. In other embodiments, the touch sensor 1280K may also be disposed on a surface of the electronic device 1200 at a location different from that of the display 1294. In the embodiment of the present application, the electronic device 1200 may detect user operations such as clicking, sliding, etc. of the user on the display screen 1294 through the touch detection capability provided by the touch sensor 1280K, so as to control the starting and closing of the application program and the device, and implement jump switching between different user interfaces.
The keys 1290 include a power-on key, a volume key, and the like. The keys 1290 may be mechanical keys or touch keys. The electronic device 1200 may receive key inputs and generate key signal inputs related to user settings and function controls of the electronic device 1200.
Motor 1291 may generate a vibration alert. The motor 1291 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The indicator 1292 may be an indicator light, which may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, or the like.
After the hardware structure of the electronic device is introduced through fig. 12, the description of the software structure of the electronic device will be continued with fig. 13.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 13 is a software architecture block diagram of an electronic device 1200 according to an embodiment of the application.
The layered architecture of the electronic device 1200 divides the software into several layers, each with distinct roles and branches. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, from top to bottom, an application layer, a framework layer, a hardware abstraction layer (Hardware Abstraction Layer, HAL), and a kernel layer, respectively.
The application layer may include a series of application packages.
As shown in fig. 13, the application package may include an application program having an audio playing function such as a call, music, video, etc.
The application framework layer provides an application programming interface (Application Programming Interface, API) and programming framework for the application of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 13, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The hardware abstraction layer is an interface layer between the application framework layer and the kernel layer, and is used for providing a virtual hardware platform for the operating system. The hardware abstraction layer may include a camera HAL, an audio HAL, a WiFi HAL, and the like. In an embodiment of the present application, the audio HAL may include a hearing-impaired frequency point determination module, a loudness gain value determination module, an audio signal adjustment module, etc. Specifically, the hearing-impaired frequency point determination module may detect whether the audio to be output includes an audio signal of a hearing-impaired frequency point, the loudness gain value determination module is configured to determine a loudness gain value of the hearing-impaired frequency point, and the audio signal adjustment module is configured to increase the loudness value of the hearing-impaired frequency point from the first loudness value to a second loudness value by the loudness gain value, and then output the audio signal of the hearing-impaired frequency point at the second loudness value.
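As a minimal illustration (not the patent's implementation), the cooperation of the three audio HAL modules can be sketched in Python as follows; the per-bin frame representation, the quarter-bandwidth window around each frequency point, and the impairment map are all assumptions of the sketch:

```python
def adjust_impaired_bands(spectrum, freqs, impairment_db):
    """Sketch of the audio HAL modules: detect hearing-impaired frequency
    points in a frame, derive a loudness gain, and apply it.

    spectrum      -- per-bin magnitudes of one audio frame
    freqs         -- frequency (Hz) of each bin
    impairment_db -- hypothetical map from a tested frequency point (Hz)
                     to the user's hearing impairment value in dB, taken
                     from the hearing test report
    """
    out = list(spectrum)
    for fp, loss_db in impairment_db.items():
        if loss_db <= 0:
            continue                        # no impairment at this point
        gain = 10 ** (loss_db / 20)         # dB -> linear amplitude gain
        for k, f in enumerate(freqs):
            # bins near the impaired frequency point (assumed bandwidth)
            if abs(f - fp) < fp * 0.25:
                out[k] *= gain              # first -> second loudness value
    return out

# a 20 dB impairment at 8 kHz boosts only the 8 kHz bin by a factor of 10
print(adjust_impaired_bands([1.0, 1.0, 1.0], [1000, 2000, 8000], {8000: 20}))
```

In a real pipeline the gain would be applied per frame inside the audio HAL before the signal reaches the CODEC; the sketch only shows the frequency-point selection and dB-to-linear conversion.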
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver.
It should be understood that the layers and components contained in the layers in the software structure shown in fig. 13 do not constitute a specific limitation on the electronic device 1200. In other embodiments of the application, the electronic device 1200 may include more or fewer layers than shown, and may include more or fewer components per layer, as the application is not limited.
Having initially introduced the hardware structure and software structure of the electronic device 1200, the following continues to describe audio circuits in the electronic device.
Fig. 14 is a schematic diagram of an audio circuit of an electronic device according to an embodiment of the present application. As shown in fig. 14, the audio circuit 1300 of the electronic device may include a system on chip (System On Chip, SOC) 1311, an audio codec (CODer-DECoder, CODEC) 1312, a power amplifier (Power Amplifier, PA) 1313, a speaker 1314, an earpiece 1315, an audio transmission interface 1316, and a bluetooth module 1317.
In one audio playback mode, the SOC 1311 can process the audio signal and then send the processed audio signal to the CODEC 1312 for decoding and digital-to-analog conversion. The CODEC 1312 may then transmit the converted audio signal in analog form to the PA 1313, and after power amplification by the PA 1313, the analog audio signal may drive the speaker 1314 or the earpiece 1315 to vibrate and generate a sound recognizable to the human ear.
In another audio playing mode, the PA 1313 may be connected, through an audio transmission interface 1316, to an external playback device connected in a wired manner, such as a wired earphone 1410. The wired earphone 1410 is connected to the electronic device in a wired manner through an audio transmission connector 1411, and after receiving the audio signal, the wired earphone 1410 may play it through a speaker 1412. Alternatively, the wired earphone 1410 may include its own PA, in which case the audio transmission interface 1316 in the audio circuit 1300 may be directly connected to the CODEC 1312, which is not a limitation of the present application.
In yet another audio playback mode, the CODEC1312 can be connected with an external playback device, such as a wireless headset 1420, that is wirelessly connected via a bluetooth module 1317. After the wireless headset 1420 receives the audio signal through the bluetooth module 1421, it may perform power amplification through the PA1422 and then perform audio playback through the speaker 1423.
In the embodiment of the present application, where the CODEC1312 does not have an analog-to-digital conversion function, the CODEC1312 may also be connected to the PA1313 through a digital-to-analog converter (Digital to Analog Converter, DAC) to convert the audio signal in digital form to the audio signal in analog form through the DAC.
And, in the embodiment of the present application, the PA1313 may be a normal PA or Smart PA, which is not limited thereto. And, the PA1313 may perform power amplification on an audio signal in an analog form or a digital form, which is not particularly limited.
And, in an embodiment of the present application, the SOC 1311 may have the function of the CODEC 1312, and accordingly, the SOC 1311 may be directly connected to the PA 1313. Fig. 15 is a schematic structural diagram of an audio circuit of an exemplary electronic device according to an embodiment of the present application. As shown in fig. 15, the SOC 1311 may be connected to an earpiece (or may be simply referred to as RCV) 1315 or an upper speaker (or may be simply referred to as upper BOX) 1314a through a first PA (such as an upper SmartPA) 1313a, and may also be connected to a lower speaker (or may be simply referred to as lower BOX) 1314b through a second PA (such as a lower SmartPA) 1313b. And, with continued reference to fig. 15, the SOC 1311 may also be connected to a digital headset 1318. In another embodiment, fig. 16 illustrates a schematic structural diagram of an audio circuit of another exemplary electronic device according to an embodiment of the present application. As shown in fig. 16, the SOC 1311 may be connected to an audio transmission interface (such as a Type-C interface) 1316, or the SOC 1311 may be connected to a speaker 1314c through a third PA 1313c. And, the CODEC 1312 can also be connected with the audio transmission interface 1316 through a switch 1319.
After the above audio circuit is described, an audio playing process provided by the embodiment of the present application is described next. The audio playing process in the embodiment of the application can be divided into three parts. The first part is a process of generating hearing test data, the second part is a process of identifying an audio playing mode, and the third part is a process of adjusting an audio signal. Next, the first part will be described with reference to fig. 17 to 19.
Fig. 17 is a flowchart illustrating a process of generating hearing test data according to an embodiment of the present application. As shown in fig. 17, taking a terminal device as an example of a mobile phone, the process of generating hearing test data may include steps S1701 to S1712.
S1701, responding to the operation of the hearing health test of the user, and starting the hearing health test by the mobile phone.
Illustratively, fig. 18 shows an interface diagram of an exemplary hearing health test provided by an embodiment of the present application. Fig. 18 (1) is an interface schematic diagram of a main interface of the mobile phone. The handset may display the setup interface shown in (2) in fig. 18 in response to a user's trigger operation for the "setup" control on the main interface. The handset may then display the functional interface of hearing and vibration shown in (3) in fig. 18 in response to a user's trigger operation for the "hearing and vibration" control in the settings menu. Next, the mobile phone may display an ear protection sound effect setting interface shown in (4) in fig. 18 in response to a trigger operation by the user on "ear protection sound effect" in the functional interface of hearing and vibration. And, the mobile phone may display the hearing health test interface shown in (5) in fig. 18 in response to a trigger operation by the user on the hearing health test control in the ear protection sound effect setting interface.
And, as shown in (5) in fig. 18, the hearing health test interface may display a prompt pop-up window 1802, where the prompt pop-up window 1802 is used to prompt the user whether to start the hearing health test, and the prompt window 1802 further includes two controls of yes and no, and when the user triggers the yes control, the hearing health test is started. Optionally, the mobile phone may automatically start the hearing health test after the user triggers the hearing health test control in the ear protection sound effect setting interface shown in (4) in fig. 18, and the specific steps of the hearing health test are not limited in the present application.
Alternatively, after the user clicks the "yes" control in the hearing health test interface shown in (5) in fig. 18, the mobile phone may display the ear-under-test selection interface shown in (6) in fig. 18, and the user may select to test the left ear or the right ear in the selection popup 1803 of the interface. Alternatively, the mobile phone may skip the ear-under-test selection interface shown in (6) in fig. 18 and directly select the left ear or the right ear by default to start the test, which is not limited by the present application.
It should be noted that, in the embodiment of the present application, the hearing health test may be performed in other manners, and the specific functional interface and triggering manner thereof are not limited.
And, triggering operations in embodiments of the present application may include, but are not limited to: single click, double click, slide, etc., without limitation.
S1702, the mobile phone performs the test of the ith test frequency point. Wherein i is an integer greater than or equal to 1, and the initial value of i is 1, i.e. the test is started from the 1 st test frequency point.
In S1702, the mobile phone may generate a test audio signal of an ith test frequency point, where an initial loudness value of the test audio signal may be a loudness that can be heard by an ear with a hearing impairment value of 0dB for the ith test frequency point.
The specific audio content of the test audio signal may be set according to the actual test scene and the specific test requirement, which is not limited.
S1703, the mobile phone sends a test audio signal of the ith test frequency point to the tested earphone. The earphone to be tested can be an earphone corresponding to the ear to be tested. For example, if the measured ear is a left ear, the measured earphone is a left earphone, and if the measured ear is a right ear, the measured earphone is a right earphone.
Illustratively, FIG. 19 shows a schematic diagram of an exemplary hearing test interface provided by embodiments of the present application. As shown in fig. 19 (1), the handset may send a test audio signal to the earphone under test.
S1704, the tested earphone plays the test audio signal of the ith test frequency point.
Illustratively, with continued reference to (1) in fig. 19, after receiving the test audio signal, the headphone under test may convert it to a corresponding sound signal, and determine a current play volume corresponding to the current loudness value of the test audio signal, and play at the current play volume.
S1705, the mobile phone receives a feedback result of the hearing test input by the user.
Illustratively, the user may input a feedback result of the hearing test at a feedback interface as shown in (2) of fig. 19, and the mobile phone may receive the feedback result input by the user after detecting the input operation.
S1706, the mobile phone determines whether the tested ear can hear the test audio signal of the ith test frequency point or not based on the feedback result. If the tested ear cannot hear the test audio signal, the process continues to step S1707. If the tested ear can hear the test audio signal, the process continues to step S1708.
Illustratively, with continued reference to the feedback interface shown in fig. 19 (2), if the user clicks the "no" option on the interface, the handset may determine that the user cannot hear the test audio signal of the current loudness. If the user clicks the "yes" option, the handset may determine that the user heard the test audio signal of the current loudness.
It should be noted that (2) in fig. 19 only illustrates a feedback interface, and in a practical process, the feedback interface may further include several hearing content options, for example, the feedback interface may include content "select the content to be heard among the following several options", and determine whether the user's tested ear can hear the test audio signal of the ith test frequency point according to whether the user selects the correct playing content, which is not limited by the embodiment of the present application.
S1707, the mobile phone increases the loudness of the test audio signal of the ith test frequency point. For example, the preset loudness value is added to the current loudness, and the process returns to step S1703 to test whether the user can hear the test audio signal of the new loudness value. For example, the first 10 dB increase is applied to the initial loudness value x, and each subsequent 10 dB increase is applied to the previous loudness value, so that the loudness value of the j-th test is x + 10 × (j − 1) dB, where j is an integer greater than or equal to 1.
S1708, the mobile phone judges whether the hearing test of the tested ear is finished. If the determination result is no, the step S1709 is continued, and if the determination result is yes, the step S1710 is continued.
In S1708, it may be determined whether the ith test frequency point is the last test frequency point, and if so, it is determined that the hearing test of the ear under test is completed. For example, if 8 kHz is the last test frequency point, and the i-th test frequency point is 8000 Hz, it is determined that the hearing test of the ear under test is completed.
S1709, the mobile phone makes i=i+1, and returns to step S1702 to enable the mobile phone to proceed with the hearing test of the next test frequency point.
In S1709, the mobile phone may further record a hearing test result of the i-th test frequency point. For example, if the user hears the test audio data in the j-th hearing test for the i-th frequency point, the hearing test result of the i-th test frequency point may be recorded as 10 × (j − 1) dB, that is, the hearing impairment value of the tested ear of the user at the i-th test frequency point is 10 × (j − 1) dB, where 10 is the preset loudness value increased in S1707. In one example, if the test audio data is heard at the 1st playback, the hearing impairment value is 0 dB; if it is heard at the 2nd playback, the hearing impairment value is 10 dB, and so on.
S1710, the mobile phone judges whether the hearing test is completed on both ears of the testee. If the determination result is no, the process continues to S1711, and if the determination result is yes, the process continues to S1712.
Alternatively, if the user only needs to perform the hearing test of a single ear, the hearing test report may be generated directly based on the hearing test data of the single ear without executing steps S1710, S1711.
S1711, the mobile phone takes the other ear as the new ear to be tested, and let i=1. That is, the mobile phone performs a hearing test again from the 1 st test frequency point on the other ear of the subject.
For example, if the hearing test of the left ear is completed and the hearing test of the right ear is not completed, the mobile phone may treat the right ear as a new tested ear in S1711 to continue the hearing test of the right ear.
S1712, the mobile phone generates a hearing test report based on the binaural hearing test data.
In S1712, the handset may generate a hearing test report as shown in fig. 3 based on the binaural hearing test data.
It should be noted that, S1701-S1712 illustrate one way of obtaining the hearing test data/hearing test report according to the embodiment of the present application, and in another way, the mobile phone may obtain the hearing test data/hearing test report of the user by performing a photographing scan on the hearing test report of the user. It should be noted that, in the embodiment of the present application, the terminal device such as the mobile phone may also obtain the hearing test data/hearing test report of the user through other manners, such as a manner of manual input by the user, which is not limited in particular.
After the process of generating the hearing test data of the first part is described through the above, the process of recognizing the audio play mode of the second part will be described next with reference to fig. 20 to 23.
Fig. 20 is a flowchart illustrating a process of identifying an audio playing mode according to an embodiment of the present application. As shown in fig. 20, taking a terminal device as an example of a mobile phone, the process of identifying the audio playing mode may be implemented by an SOC and a sensor in the mobile phone, and specifically may include steps S2001 to S2008.
S2001, the SOC acquires the audio playing mode of the mobile phone. Wherein, the audio playing mode may include: an earpiece playback mode, a speaker playback mode (or may be referred to as a play-out mode), or a headset playback mode.
In some embodiments, the SOC may determine the audio playback mode in response to a user selection operation on the audio playback interface, such as a selection operation of an earpiece mode, a speaker mode, a headset mode, and the like. For another example, the SOC may determine the audio playing mode according to whether the handset is connected to a headset or not.
In some embodiments, the audio playback modes may also include a screen playback mode, an external playback mode, and the like. The external playback mode may refer to a mode in which audio is played by an external playback device other than a headset, that is, played through a speaker of the external playback device.
S2002, the posture sensor of the mobile phone sends the posture information of the mobile phone. The gesture sensor may be a sensor such as a gravity sensor or a gyroscope, which can collect gesture data of the mobile phone.
In some implementations, the pose information of the handset may be the rotational angle of the handset about the x, y, z axes of the reference coordinate system.
Fig. 21 is a schematic diagram of a reference coordinate system of a mobile phone according to an embodiment of the present application. As shown in fig. 21, the reference coordinate system of the handset may include an x-axis, a y-axis, and a z-axis. Wherein the x-axis is parallel to the width direction of the terminal device (such as the short side of the mobile phone), the y-axis is parallel to the length direction of the terminal device (such as the long side of the mobile phone), and the z-axis is parallel to the height direction of the terminal device (such as the thickness direction of the mobile phone).
Fig. 22 shows a schematic diagram of a change in posture of a mobile phone according to an embodiment of the present application. When the rotation angle of the mobile phone about the y-axis is 0°, the mobile phone is in the posture shown in (1) in fig. 22. If the user rotates the mobile phone clockwise about the y-axis, the mobile phone is in the posture shown in (2) in fig. 22, and the rotation angle of the mobile phone about the y-axis is a positive value. And if the user rotates the mobile phone counterclockwise about the y-axis, the mobile phone is in the posture shown in (3) in fig. 22, and the rotation angle of the mobile phone about the y-axis is a negative value.
It should be noted that, in the embodiment of the present application, the gesture information of the mobile phone may also be determined by the position information of the mobile phone in different directions of other coordinate systems, which is not limited in particular.
S2003, if the audio playing mode of the mobile phone is the earpiece playing mode, the pose information of the mobile phone is obtained. The pose information may be as described in S2002. Optionally, the pose information may include tilt angles of the phone about the x-axis, y-axis, and z-axis, or may include only the tilt angle of the phone about the y-axis.
S2004, according to the pose information of the mobile phone, the audio receiving ear is judged to be the left ear or the right ear. If the determination result is the left ear, the process continues to S2005, and if the determination result is the right ear, the process continues to S2006.
Fig. 23 is a schematic diagram illustrating a usage scenario of a mobile phone according to an embodiment of the present application. As shown in fig. 23, if the mobile phone is positioned in front of the user, the rotation angle of the mobile phone about the y-axis is 0 ° (it should be noted that, when the mobile phone is positioned in front of the user, the angle about the y-axis may be defined as other values according to the actual situation and specific requirements, which is not limited in particular).
And when the mobile phone is positioned on the right side of the user and the user receives the audio signal by using the right ear, the mobile phone rotates 90 degrees clockwise around the y axis, and the rotation angle of the mobile phone around the y axis is +90 degrees. Accordingly, in S2004, when the rotation angle of the mobile phone about the y-axis is a positive value (or when the difference from +90° is within a certain range), the SOC may determine that the audio receiving ear is the right ear.
And when the mobile phone is positioned at the left side of the user, that is, the user receives the audio signal with the left ear, the mobile phone is rotated 90° counterclockwise about the y-axis, and the rotation angle of the mobile phone about the y-axis is −90°. Accordingly, the SOC may determine that the audio receiving ear is the left ear when the rotation angle of the mobile phone about the y-axis is negative (or when the difference from −90° is within a certain range).
It should be noted that, in the embodiment of the present application, the SOC may also determine whether the audio receiving ear is the left ear or the right ear in other manners. For example, when the user uses the left ear or the right ear, the shielding of light is different, or the pressure on different positions of the screen is different; accordingly, the SOC may determine the audio receiving ear through the sensing signal of a light sensor, a pressure sensor, or the like. Or the mobile phone may also determine the audio receiving ear according to the usage habits of the user, and the like, which is not limited.
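Under the ±90° convention of fig. 23, the decision of S2004 can be sketched as follows; the tolerance window is an assumed value for the "within a certain range" check:

```python
def audio_receiving_ear(y_rotation_deg, tolerance_deg=45):
    """Map the phone's rotation about the y-axis to the receiving ear:
    near +90 deg -> right ear, near -90 deg -> left ear (S2004)."""
    if abs(y_rotation_deg - 90) <= tolerance_deg:
        return "right"
    if abs(y_rotation_deg + 90) <= tolerance_deg:
        return "left"
    return "unknown"                      # e.g., phone still facing the user

print(audio_receiving_ear(85), audio_receiving_ear(-95), audio_receiving_ear(0))
```

The "unknown" case would fall back to whatever default the implementation chooses (for example, the user's usage habit mentioned above).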
And S2005, if the audio receiving ear is the left ear, audio playing is performed according to the hearing test data of the left ear.
And S2006, if the audio receiving ear is the right ear, performing audio playing according to the hearing test data of the right ear.
S2007, if the audio playing mode of the mobile phone is the speaker playing mode, audio playing is performed according to the hearing test data of both ears. Alternatively, in the speaker playing mode, the volume of the audio signal may not be adjusted.
S2008, if the audio playing mode of the mobile phone is the earphone playing mode, the audio playing of the left earphone is performed according to the hearing test data of the left ear, and the audio playing of the right earphone is performed according to the hearing test data of the right ear.
The specific content of S2005-S2008 will be described in the third section.
In some embodiments, if the audio playing mode of the mobile phone is the screen playing mode, the processing may be performed according to the earpiece playing mode.
In some embodiments, if the audio playing mode of the mobile phone is the external playing mode, the processing may be performed according to the speaker playing mode, which will not be described herein.
Through S2001 to S2008, different audio processing can be performed for different audio playing modes, and hearing protection can be performed for the left ear and the right ear respectively, which can further improve the accuracy of hearing protection. In addition, in the embodiment of the present application, audio playing may also be performed directly using the binaural hearing test data without distinguishing the audio playing modes, which is not particularly limited.
Having introduced the identification process of the audio playing mode in the second section, the adjustment process of the audio signal in the third section is described next with reference to fig. 24 to 31.
Embodiments of the present application provide an audio processing scheme, which is described next with reference to fig. 24-29.
Fig. 24 is a schematic diagram of an exemplary audio adjusting circuit according to an embodiment of the present application. As shown in fig. 24, the audio conditioning circuit may include a frequency discriminator 1321, a processor 1322, and a variable resistor 1323.
One end of the discriminator 1321 is connected to one end of the PA1313, and the other end of the discriminator 1321 is connected to the processor 1322. The frequency discriminator 1321 may be used to acquire audio data output by the PA1313 and to detect the frequency of the audio signal in the audio data. In one embodiment, fig. 25 is a schematic structural diagram of a frequency discriminator according to the embodiment of the application. As shown in fig. 25, the frequency discriminator may include a first input terminal for acquiring a reference audio signal Fref of a hearing-impaired frequency point, a second input terminal for acquiring an audio signal Fin of audio data, and an output terminal. When the frequency value of the reference audio signal Fref of the hearing impaired frequency point is equal to the frequency value of the current audio signal Fin of the audio data, the frequency discriminator outputs a first indication signal INT1, and the first indication signal is used for indicating that the frequency point corresponding to the current audio signal is the hearing impaired frequency point. In another embodiment, the frequency discriminator may also detect the frequency value of the current audio signal and send the detected frequency value to the processor 1322.
The processor 1322 has one end connected to the frequency discriminator 1321 and the other end connected to the variable resistor 1323, and may be a processor having a data processing function such as an SOC or a CPU, which is not particularly limited. In an embodiment of the present application, processor 1322 may control the resistance change of variable resistor 1323.
One end of the variable resistor 1323 is connected to the PA1313, and the other end is connected to the earpiece 1315. The variable resistor may be implemented as a circuit. In one embodiment, fig. 26 is a schematic structural diagram of an equivalent circuit of a variable resistor according to an embodiment of the present application. As shown in fig. 26, in the case where the earpiece 1315 has an earpiece impedance R0, one end of the variable resistor Rx is connected to the PA1313, and the other end of the variable resistor Rx is connected to the earpiece impedance R0, forming a voltage dividing structure composed of the variable resistor Rx and the earpiece impedance R0. Illustratively, the earpiece impedance may be a fixed value, such as 32 ohms.
In one example, fig. 27 shows a schematic structural diagram of a variable resistor according to an embodiment of the present application. As shown in fig. 27, the variable resistor 1323 may include a first MOS transistor MOS1 to a third MOS transistor MOS3, a first resistor R1 to a fourth resistor R4, a first capacitor C1, and a second capacitor C2.
The first connection end of the first MOS transistor MOS1 is connected to one end of the PA1313, the second connection end of the first MOS transistor MOS1 is connected to the second connection end of the second MOS transistor MOS2, and the first connection end of the second MOS transistor MOS2 is connected to the earpiece 1315. The control end of the first MOS transistor MOS1 and the control end of the second MOS transistor MOS2 are connected to the first connection end of the third MOS transistor MOS3.
One end of the first resistor R1 is connected to the first connection end of the first MOS transistor MOS1, and the other end of the first resistor R1 is connected to the first connection end of the second MOS transistor MOS2.
One end of the second resistor R2 is connected to the second connection end of the first MOS transistor MOS1 and the second connection end of the second MOS transistor MOS2, respectively, and the other end of the second resistor R2 is connected to the first connection end of the third MOS transistor MOS3.
One end of the third resistor R3 is connected to the control end of the third MOS transistor MOS3, and the other end of the third resistor R3 is grounded to GND.
One end of the fourth resistor R4 is connected to the control end of the third MOS transistor MOS3, and the other end of the fourth resistor R4 is used to receive the variable resistance control signal PWM. The variable resistance control signal PWM may be generated directly by the processor 1322, or generated by the processor 1322 through a gate driver, which is not particularly limited.
One end of the first capacitor C1 is connected to the first connection end of the third MOS transistor MOS3, and the other end of the first capacitor C1 is grounded to GND.
One end of the second capacitor C2 is connected to the control end of the third MOS transistor MOS3, and the other end of the second capacitor C2 is grounded to GND.
Through the above circuit, the resistance between the PA1313 and the earpiece 1315 can be controlled by using the variable resistance control signal PWM to drive the first MOS transistor MOS1 and the second MOS transistor MOS2 on or off. Specifically, different duty cycles of the variable resistance control signal PWM produce different output resistance values, so that by adjusting the duty cycle of the variable resistance control signal PWM, the resistance of the variable resistor can be controlled, and different gains can be applied to different audio signals output by the PA1313.
In another example, fig. 28 shows a schematic structural diagram of another variable resistor according to an embodiment of the present application. As shown in fig. 28, the variable resistor 1323 may include p branches, each including a control switch and a selectable resistor, where p is an integer greater than or equal to 2. For example, the first branch includes a first selectable resistor R51 and a first control switch S1, the second branch includes a second selectable resistor R52 and a second control switch S2, …, and the p-th branch includes a p-th selectable resistor R5p and a p-th control switch Sp. The variable resistor can be adjusted to different resistance values according to different on-off schemes of the first control switch S1 to the p-th control switch Sp.
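A minimal sketch of choosing an on/off scheme for the p switched branches follows. It assumes the selectable resistors are connected in parallel when their switches are closed; the branch values and the target resistance are illustrative, not taken from the text:

```python
from itertools import combinations

def parallel_resistance(resistors):
    """Equivalent resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

def best_switch_scheme(branch_resistors, target_ohms):
    """Return (closed-switch indices, resulting resistance) closest to the target.
    Exhaustive search over all non-empty on/off schemes of the p branches."""
    best = None
    for k in range(1, len(branch_resistors) + 1):
        for subset in combinations(range(len(branch_resistors)), k):
            r = parallel_resistance([branch_resistors[i] for i in subset])
            if best is None or abs(r - target_ohms) < abs(best[1] - target_ohms):
                best = (subset, r)
    return best

# Illustrative binary-weighted branch values (ohms) and target
scheme, r = best_switch_scheme([8.0, 16.0, 32.0, 64.0], target_ohms=4.0)
print(scheme, round(r, 2))  # (0, 1, 2, 3) 4.27 — closing all four switches
```

With binary-weighted branch values, 2^p - 1 distinct resistance levels are reachable, which is why this topology is a common alternative to the PWM-driven MOS structure of fig. 27.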
The variable resistor may be implemented as another circuit structure capable of generating a variable resistance, and is not particularly limited.
In yet another example, the variable resistor may also be implemented as a variable resistor chip, such as an impedance IC.
After the above-described audio adjustment circuit is introduced, the description of the audio adjustment process is continued. Fig. 29 is a schematic flow chart of an audio adjustment process according to an embodiment of the present application. As shown in fig. 29, the audio adjustment process may include S2901-S2910.
S2901, the PA sends audio data to the discriminator.
The audio data may include a plurality of audio signals, which may correspond to different moments in time. Accordingly, the PA may send different audio signals to the frequency discriminator at different times, and the frequencies of the different audio signals may be the same or different. Illustratively, the PA may send the audio signal B1 to the frequency discriminator at a first time t1 and the audio signal B2 to the frequency discriminator at a second time t2.
S2902, the frequency discriminator determines whether the current audio signal is an audio signal of a hearing-impaired frequency point. If so, S2903 is executed; if not, S2907 is executed.
S2903, when the frequency discriminator determines that the current audio signal is an audio signal of a hearing-impaired frequency point, the frequency discriminator sends the first indication signal INT1 to the processor.
For example, in case the user has different hearing impaired frequency points, the frequency discriminator may send a different first indication signal INT1 to the processor to indicate the different hearing impaired frequency points.
For example, since the left ear and the right ear may have different hearing-impaired frequency points, the frequency discriminator may detect the hearing-impaired frequency points of both the left ear and the right ear. As yet another example, the left ear and the right ear may correspond to different frequency discriminators. Accordingly, in the earphone playing mode, the frequency discriminator corresponding to the left ear (hereinafter referred to as the left ear frequency discriminator) and the frequency discriminator corresponding to the right ear (hereinafter referred to as the right ear frequency discriminator) may perform hearing-impaired frequency point detection for the left ear and the right ear, respectively. In the earpiece playing mode, if the audio receiving ear is the left ear, the left ear frequency discriminator is used for hearing-impaired frequency point detection, and if the audio receiving ear is the right ear, the right ear frequency discriminator is used for hearing-impaired frequency point detection. In the speaker playing mode, hearing-impaired frequency point detection may be performed for the left ear and the right ear respectively, and the processor then determines a frequency point as a target hearing-impaired frequency point when it is a hearing-impaired frequency point of both the left ear and the right ear.
S2904, the processor generates a first control signal in response to the first indication signal INT 1. The first control signal is used for adjusting the resistance value of the variable resistor to a resistance value corresponding to a hearing-impaired frequency point.
In one example, the processor may perform steps C1-C5 as follows.
In step C1, the processor determines a frequency value of the hearing impaired frequency point in response to the first indication signal INT 1.
And C2, determining a hearing damage value of a hearing damage frequency point by the processor according to the hearing test data.
For example, in the earpiece playing mode, if the audio receiving ear is the left ear, the hearing impairment value is determined using the hearing test data of the left ear; if the audio receiving ear is the right ear, the hearing impairment value is determined using the hearing test data of the right ear. Continuing to take fig. 3 as an example, if the hearing-impaired frequency point is 0.25 kHz, the hearing impairment value is 25dB when the audio receiving ear is the left ear, and 5dB when the audio receiving ear is the right ear.
As another example, in the earphone playing mode, the hearing impairment value of the left ear may be determined using the hearing test data of the left ear, and the hearing impairment value of the right ear may be determined using the hearing test data of the right ear.
Still another example, in the speaker playing mode, the hearing impairment value of the human ear may be determined using the binaural hearing test data. Specifically, the hearing impairment value of the left ear and the hearing impairment value of the right ear may be determined, and the smaller of the two values is taken as the hearing impairment value of the human ear at that frequency point. For example, continuing with the 0.25 kHz example, the hearing impairment value of the human ear at this frequency point may be 5dB. It should be noted that the audio signal in the speaker playing mode may or may not be processed; if it is not processed, the audio signal may be output directly from the PA to the speaker.
The hearing test data may include hearing impairment values corresponding to a plurality of frequency points, may be implemented as a hearing test curve, or may be implemented as a hearing test function corresponding to a hearing test curve. The form of the hearing test data is not limited in the embodiment of the present application.
And C3, determining a loudness gain value corresponding to the hearing impairment value according to the hearing impairment value by the processor.
Illustratively, the processor may determine the hearing impairment value directly as a loudness gain value, e.g., if the hearing impairment value is 30dB, then the loudness gain amount is also 30dB.
Still further exemplary, the loudness gain value may be jointly determined based on the hearing loss value and other losses (e.g., transmission loss, noise loss).
In step C4, the processor determines an increased loudness value (hereinafter referred to as the output loudness value, also called the target loudness value) based on the initial loudness value and the loudness gain value.
Where the initial loudness value may be the loudness value heard by the intended user. The initial loudness value may be, for example, a loudness value corresponding to a volume set in the cell phone. Or the initial loudness value may be a default value set by the handset. Or the initial loudness value may be the optimal hearing loudness.
Illustratively, in step C4, if the initial loudness value is 60dB and the loudness gain is also 30dB, then the increased target loudness value may be determined to be 90dB.
And step C5, the processor determines the resistance value corresponding to the hearing-impaired frequency point based on the target loudness value and the loudness value before adjustment.
The loudness value before adjustment may be a value greater than or equal to the increased loudness value. If a plurality of hearing-impaired frequency points exist, the loudness value before adjustment is greater than or equal to the maximum of the output loudness values corresponding to the plurality of hearing-impaired frequency points. For example, if the output loudness value corresponding to the hearing-impaired frequency point B1 is 90dB and the output loudness value corresponding to the hearing-impaired frequency point B2 is 75dB, the loudness value before adjustment is greater than or equal to 90dB.
The loudness value before adjustment may be adjusted by a PA or SOC unit. For example, if the loudness value corresponding to the volume set by the mobile phone is 60dB, and the maximum hearing loss of the left ear at each frequency point is 30dB, the units such as PA or SOC may adjust the loudness value of each audio signal in the audio data to 90dB.
For the resistance value Ra corresponding to the hearing-impaired frequency point, the loudness attenuation of the hearing-impaired frequency point can be determined according to the loudness value before adjustment and the output loudness value. And determining a resistance value Ra corresponding to the hearing-impaired frequency point according to the loudness attenuation and the earphone impedance R0.
For example, if the loudness value before adjustment is D1 and the output loudness value is D2, the loudness attenuation D3 is the difference between D1 and D2. According to the following equation (1), the resistance Ra corresponding to the hearing-impaired frequency point can be determined.
D3=20*lg((Ra+R0)/ R0)(1)
Where 20 is the attenuation factor, which may be an empirical value. If the loudness attenuation D3 is 0dB, i.e., the loudness attenuation is 0, ra is equal to 0 ohm. If the loudness attenuation D3 is 1.02dB, the resistance Ra corresponding to the hearing-impaired frequency point is 4 ohms.
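Formula (1) can be inverted to obtain the resistance value Ra for a desired loudness attenuation. The sketch below assumes the fixed 32-ohm earpiece impedance mentioned earlier; the function names are illustrative:

```python
import math

def attenuation_db(ra, r0=32.0):
    """Loudness attenuation D3 of the Rx/R0 voltage divider, formula (1)."""
    return 20.0 * math.log10((ra + r0) / r0)

def resistance_for_attenuation(d3_db, r0=32.0):
    """Invert formula (1): Ra = R0 * (10^(D3/20) - 1)."""
    return r0 * (10.0 ** (d3_db / 20.0) - 1.0)

print(round(resistance_for_attenuation(0.0), 2))   # 0.0 ohms: no attenuation
print(round(resistance_for_attenuation(1.02), 1))  # 4.0 ohms, matching the example
```

In S2904 the processor would compute D3 = D1 - D2 from the pre-adjustment and output loudness values and then solve for Ra as above; the same inversion yields Rb for the normal hearing frequency points in S2908.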
S2905, the processor sends a first control signal to the variable resistor.
S2906, the variable resistor adjusts its resistance to a first resistance value, i.e. a resistance value Ra corresponding to the hearing-impaired frequency point, in response to the first control signal.
S2907, when the frequency discriminator determines that the current audio signal is not an audio signal of a hearing-impaired frequency point, it sends a second indication signal INT2 to the processor. The second indication signal is used for indicating that the frequency point corresponding to the current audio signal is not a hearing-impaired frequency point (that is, it is a normal hearing frequency point).
S2908, the processor responds to the second indication signal INT2 to generate a second control signal for controlling the variable resistor to be adjusted to a second resistance value.
Illustratively, the second control signal is generated in a manner similar to the first control signal, and reference may be made to the related description in S2904. The second resistance value Rb of the normal hearing frequency point may also be calculated by the above formula (1), where the loudness attenuation D3 of the normal hearing frequency point is the difference between the loudness value D1 before adjustment and the initial loudness value. For example, if the loudness value set on the mobile phone is 60dB (or the default output is 60dB) and the loudness value D1 before adjustment is 90dB, the loudness attenuation D3 of the normal hearing frequency point is 30dB.
S2909, the processor sends a second control signal to the variable resistor.
S2910, the variable resistor adjusts its resistance to the second resistance value, i.e., the resistance value Rb corresponding to the normal hearing frequency point, in response to the second control signal.
Through steps S2901 to S2910, the variable resistor maintains a certain impedance Rb when the frequency discriminator identifies a normal hearing frequency point, and the impedance of the variable resistor can be reduced to Ra when the frequency discriminator identifies a hearing-impaired frequency point, so that the loudness of the audio signal at a hearing-impaired frequency point is greater than that of the audio signal at a normal hearing frequency point. The loudness gain at the hearing-impaired frequency points is thereby improved, realizing hearing protection for the human ear.
In yet another embodiment, unlike the audio processing flow shown in S2901 to S2910 described above, the frequency discriminator may perform frequency detection on the audio signal and send the detected frequency value to the processor. The processor then determines, according to the frequency value, whether the frequency point corresponding to the audio signal is a hearing-impaired frequency point.
And, in still other embodiments, the variable resistor 1323 may be connected to the speaker 1314. Accordingly, the frequency discriminator may detect the hearing-impaired frequency points of the human ear, and the processor may determine a loudness gain value from the hearing impairment value of a hearing-impaired frequency point of the human ear, and determine the resistance value corresponding to the hearing-impaired frequency point from the loudness gain value. The specific manner is similar to that of S2901 to S2910 described above, and the other contents are not repeated.
Optionally, if the speaker is a speaker of the external playing device, the PA, the frequency discriminator, and the variable resistor may be disposed in the terminal device, and the audio signal output by the variable resistor is sent to the external playing device through the audio transmission connector or the bluetooth module. Through the setting mode, various external devices can realize hearing protection, and the applicability of a hearing protection scheme is improved. It should be noted that, according to a specific circuit structure, the PA, the frequency discriminator, and the variable resistor may be disposed in the external playback device, which is not particularly limited.
And, in still other embodiments, the variable resistor 1323 may be connected to an earphone. Accordingly, the left earphone may correspond to a variable resistor for the left ear, and the right earphone may correspond to a variable resistor for the right ear. The processor may determine, according to the hearing test data of the left ear, the resistance values of the left-ear variable resistor at the hearing-impaired frequency points and the normal hearing frequency points, so as to output the adjusted left ear audio data to the left earphone. Similarly, the processor may determine, according to the hearing test data of the right ear, the resistance values of the right-ear variable resistor at the hearing-impaired frequency points and the normal hearing frequency points, so as to output the adjusted right ear audio data to the right earphone.
Optionally, the PA, the frequency discriminator, and the variable resistor may be disposed in the terminal device, and the audio signal output by the variable resistor is sent to the earphone through the audio transmission connector or the bluetooth module. Through the setting mode, the terminal equipment can realize hearing protection through various earphones, and the applicability of a hearing protection scheme is improved. Or the PA, the frequency discriminator, and the variable resistor may be further disposed in the earphone, which is not particularly limited.
Another audio processing scheme is provided by an embodiment of the present application, which is described below with reference to fig. 30-31.
Fig. 30 is a schematic flow chart of an audio adjustment process according to an embodiment of the present application. As shown in fig. 30, the audio adjustment process may include S3001-S3009.
S3001, the processing module acquires audio data. The audio data may be referred to as related description in S2901, and will not be described herein. The loudness of each audio signal in the audio data may be an initial loudness value.
The processing module may be a device having an audio data processing function, such as an SOC or a CODEC, in the terminal device.
S3002, the processing module detects whether the audio data contains a hearing-impaired frequency point. If the detection result is no, the process proceeds to step S3003; if the detection result is yes, the process proceeds to step S3005.
In some embodiments, the processing module may determine, according to the hearing loss value of a frequency point of the audio data in the hearing test data, that the frequency point of the audio data is a hearing-impaired frequency point if the hearing loss value is greater than or equal to a preset hearing loss threshold. Similarly, if the hearing loss value is smaller than the preset hearing loss threshold, the frequency point of the audio data is determined to be a normal hearing frequency point.
In other embodiments, the processing module may pre-store a list of hearing-impaired frequency points, and the processing module may determine whether the frequency points of the audio data are in the hearing-impaired list. If the frequency point of the audio data is in the hearing-impaired frequency point list, determining the frequency point of the audio data as the hearing-impaired frequency point. Similarly, if the frequency point of the audio data is not in the hearing-impaired frequency point list, determining that the frequency point of the audio data is a normal hearing frequency point. The list of hearing impaired frequency points may be predetermined according to the hearing test data, or may be determined according to other manners, which is not particularly limited.
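The two checks in S3002 can be sketched as follows; the 20dB threshold and the sample data are illustrative assumptions, since the text leaves the preset threshold and the list contents unspecified:

```python
# Illustrative threshold; the text only requires "a preset hearing loss threshold".
HEARING_LOSS_THRESHOLD_DB = 20.0

def is_impaired_by_threshold(freq_hz, hearing_test_data):
    """hearing_test_data: dict mapping frequency (Hz) -> hearing loss value (dB)."""
    return hearing_test_data.get(freq_hz, 0.0) >= HEARING_LOSS_THRESHOLD_DB

def is_impaired_by_list(freq_hz, impaired_list):
    """impaired_list: pre-stored list of hearing-impaired frequency points."""
    return freq_hz in impaired_list

test_data = {250: 25.0, 1000: 30.0, 4000: 5.0}
print(is_impaired_by_threshold(1000, test_data))  # True
print(is_impaired_by_threshold(4000, test_data))  # False
print(is_impaired_by_list(250, [250, 1000]))      # True
```

The list-based variant trades a little memory for avoiding a threshold comparison on every frequency point, which is why pre-computing the list from the hearing test data can be attractive on a constrained SOC.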
S3003, the processing module sends an audio signal to the audio playing module. The audio playing module may be a device capable of converting an audio signal into a sound signal recognizable by the human ear, such as an earpiece, a speaker, or a screen sounding device of the terminal device itself. Alternatively, the audio playing module may be a speaker of an earphone, a speaker of another external playing device, or the like.
It should be noted that, for devices such as an earpiece, a speaker, a screen sounding device, and the like of the terminal device itself, the processing module may send an audio signal to the device through the PA. For devices such as a speaker of an earphone or a speaker of other external playing devices, the processing module may send an audio signal to the device through the bluetooth module or the audio transmission interface, and a specific sending manner may refer to the related description of the foregoing part of the embodiment of the present application, which is not repeated herein.
S3004, the audio playing module plays the audio signal at the initial loudness value.
For example, the audio playback module may convert the audio signal into a sound signal in a vibratory manner, wherein the volume of the sound signal corresponds to the initial loudness value.
S3005, the processing module determines a loudness gain value from the hearing test data. The loudness gain value is used to compensate for the hearing impairment of the user at the target frequency, which may be determined from the hearing impairment value of the target frequency.
In some embodiments, S3005 may include step E1 and step E2.
In step E1, in case the hearing test data comprises a hearing test curve, the processing module may determine a hearing compensation curve from the hearing test curve. Fig. 31 is a schematic diagram of a hearing compensation curve according to an embodiment of the present application. As shown in fig. 31, the hearing test curve and the hearing compensation curve are symmetrical with each other with respect to the horizontal line corresponding to 0dB, and accordingly, the hearing compensation curve can be determined by the hearing test curve.
And E2, determining the loudness gain quantity corresponding to the target frequency point on the hearing compensation curve by the processing module.
It should be noted that, in the case where the hearing test data includes a correspondence list of hearing impairment values of a plurality of test frequency points, the correspondence list of each test frequency point and the loudness gain amount may be determined. In case the hearing test data comprises a hearing test curve, the hearing compensation curve may be determined from the hearing test curve.
Optionally, for the same frequency point, the hearing impairment value in the hearing test data is the same as the loudness gain amount in the hearing compensation data. Alternatively, in order to further improve the hearing compensation accuracy, the loudness gain amount may be calculated from the hearing loss value, the transmission loss, the environmental noise, and the like, and the calculation manner may be set according to the actual situation, which is not particularly limited.
For example, with continued reference to fig. 31, if the target frequency point is 1 kHz, a corresponding loudness gain amount of 30dB may be determined on the hearing compensation curve at 1 kHz.
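Steps E1 and E2 can be sketched as follows. Per fig. 31, the compensation curve is the mirror of the test curve about the 0dB line, so the gain at a frequency equals the hearing loss there; the linear interpolation between tested points and the sample values are assumptions for illustration:

```python
def loudness_gain(freq_hz, test_points):
    """test_points: sorted list of (frequency_hz, hearing_loss_db).
    The compensation curve mirrors the test curve about 0 dB, so the gain at a
    frequency equals the (interpolated) hearing loss at that frequency."""
    freqs = [f for f, _ in test_points]
    losses = [l for _, l in test_points]
    if freq_hz <= freqs[0]:
        return losses[0]
    if freq_hz >= freqs[-1]:
        return losses[-1]
    for i in range(1, len(freqs)):
        if freq_hz <= freqs[i]:
            t = (freq_hz - freqs[i - 1]) / (freqs[i] - freqs[i - 1])
            return losses[i - 1] + t * (losses[i] - losses[i - 1])

points = [(250, 25.0), (500, 20.0), (1000, 30.0)]
print(loudness_gain(1000, points))  # 30.0, as in the 1 kHz example
print(loudness_gain(750, points))   # 25.0 (interpolated between 500 Hz and 1 kHz)
```

When the hearing test data is a correspondence list rather than a curve, the same lookup degenerates to reading the gain for each tested frequency point directly.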
In other embodiments, S3005 may include step E3 and step E4.
And E3, determining a hearing damage value of a hearing damage frequency point by the processing module through the hearing test data.
In step E4, the processing module determines a loudness gain corresponding to the hearing impairment value. Wherein, a hearing impairment value may correspond to a loudness gain, and a specific manner of determining the loudness gain may be referred to the relevant descriptions of the foregoing parts of the embodiments of the present application, which are not repeated herein.
S3006, the processing module increases the initial loudness value by the loudness gain value to obtain the target loudness value.
In some embodiments, since the audio playing module is driven by current to generate sound, a change in loudness is in effect a change in the magnitude of the driving current. In the case where the audio signal is a digital signal, the audio signal may be a square wave signal, and the processing module may adjust the voltage of the audio signal by adjusting the amplitude of the square wave signal, thereby changing the driving current of the audio playing module. Alternatively, the processing module may adjust the duty cycle of the square wave signal to adjust the effective value of the driving current, thereby changing the driving current.
In other embodiments, where the audio signal is an analog signal, the processing module may adjust the voltage level of the audio signal in analog form to vary the drive current.
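The two levers described above (square-wave amplitude, and duty cycle affecting the effective value) can be illustrated with the RMS of an ideal square wave; the 20·lg loudness model used here is an assumption for illustration:

```python
import math

def square_wave_rms(amplitude, duty):
    """RMS (effective value) of a square wave at `amplitude` for fraction `duty` of each period."""
    return amplitude * math.sqrt(duty)

def gain_db(rms_new, rms_old):
    """Loudness change implied by an RMS drive change, using a 20*lg model."""
    return 20.0 * math.log10(rms_new / rms_old)

base = square_wave_rms(1.0, 0.25)        # RMS 0.5 at 25% duty
boosted = square_wave_rms(1.0, 1.0)      # RMS 1.0: quadrupling the duty doubles the RMS
print(round(gain_db(boosted, base), 1))  # 6.0 dB
```

Either lever alone, or both combined, lets the processing module map the loudness gain value of S3005 onto a concrete drive-signal change.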
S3007, the processing module sends an audio signal of the target loudness value to the audio playing module. Here, the audio signal transmission process may be described with reference to S3003, which is not particularly limited.
S3008, the audio playing module plays the audio signal at the target loudness value. The specific playing process may refer to the related description of S3004, which is not described herein.
It should be noted that, when the audio playing module is an earpiece (receiver), if the audio receiving ear is the left ear, the processing module may adjust each audio signal in the audio data according to the hearing test data of the left ear and cause the audio playing module to play the adjusted audio data. Similarly, if the audio receiving ear is the right ear, the processing module may adjust each audio signal according to the hearing test data of the right ear.
When the audio playing module is an earphone, the processing module can adjust each audio signal in the audio data according to the hearing test data of the left ear to obtain the left ear audio data, and enable the left earphone to play the left ear audio data. And the processing module can adjust each audio signal in the audio data according to the hearing test data of the right ear to obtain the right ear audio data, and enable the right earphone to play the right ear audio data.
When the audio playing module is a speaker, the processing module may adjust each audio signal in the audio data according to the binaural hearing test data (reference may be made to the related description in the above part of the embodiment of the present application, which is not repeated here), and cause the audio playing module to play the adjusted audio data. For example, if a certain frequency point is a hearing-impaired frequency point of both the left ear and the right ear, the frequency point is determined as a hearing-impaired frequency point of the human ear. The smaller of the left ear hearing impairment value determined from the left ear hearing test data and the right ear hearing impairment value determined from the right ear hearing test data is then taken as the human ear hearing impairment value, and the human ear hearing impairment value is determined as the loudness gain amount. For example, if the left ear hearing impairment value is 30dB and the right ear hearing impairment value is 5dB at a certain hearing-impaired frequency point, the human ear hearing impairment value is determined to be 5dB.
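The speaker-mode rule above can be sketched as follows, reusing the illustrative per-ear loss values from earlier in the text (25dB left / 5dB right at 0.25 kHz); the data-structure shapes are assumptions:

```python
def binaural_impairment(left_losses, right_losses, impaired_left, impaired_right):
    """Speaker-mode rule: a frequency point is a hearing-impaired point of the
    'human ear' only if it is impaired for BOTH ears, and its loss (= loudness
    gain amount) is the smaller of the two per-ear loss values.
    left_losses/right_losses: dict freq (Hz) -> loss (dB);
    impaired_left/impaired_right: sets of per-ear hearing-impaired frequency points."""
    common = impaired_left & impaired_right
    return {f: min(left_losses[f], right_losses[f]) for f in common}

left = {250: 25.0, 1000: 30.0}
right = {250: 5.0, 2000: 40.0}
result = binaural_impairment(left, right, {250, 1000}, {250, 2000})
print(result)  # {250: 5.0} — only 250 Hz is impaired for both ears; min(25, 5) = 5 dB
```

Taking the smaller loss keeps the shared speaker signal from over-amplifying a frequency that one ear still hears well.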
In some embodiments, in the earphone play mode, the processing module/processor may further detect, from a sensing signal produced by a sensor on the earphone, whether the earphones are worn correctly, for example whether the left earphone is worn on the left ear and the right earphone on the right ear. When the earphones are worn incorrectly, the above audio adjustment process may be stopped; alternatively, the left-ear audio data may be sent to the right earphone and the right-ear audio data to the left earphone for playing.

It will be appreciated that the electronic device, in order to achieve the above functions, includes corresponding hardware and/or software modules that perform the respective functions. In combination with the example algorithm steps described in connection with the embodiments disclosed herein, the present application can be implemented in hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends on the particular application and the design constraints imposed on the solution. Those skilled in the art may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
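The channel swap on a wearing error can be sketched as below (hypothetical names; the patent does not specify this interface). Sending the left-ear audio data to the right earphone when the buds are swapped means the bud physically on the left ear still plays the audio adjusted for the left ear.

```python
def route_ear_audio(left_ear_data, right_ear_data, worn_correctly):
    """Return the payload each physical earphone should play.

    When the earphones are worn swapped (left bud on the right ear and
    vice versa), the left-ear audio data goes to the right earphone and
    the right-ear audio data to the left earphone, so each ear still
    receives the audio adjusted for its own hearing test data.
    """
    if worn_correctly:
        return {"left_earphone": left_ear_data, "right_earphone": right_ear_data}
    return {"left_earphone": right_ear_data, "right_earphone": left_ear_data}
```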
In one example, fig. 32 shows a schematic block diagram of a device 3200 according to an embodiment of the application. The device 3200 may comprise a processor 3201, a transceiver/transceiving pin 3202, and, optionally, a memory 3203.
The various components of device 3200 are coupled together by a bus 3204, where bus 3204 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are referred to in the figures as bus 3204.
Optionally, the memory 3203 may be used to store the instructions in the foregoing method embodiments. The processor 3201 is operable to execute the instructions in the memory 3203, to control the receive pin to receive signals, and to control the transmit pin to transmit signals.
The apparatus 3200 may be an electronic device or a chip of an electronic device in the above-described method embodiments.
For all relevant contents of each step of the above method embodiments, reference may be made to the functional description of the corresponding functional module; details are not repeated here.
The steps performed by the terminal device in the audio processing method provided by the embodiment of the present application may also be performed by a chip system included in the terminal device, where the chip system may include a processor and a bluetooth chip. The chip system may be coupled to a memory such that the chip system, when running, invokes a computer program stored in the memory, implementing the steps performed by the terminal device. The processor in the chip system can be an application processor or a non-application processor.
Similarly, in the above embodiment, the steps performed by the earphone or the external playing device may also be performed by a chip system included in the earphone or the external playing device, where the chip system may include a processor and a bluetooth chip. The chip system may be coupled to a memory such that the chip system, when running, invokes a computer program stored in the memory, implementing the steps performed by the headset or the external playback device described above. The processor in the chip system can be an application processor or a non-application processor.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.
Claims (19)
1. An audio processing method, the method being applied to an audio processing circuit, the method comprising:
acquiring audio data;
Under the condition that the audio data comprises first audio signals of a first frequency point, obtaining hearing test data of a user, wherein the hearing test data comprises hearing impairment values of the user at a plurality of test frequency points;
Determining a loudness gain amount corresponding to the first frequency point according to the hearing test data, wherein the loudness gain amount corresponds to a hearing damage value of the user at the first frequency point;
increasing the loudness of the first audio signal by the loudness gain amount;
The first audio signal is output at an increased loudness.
2. The method of claim 1, wherein the audio processing circuit comprises a first processing module,
The obtaining hearing test data of the user includes:
The first processing module judges whether the audio data comprises the first audio signal or not;
the first processing module obtains the hearing test data if the audio data includes the first audio signal;
The determining, according to the hearing test data, the loudness gain amount corresponding to the first frequency point includes:
The first processing module determines a loudness gain amount corresponding to the first frequency point according to the hearing test data;
the increasing the loudness of the first audio signal by the loudness gain amount includes:
and the first processing module increases the loudness of the first audio signal by the loudness gain amount on the basis of the initial loudness value to obtain a target loudness value.
3. The method according to claim 2, wherein the method further comprises:
in the case where the audio data includes a second audio signal of a second frequency point, the first processing module outputs the second audio signal at an initial loudness value.
4. The method of claim 2, wherein the first processing module is included in a terminal device, the terminal device being coupled to an audio playing device, the audio playing device being configured to output the first audio signal at an increased loudness under control of the first processing module.
5. The method of claim 1, wherein the audio processing circuit comprises a frequency discriminator, a second processing module and a variable resistor, one end of the frequency discriminator is connected with a power amplifier of the terminal device, the other end of the frequency discriminator is connected with one end of the second processing module, the other end of the second processing module is connected with a control end of the variable resistor, a first connection end of the variable resistor is connected with the power amplifier, the other end of the variable resistor is connected with an audio playing module,
The obtaining hearing test data of the user includes:
The frequency discriminator detects whether the audio data comprises the first audio signal, wherein the audio signal is output by the power amplifier and has a preset loudness value;
The frequency discriminator sends a first prompt signal to the second processing module under the condition that the audio data comprises the first audio signal;
the second processing module responds to the first prompt signal to acquire the hearing test data;
the increasing the loudness of the first audio signal by the loudness gain amount includes:
The second processing module determines the loudness attenuation of the first audio signal based on the loudness gain, the initial loudness value and the preset loudness value;
determining a second impedance of the variable resistor based on the loudness attenuation and a first impedance of the audio playing module;
And controlling the resistance value of the variable resistor to be adjusted to the second impedance so as to attenuate the loudness of the first audio signal from the preset loudness value to a target loudness value through the variable resistor of the second impedance, wherein the target loudness value is the sum of the initial loudness value and the loudness gain.
6. The method of claim 5, wherein the method further comprises:
the frequency discriminator sends a second prompt signal to the second processing module under the condition that the audio data comprises a second audio signal of a second frequency point;
the second processing module determines the loudness attenuation of the second audio signal based on the initial loudness value and the preset loudness value;
Determining a third impedance of the variable resistor based on the loudness attenuation of the second audio signal and the first impedance of the audio playback module;
controlling the resistance value of the variable resistor to be adjusted to the third impedance to attenuate the loudness of the second audio signal from the preset loudness value to the initial loudness value through the variable resistor of the third impedance,
Wherein the third impedance is greater than the second impedance.
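Claims 5 and 6 leave the electrical model open; one common reading is that the variable resistor in series with the audio playing module's impedance (the "first impedance") forms a resistive voltage divider, so a larger resistance gives more attenuation. Under that assumption — purely resistive load, loudness tracking voltage in dB — the second and third impedances could be computed as below. This is a sketch under stated assumptions, not the patent's circuit.

```python
def series_resistance_for_attenuation(attenuation_db, load_ohms):
    """Series resistance R of a divider R/load giving attenuation_db:
    20*log10((R + load) / load) = attenuation_db, solved for R."""
    return load_ohms * (10.0 ** (attenuation_db / 20.0) - 1.0)

def variable_resistor_impedance(preset_db, initial_db, gain_db, load_ohms):
    """Impedance so that the amplifier's preset-loudness output reaches the
    target loudness (initial loudness value + loudness gain amount)."""
    target_db = initial_db + gain_db
    attenuation_db = preset_db - target_db  # loudness attenuation amount
    return series_resistance_for_attenuation(attenuation_db, load_ohms)

# Second impedance (first audio signal, 20 dB gain) vs third impedance
# (second audio signal, no gain), with an assumed 8-ohm playback module:
second = variable_resistor_impedance(100.0, 60.0, 20.0, 8.0)  # less attenuation
third = variable_resistor_impedance(100.0, 60.0, 0.0, 8.0)    # more attenuation
```

With these numbers the second impedance is 72 Ω and the third 792 Ω, consistent with claim 6's statement that the third impedance is greater than the second: the second audio signal must be attenuated all the way down to the initial loudness value, while the first audio signal keeps its loudness gain.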
7. The method according to any one of claims 1 to 6, wherein,
The hearing impairment value of the first audio signal is greater than or equal to a preset hearing impairment threshold;
The hearing impairment value of the second audio signal is smaller than the preset hearing impairment threshold.
8. The method of claim 1, wherein the obtaining hearing test data of the user comprises:
acquiring an audio playing mode of the audio data;
acquiring attitude information of terminal equipment under the condition that the audio playing mode is a receiver playing mode;
Determining a target ear of the user receiving the audio data according to the gesture information;
and obtaining hearing test data of the target ear.
9. The method of claim 1, wherein the obtaining hearing test data of the user comprises:
acquiring an audio playing mode of the audio data;
Judging whether the audio data comprises audio signals of a third frequency point or not under the condition that the audio playing mode is an earphone playing mode, wherein the third frequency point is a first frequency point of the left ear of the user;
Acquiring first hearing test data under the condition that the audio data comprise the audio signal of the third frequency point, wherein the first hearing test data are test data of the left ear in the hearing test data;
The determining, according to the hearing test data, the loudness gain amount corresponding to the first frequency point includes:
determining the loudness gain quantity corresponding to the third frequency point according to the first hearing test data;
Increasing the loudness of the first audio signal by the loudness gain amount includes:
Based on the loudness gain amount corresponding to the third frequency point, the loudness of the audio signal of the third frequency point is increased to a first loudness value;
The outputting the first audio signal at the increased loudness includes:
and controlling the left earphone to output the audio signal of the third frequency point at the first loudness value.
10. The method of claim 9, wherein the obtaining hearing test data of the user further comprises:
Judging whether the audio data comprises an audio signal of a fourth frequency point under the condition that the audio playing mode is an earphone playing mode, wherein the fourth frequency point is a first frequency point of the right ear of the user;
acquiring second hearing test data under the condition that the audio data comprise the audio signal of the fourth frequency point, wherein the second hearing test data are test data of the right ear in the hearing test data;
The determining, according to the hearing test data, the loudness gain amount corresponding to the first frequency point further includes:
Determining the loudness gain quantity corresponding to the fourth frequency point according to the second hearing test data;
increasing the loudness of the first audio signal by the loudness gain amount further comprises:
Based on the loudness gain amount corresponding to the fourth frequency point, the loudness of the audio signal of the fourth frequency point is increased to a second loudness value;
the outputting the first audio signal at the increased loudness further comprises:
And controlling the right earphone to output the audio signal of the fourth frequency point at the second loudness value.
11. The method of claim 1, wherein the obtaining hearing test data of the user comprises:
acquiring an audio playing mode of the audio data;
Acquiring first hearing test data of the user and second hearing test data of the user under the condition that the audio playing mode is a loudspeaker playing mode, wherein the first hearing test data comprise first hearing damage values of the left ear of the user at the plurality of test frequency points, and the second hearing test data comprise second hearing damage values of the right ear of the user at the plurality of test frequency points;
and determining the hearing test data based on the first hearing test data and the second hearing test data, wherein the hearing damage value of a single test frequency point in the hearing test data is the smaller value of the first hearing damage value of the single test frequency point and the second hearing damage value of the single test frequency point.
12. The method of claim 1, wherein the audio processing circuit further comprises an audio playback module,
The outputting the first audio signal at the increased loudness includes: controlling the audio playback module to output the first audio signal at an increased loudness,
The audio playing module is one of a loudspeaker of the terminal equipment, a receiver of the terminal equipment, a screen sounding device of the terminal equipment, a loudspeaker of the earphone or a loudspeaker of the playing equipment;
wherein the playing device is an audio playing device which is connected with the terminal device and is except the earphone.
13. The method of claim 1, wherein prior to the obtaining the hearing test data of the user, the method further comprises:
for a single test frequency point, playing at least one test audio of the single test frequency point to the user, wherein one test audio corresponds to one loudness value;
Determining a hearing impairment value of the user at the single test frequency point based on a feedback result of the user on the test audio;
and generating the hearing test data based on the hearing impairment values of the plurality of test frequency points.
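The test procedure of claim 13 can be sketched as follows. How user feedback maps to an impairment value is not fixed by the claim; this sketch uses a simple audiometry-style convention in which the lowest loudness the user reports hearing at a frequency point is recorded as the impairment value. All names and the loudness step schedule are illustrative assumptions.

```python
def run_hearing_test(test_freqs, can_hear, loudness_steps_db=tuple(range(0, 95, 5))):
    """Generate hearing test data.

    test_freqs: the test frequency points (Hz).
    can_hear(freq, loudness_db) -> bool: the user's feedback after one
    test audio (one loudness value) is played at that frequency point.

    Each frequency point is probed at increasing loudness; the first
    loudness the user reports hearing is recorded as that point's
    hearing impairment value (None if nothing was heard).
    """
    data = {}
    for freq in test_freqs:
        data[freq] = next(
            (level for level in loudness_steps_db if can_hear(freq, level)),
            None,
        )
    return data
```

A user who first hears 1000 Hz at 30 dB but 2000 Hz already at 10 dB would yield `{1000: 30, 2000: 10}`; downstream, the 1000 Hz point might then be classified as hearing-impaired relative to a threshold.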
14. An audio processing circuit, the circuit comprising:
the audio acquisition module is used for acquiring audio data;
The test data acquisition module is used for acquiring hearing test data of a user under the condition that the audio data comprises first audio signals of a first frequency point, wherein the hearing test data comprises hearing impairment values of the user at a plurality of test frequency points;
The gain amount determining module is used for determining a loudness gain amount corresponding to the first frequency point according to the hearing test data, wherein the loudness gain amount corresponds to a hearing damage value of the user at the first frequency point;
A loudness adjustment module for increasing the loudness of the first audio signal by the loudness gain amount;
and the audio output module is used for outputting the first audio signal at the increased loudness.
15. The circuit of claim 14, wherein the circuit comprises a first processing module, and the first processing module comprises the audio acquisition module, the test data acquisition module and the loudness adjustment module;
The first processing module is used for judging whether the audio data comprises the first audio signal or not; and further for obtaining the hearing test data if the audio data comprises the first audio signal; and the loudness gain quantity corresponding to the first frequency point is determined according to the hearing test data; and the loudness gain is further used for increasing the loudness of the first audio signal on the basis of the initial loudness value to obtain a target loudness value; and, further for controlling the audio output module to output the first audio signal at a target loudness value.
16. The circuit of claim 15, wherein the first processing module is included in a terminal device.
17. The circuit of claim 14, wherein the audio processing circuit comprises: the device comprises a frequency discriminator, a second processing module, a variable resistor and the audio playing module;
The frequency discriminator comprises the audio acquisition module, one end of the frequency discriminator is connected with a power amplifier of the terminal equipment, the other end of the frequency discriminator is connected with one end of the second processing module, the frequency discriminator is used for acquiring the audio signal and detecting whether the audio data comprises the first audio signal or not, the audio signal is output by the power amplifier, and the audio signal has a preset loudness value; the frequency discriminator is further used for sending a first prompt signal to the second processing module when detecting that the audio data comprises the first audio signal;
The second processing module comprises the test data acquisition module, the gain amount determination module and the loudness adjustment module, the other end of the second processing module is connected with the control end of the variable resistor, and the second processing module is used for responding to the first prompt signal to acquire the hearing test data; and the loudness gain amount corresponding to the first frequency point is determined according to the hearing test data, and the loudness attenuation amount of the first audio signal is determined based on the loudness gain amount, the initial loudness value and the preset loudness value; and determining a second impedance of the variable resistor based on the loudness attenuation and the first impedance of the audio playback module; and is further configured to control a resistance value of the variable resistor to adjust to the second impedance to attenuate a loudness of the first audio signal from the preset loudness value to a target loudness value through the variable resistor of the second impedance, the target loudness value being a sum of the initial loudness value and the loudness gain amount,
The first connecting end of the variable resistor is connected with the power amplifier, and the other end of the variable resistor is connected with the audio playing module.
18. An electronic device, comprising:
one or more processors;
A memory;
And one or more computer programs, wherein the one or more computer programs are stored on the memory, which when executed by the one or more processors, cause the processors to perform the audio processing method of any of claims 1 to 13.
19. A computer readable storage medium comprising a computer program, characterized in that the computer program, when run on an electronic device, causes the electronic device to perform the audio processing method of any one of claims 1 to 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410397174.XA CN118102175A (en) | 2024-04-03 | 2024-04-03 | Audio processing method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118102175A true CN118102175A (en) | 2024-05-28 |
Family
ID=91147890
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410397174.XA Pending CN118102175A (en) | 2024-04-03 | 2024-04-03 | Audio processing method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118102175A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104811155A (en) * | 2015-04-20 | 2015-07-29 | 深圳市冠旭电子有限公司 | Balance device adjusting method and device |
US20200015242A1 (en) * | 2018-07-03 | 2020-01-09 | Boe Technology Group Co., Ltd. | Communication method for a communication device, electronic device, and storage medium |
CN110677717A (en) * | 2019-09-18 | 2020-01-10 | 深圳创维-Rgb电子有限公司 | Audio compensation method, smart television and storage medium |
CN110840462A (en) * | 2019-10-31 | 2020-02-28 | 佳禾智能科技股份有限公司 | Hearing-aid method for human ears based on earphone, computer readable storage medium and Bluetooth earphone |
CN114903473A (en) * | 2021-02-06 | 2022-08-16 | Oppo广东移动通信有限公司 | Hearing detection method, device, electronic equipment and storage medium |
CN114928803A (en) * | 2022-05-08 | 2022-08-19 | 袁丹 | Hearing-aid type sound control method and system |
CN115474927A (en) * | 2022-08-09 | 2022-12-16 | Oppo广东移动通信有限公司 | Hearing detection method and device, electronic equipment and storage medium |
CN116782084A (en) * | 2022-03-07 | 2023-09-19 | Oppo广东移动通信有限公司 | Audio signal processing method and device, earphone and storage medium |
WO2024027259A1 (en) * | 2022-07-30 | 2024-02-08 | 华为技术有限公司 | Signal processing method and apparatus, and device control method and apparatus |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220148608A1 (en) | Method for Automatically Switching Bluetooth Audio Coding Scheme and Electronic Device | |
CN111601199A (en) | Wireless earphone box and system | |
JP7556948B2 (en) | Method and apparatus for improving the sound quality of a loudspeaker | |
US12047760B2 (en) | Bluetooth communication method and apparatus | |
CN111212412B (en) | Near field communication method and device, computer readable storage medium and electronic equipment | |
CN113496708B (en) | Pickup method and device and electronic equipment | |
US10827455B1 (en) | Method and apparatus for sending a notification to a short-range wireless communication audio output device | |
CN114466097A (en) | Mobile terminal capable of preventing sound leakage and sound output method of mobile terminal | |
CN112806092B (en) | Microphone MIC switching method and device | |
US20230209297A1 (en) | Sound box position adjustment method, audio rendering method, and apparatus | |
US20240114295A1 (en) | Method for identifying earbud wearing error and related device | |
KR20230009487A (en) | Active noise canceling method and apparatus | |
CN114422340A (en) | Log reporting method, electronic device and storage medium | |
JP2022536868A (en) | Call method and device | |
CN114466107A (en) | Sound effect control method and device, electronic equipment and computer readable storage medium | |
CN114157945A (en) | Data processing method and related device | |
US20240135946A1 (en) | Method and apparatus for improving sound quality of speaker | |
CN113129916B (en) | Audio acquisition method, system and related device | |
CN115226185A (en) | Transmission power control method and related equipment | |
CN114120950B (en) | Human voice shielding method and electronic equipment | |
CN118102175A (en) | Audio processing method, device and storage medium | |
CN114390406B (en) | Method and device for controlling displacement of loudspeaker diaphragm | |
US20230370718A1 (en) | Shooting Method and Electronic Device | |
JP2023552731A (en) | Data transmission methods and electronic devices | |
CN115706755A (en) | Echo cancellation method, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||