CN109065064B - Method for generating EQ curve, method for outputting audio and output equipment - Google Patents
Method for generating EQ curve, method for outputting audio and output equipment
- Publication number
- CN109065064B (application CN201810902267.8A)
- Authority
- CN
- China
- Prior art keywords
- audio signal
- value
- test audio
- curve
- test
- Prior art date
- Legal status: Active
Classifications
- G10L21/003—Changing voice quality, e.g. pitch or formants
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/78—Detection of presence or absence of voice signals
Abstract
The invention discloses a method for generating an EQ curve, which relates to the technical field of listening devices and comprises the following steps: outputting each test audio signal to a test user in sequence, the test audio signal comprising a frequency value, an amplitude value and a segment value; and, according to the segment value in the feedback information input by the user, determining, for each frequency value, the test audio signal with the minimum amplitude value among those whose fed-back segment value is consistent with the segment value of the corresponding test audio signal, as the auditory reference line corresponding to that frequency value, and forming an EQ curve from the auditory reference lines. Because the user must report the segment value in the feedback, the method can accurately and reliably judge whether the user really heard the current test audio signal, which improves the accuracy of the EQ curve and, in turn, the compensation effect on the audio information to be output. The invention also discloses an apparatus for generating an EQ curve, a method and an apparatus for audio output, an output device and a computer-readable storage medium, which have the same beneficial effects.
Description
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and an apparatus for generating an EQ curve, and further, to a method and an apparatus for audio output, an output device, and a computer-readable storage medium.
Background
Users have different hearing characteristics because of their personal characteristics (e.g., differences in ear structure); for example, some users hear high-pitched sounds clearly while others hear low-pitched sounds clearly. For this reason, in the prior art, the audio output from a speaker is compensated by an EQ curve to adapt it to the personal characteristics of the user.
Currently, an EQ curve is generated by playing each audio signal to the user and asking whether it can be heard; from these answers it is decided whether the audio needs to be compensated, and the final EQ curve is generated. However, this way of generating an EQ curve is inaccurate. For example, when the sound becomes very quiet, the user can no longer judge reliably whether it is audible: because of auditory psychology, once the human ear has been listening to a weak audio signal, the listener still seems to hear it even after the signal stops, so a user who previously reported hearing the sound may misjudge the subsequent audio tests. For this reason, the EQ curves measured in the prior art are not accurate.
Disclosure of Invention
The invention aims to provide a method and a device for generating an EQ curve, a method and a device for outputting audio, an output device and a computer readable storage medium, which can improve the accuracy of the generated EQ curve and further improve the compensation effect on audio information needing to be output.
To solve the above technical problem, the present invention provides a method for generating an EQ curve, the method comprising:
outputting each test audio signal to a test user in sequence; wherein the test audio signal comprises a frequency value, an amplitude value and a segment value;
and determining, according to the segment value in the feedback information input by the user for each test audio signal, for each frequency value the test audio signal with the minimum amplitude value among those whose fed-back segment value is consistent with the segment value of the corresponding test audio signal, as the auditory reference line corresponding to that frequency value, and forming an EQ curve according to each auditory reference line.
Optionally, the feedback information is voice feedback information.
Optionally, when the test audio signal is a first test audio signal and the frequency value of the first test audio signal is a first frequency value, determining, according to a segment value in feedback information of each test audio signal input by a user, a test audio signal corresponding to a minimum amplitude value in each frequency value where the segment value in each feedback information is consistent with the segment value in the corresponding test audio signal, as an auditory reference line corresponding to each frequency value, including:
S1, receiving a segment number value in the feedback information of the first test audio signal input by the user, and judging whether the segment number value is consistent with the segment number value in the first test audio signal; if yes, entering S2, otherwise entering S5;
S2, reducing the amplitude value in the first test audio signal according to a first preset rule, and modifying the corresponding segment value to be used as a second test audio signal to be output to a test user;
S3, receiving a segment number value in the feedback information of the second test audio signal input by the user, and judging whether the segment number value is consistent with the segment number value in the second test audio signal; if so, entering S2 with the second test audio signal as the first test audio signal, otherwise, entering S4;
S4, taking the first test audio signal in step S2 as an auditory reference line corresponding to the first frequency value;
S5, increasing the amplitude value in the first test audio signal according to a second preset rule, and modifying the corresponding segment value to be used as a third test audio signal to be output to a test user;
S6, receiving a segment number value in the feedback information of the third test audio signal input by the user, and judging whether the segment number value is consistent with the segment number value in the third test audio signal; if not, entering S5 with the third test audio signal as the first test audio signal, and if so, entering S7;
S7, taking the third test audio signal as an auditory reference line corresponding to the first frequency value.
The invention also provides a device for generating the EQ curve, which comprises the following components:
the test audio output module is used for outputting each test audio signal to a test user in sequence; wherein the test audio signal comprises a frequency value, an amplitude value and a segment value;
and the EQ curve generation module is used for determining, according to the segment value in the feedback information input by the user for each test audio signal, for each frequency value the test audio signal with the minimum amplitude value among those whose fed-back segment value is consistent with the segment value of the corresponding test audio signal, as the auditory reference line corresponding to that frequency value, and forming an EQ curve according to each auditory reference line.
The invention also provides a method of audio output, the method comprising:
when an audio output instruction is detected, calling an EQ curve to compensate the target audio information; wherein the EQ curve is generated according to the method for generating the EQ curve;
and outputting the compensated target audio information.
Optionally, the invoking the EQ curve to compensate the target audio information includes:
and identifying the identity information of the user, and calling an EQ curve corresponding to the identity information to compensate the target audio information.
Optionally, the identifying the identity information of the user includes:
acquiring a heart rate value of the user through heart rate detection;
and identifying the identity information of the user according to the heart rate numerical value.
Optionally, the invoking an EQ curve corresponding to the identity information to compensate for the target audio information includes:
judging whether an EQ curve corresponding to the identity information exists or not;
if so, calling an EQ curve corresponding to the identity information to compensate the target audio information;
and if not, generating an EQ curve corresponding to the identity information.
The present invention also provides an apparatus for audio output, the apparatus comprising:
the calling module is used for calling the EQ curve to compensate the target audio information when the audio output instruction is detected; wherein the EQ curve is generated according to the method for generating the EQ curve;
and the output module is used for outputting the compensated target audio information.
Optionally, the invoking module includes:
the identity recognition unit is used for recognizing identity information of the user;
and the calling unit is used for calling the EQ curve corresponding to the identity information to compensate the target audio information.
The invention also provides an output device, comprising a memory and a processor; wherein the memory is configured to store a computer program, and the processor is configured to implement the steps of the method for generating an EQ curve as described above when executing the computer program, and/or to implement the steps of the method for audio output as described above when executing the computer program.
Optionally, the output device is a wireless earphone or a hearing aid.
Optionally, the wireless headset has an identification component.
The present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of generating EQ curves as described above, and/or which, when executed by a processor, carries out the steps of the method of audio output as described above.
The invention provides a method for generating an EQ curve, which comprises: outputting each test audio signal to a test user in sequence, the test audio signal comprising a frequency value, an amplitude value and a segment value; and determining, according to the segment value in the feedback information input by the user for each test audio signal, for each frequency value the test audio signal with the minimum amplitude value among those whose fed-back segment value is consistent with the segment value of the corresponding test audio signal, as the auditory reference line corresponding to that frequency value, and forming an EQ curve according to each auditory reference line.
Because the user must report the segment value in the feedback, the method can accurately and reliably judge whether the user really heard the current test audio signal, and can therefore improve the accuracy of the generated EQ curve according to the accurate hearing detection result, improve the compensation effect on the output audio information based on that EQ curve, and improve the user experience. The invention also provides an apparatus for generating an EQ curve, a method and an apparatus for audio output, an output device and a computer-readable storage medium, which have the same beneficial effects and are not described again here.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of a method for generating an EQ curve according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating another method for generating an EQ curve according to an embodiment of the present invention;
fig. 3 is a block diagram illustrating an apparatus for generating an EQ curve according to an embodiment of the present invention;
FIG. 4 is a flowchart of an audio output method according to an embodiment of the present invention;
fig. 5 is a block diagram of an audio output apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To overcome the problem in the prior art that, because of auditory psychology, hearing detection is inaccurate and the generated EQ curve is therefore inaccurate, the present application performs hearing detection by having the user feed back a segment count, which improves the accuracy of the generated EQ curve. Referring to fig. 1, fig. 1 is a flowchart of a method for generating an EQ curve according to an embodiment of the present invention; the method may include:
s100, outputting each test audio signal to a test user in sequence; the test audio signal includes a frequency value, an amplitude value, and a segment value.
In this embodiment, the test audio signal carries three parameters: a frequency value, an amplitude value, and a segment value. The frequency value and the amplitude value are the defining elements of the test audio signal, and each test audio signal differs from the others in at least one of them. The segment value is a criterion introduced so that it can be judged accurately whether a user (also referred to as a test user) can hear the test audio signal: instead of outputting one continuous test tone as in the prior art, the test audio signal is divided into several short segments for output, and the number of segments is the segment value. This embodiment does not limit the value of the segment value, nor the preset rule used to generate the segment value corresponding to each test audio signal, as long as each test audio signal has a corresponding segment value. For example, the segment value corresponding to each test audio signal may be generated at random, e.g. as a random integer in the range [1, 10] (a construction sketch follows).
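Purely as an illustration (not part of the patent text), a minimal sketch of constructing such test audio signals in Python, assuming sine-tone bursts separated by short silences; the sample rate, burst duration and gap duration are arbitrary choices:

```python
import random
import numpy as np

SAMPLE_RATE = 44100  # Hz; arbitrary choice for this sketch

def make_test_signal(freq_hz, amplitude, seg_count, seg_dur=0.3, gap_dur=0.2):
    """Build one test audio signal: `seg_count` short sine bursts of the
    given frequency and amplitude, separated by silence."""
    t = np.arange(int(seg_dur * SAMPLE_RATE)) / SAMPLE_RATE
    burst = amplitude * np.sin(2 * np.pi * freq_hz * t)
    gap = np.zeros(int(gap_dur * SAMPLE_RATE))
    pieces = []
    for _ in range(seg_count):
        pieces.extend([burst, gap])
    samples = np.concatenate(pieces[:-1])  # drop the trailing gap
    return {"freq": freq_hz, "amp": amplitude,
            "segments": seg_count, "samples": samples}

# One test signal per (frequency, amplitude) pair, each with a randomly
# generated segment value in [1, 10].
freqs = [250, 500, 1000, 2000, 4000]   # Hz
amps = [0.05, 0.1, 0.2, 0.4, 0.8]      # linear amplitude
test_signals = [make_test_signal(f, a, random.randint(1, 10))
                for f in freqs for a in amps]
```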
The output order of the test audio signals is not limited in this embodiment. For example, all the test audio signals may be output to the test user in a random order, or sequentially in order of frequency value: the frequency values are sorted from low to high, the amplitude values under each frequency value are sorted from low to high, and the test audio signals are output to the test user in the resulting sequence. Alternatively, the frequency values are sorted from low to high and, at the current frequency value, the test audio signal with a middle amplitude value is output first, and the next test audio signal to output is determined from the user's feedback on that signal (both orderings are sketched below).
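A minimal sketch of these two ordering options, continuing the hypothetical `test_signals` list from the previous example (illustrative only):

```python
def batch_order(signals):
    """Option 1: sort by frequency (low to high), then amplitude (low to high)."""
    return sorted(signals, key=lambda s: (s["freq"], s["amp"]))

def first_signal_per_frequency(signals):
    """Option 2: for each frequency value, start from the signal whose amplitude
    sits in the middle; later signals at that frequency are chosen according
    to the user's feedback."""
    first = {}
    for freq in sorted({s["freq"] for s in signals}):
        group = sorted((s for s in signals if s["freq"] == freq),
                       key=lambda s: s["amp"])
        first[freq] = group[len(group) // 2]
    return first

ordered = batch_order(test_signals)                    # full presentation order
starters = first_signal_per_frequency(test_signals)    # adaptive starting points
```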
Further, the present embodiment does not limit the trigger timing of step S100. For example, when the terminal (which may be a test device (computer, PC) or a voice output device (e.g., an earphone, a hearing aid, etc.)) receives a test command, each test audio signal is sequentially output to the test user. The present embodiment also does not limit the triggering manner of step S100. For example, the user may press a trigger button to trigger the test instruction, or the user may issue a voice test instruction by voice.
In this embodiment, each test audio signal needs to be output to the test user in sequence, that is, only one test audio signal is sent to the user at a time, and the next test audio signal is output only after the user has given feedback information on the current one. The trigger timing for outputting each test audio signal is not limited in this embodiment: for example, the next test audio signal may be output when the test user sends a signal requesting it, or it may be output automatically after the feedback information is received. Of course, to avoid the situation where the user did not hear the test audio signal completely or clearly because of inattention or other reasons, the test user may choose to listen again, i.e. the terminal re-outputs the current test audio signal.
S101, according to the segment numerical values in the feedback information of the test audio signals input by a user, determining the test audio signal corresponding to the minimum amplitude value in the frequency values, wherein the segment numerical values in the feedback information are consistent with the segment numerical values in the corresponding test audio signals, as the auditory reference line corresponding to each frequency value, and forming an EQ curve according to each auditory reference line.
Specifically, the feedback information is the content the test user gives back after hearing the test audio signal. Its content is not limited in this embodiment, but it at least includes a segment value, i.e. the number of segments of the test audio signal that the test user reports after hearing it; of course, the feedback information may also include other information, such as whether the signal was heard clearly.
Further, the embodiment also does not limit the specific way for the user to input the feedback information, and for example, the feedback information may be input by text (for example, input by using a keyboard, a button, or other devices), or may be input by voice. In order to improve the efficiency and convenience of inputting the feedback information by the user, preferably, the feedback information in this embodiment is voice feedback information, that is, the test user may input the feedback information in a voice input mode.
The main purpose of this step is to determine the auditory reference line corresponding to each frequency value and thereby obtain the EQ curve. For each frequency value, the auditory reference line is the test audio signal with the minimum amplitude value among the test audio signals at that frequency that the user can hear. For example, if the test audio signals at 1 kHz that the user can hear have amplitude values A, B and C, and A is the smallest, then the test audio signal with frequency 1 kHz and amplitude A is the auditory reference line corresponding to 1 kHz.
The basis for judging whether the user can hear a test audio signal is whether the segment value given by the user is consistent with the segment value of the corresponding test audio signal. If they are consistent, the user can hear the test audio signal; if they are inconsistent, the user cannot. When determining the hearing reference line, only the test audio signals the user can hear are considered.
This embodiment does not specifically limit how the test audio signal with the minimum amplitude value at each frequency value, among those whose fed-back segment value is consistent with the segment value of the corresponding test audio signal, is determined and used as the hearing reference line; any procedure that, for each frequency value, finds the audible test audio signal with the smallest amplitude value and uses it as the hearing reference line for that frequency value is sufficient.
For example, in step S100 all the test audio signals to be tested may be output to the test user in sequence; in step S101, the frequency values present among the test audio signals that the user could hear are determined, and for each such frequency value the test audio signal with the minimum amplitude value is selected. This gives the auditory reference line corresponding to each existing frequency value, from which an EQ curve corresponding to the test user can be formed (a selection sketch follows).
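A minimal sketch of this batch selection, assuming the per-signal feedback has been collected as (signal, reported segment count) pairs; the names continue the earlier hypothetical examples and are not taken from the patent:

```python
def auditory_reference_lines(results):
    """results: list of (signal, reported_segments) pairs, where `signal` is a
    dict with keys "freq", "amp", "segments" as built earlier. A signal counts
    as heard only if the reported segment count matches the segment value
    embedded in the signal."""
    heard = [sig for sig, reported in results if reported == sig["segments"]]
    reference = {}
    for sig in heard:
        freq = sig["freq"]
        # Keep the heard signal with the smallest amplitude at each frequency.
        if freq not in reference or sig["amp"] < reference[freq]["amp"]:
            reference[freq] = sig
    return reference  # {frequency value: reference-line signal}

# The EQ curve can then be formed from the per-frequency reference lines,
# e.g. as the set of (frequency, minimum audible amplitude) points.
```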
Alternatively, in step S100 the test audio signals for the various amplitude values of one frequency value (call it frequency x) may be output in sequence, and in step S101 the test audio signal with the minimum amplitude value among those at frequency x that the user could hear is determined, i.e. the auditory reference line corresponding to frequency x. These two steps are then repeated for each of the other frequency values, giving the auditory reference line corresponding to each frequency value and hence an EQ curve corresponding to the test user.
Of course, step S100 may also output the test audio signal for a single amplitude value at a certain frequency value (again, frequency x); step S101 then, according to the recognition result of step S100 (whether the signal was heard, i.e. whether the fed-back segment value is consistent), increases or decreases the amplitude value to form a new test audio signal at that frequency, outputs it to the user, and again judges from the recognition result whether it was heard. Through this amplitude fine-tuning loop, the auditory reference line corresponding to that frequency value can be determined; the same loop is then executed for the other frequency values, giving the auditory reference line corresponding to each frequency value and hence an EQ curve corresponding to the test user.
Of course, in each of the above approaches there may be a frequency value at which no test audio signal is audible to the user; in that case no hearing reference line exists for that frequency value.
The execution body is not limited in this embodiment, and may be, for example, an earphone (which may be a wireless earphone or a wired earphone), a hearing aid, a mobile phone, a PC, or the like.
Based on the above technical solution, the method for generating an EQ curve provided by this embodiment of the present invention can accurately and reliably judge, from the segment value the user provides in the feedback, whether the user really heard the current test audio signal, overcoming the prior-art problem that auditory psychology makes hearing detection, and therefore the generated EQ curve, inaccurate; with an accurate hearing detection result, the accuracy of the generated EQ curve is improved, the compensation effect on the output audio information based on this EQ curve is improved, and the user experience is improved.
Based on the foregoing embodiment, and in order to improve the efficiency of generating the EQ curve, this embodiment implements the test of the auditory reference line for one frequency value (the first frequency value) through the procedure in fig. 2; by testing every frequency value with the method provided in this embodiment, the auditory reference line corresponding to each frequency value is obtained and the EQ curve is formed. The description below takes the first frequency value as an example, that is, the test audio signal is the first test audio signal and its frequency value is the first frequency value. The first frequency value may be any frequency value, and the same loop is executed for each frequency value. Referring specifically to fig. 2, the method may include:
S0, outputting a first test audio signal to a test user;
S1, receiving a segment number value in the feedback information of the first test audio signal input by the user, and judging whether the segment number value is consistent with the segment number value in the first test audio signal; if yes, entering S2, otherwise entering S5;
S2, reducing the amplitude value in the first test audio signal according to a first preset rule, and modifying the corresponding segment value to be used as a second test audio signal to be output to a test user;
S3, receiving a segment number value in the feedback information of the second test audio signal input by the user, and judging whether the segment number value is consistent with the segment number value in the second test audio signal; if so, entering S2 with the second test audio signal as the first test audio signal, otherwise, entering S4;
S4, taking the first test audio signal in step S2 as an auditory reference line corresponding to the first frequency value;
S5, increasing the amplitude value in the first test audio signal according to a second preset rule, and modifying the corresponding segment value to be used as a third test audio signal to be output to a test user;
S6, receiving a segment number value in the feedback information of the third test audio signal input by the user, and judging whether the segment number value is consistent with the segment number value in the third test audio signal; if not, entering S5 with the third test audio signal as the first test audio signal, and if so, entering S7;
S7, taking the third test audio signal as an auditory reference line corresponding to the first frequency value.
Specifically, in step S0, a first test audio signal is first output to the user, where the first test audio signal includes a first frequency value, an amplitude value, and a corresponding segment value. The present embodiment does not limit the specific value of the amplitude value given here, for example, an amplitude value may be generated at random, and then the amplitude value is adaptively adjusted (for example, increased or decreased) according to the test result; or a minimum available amplitude value can be given firstly; it is of course also possible to provide an amplitude value with a size in the middle of all required amplitude values, and then to adapt (e.g. increase or decrease) the amplitude value according to the test result. That is, it can be simply understood that step S0 first provides a first test audio signal, performs a rough test, and then performs fine adjustment of the amplitude value according to the rough test result.
Step S1 tests the first test audio signal at the first frequency value that was output to the user, i.e. it judges whether the user can hear it. Specifically, when the segment value in the received feedback information on the first test audio signal input by the user is consistent with the segment value in the first test audio signal, the user can currently hear the first test audio signal; it must then be determined whether the amplitude value of this first test audio signal is the minimum amplitude value that the user can hear at the first frequency. To do so, the amplitude value of the first test audio signal from step S0 is reduced to form a new test audio signal that is output to the user, which shows whether the amplitude value in step S0 was already the smallest audible one at that frequency. To improve the reliability of the test result, not only the amplitude value but also the segment value is modified, so that the user is not influenced by having just heard the test audio signal in S0. Therefore, the amplitude value of the first test audio signal is reduced as in step S2 to form the second test audio signal, and the amplitude is fine-tuned in this way: steps S2 to S4 are performed in a loop to find the minimum amplitude value at the first frequency value that the user can hear.
When the segment value in the feedback information of the first test audio signal input by the user is not consistent with the segment value in the first test audio signal, the user currently cannot hear the first test audio signal; the amplitude value of this first test audio signal may therefore be too small, and the amplitude value from step S0 is increased to form a new test audio signal that is output to the user. Again, to improve the reliability of the test result, not only the amplitude value but also the segment value is modified, so that the user is not influenced by the previous test audio signal in S0. Therefore, the amplitude value of the first test audio signal is increased as in step S5 to form the third test audio signal, and steps S5 to S7 are performed in a loop to find the minimum amplitude value at the first frequency value that the user can hear.
The specific values used by the first preset rule and the second preset rule are not limited in this embodiment; they may be set according to how the available amplitude values are actually spaced. When the amplitude values are evenly spaced, the first preset rule and the second preset rule may use the same step size (the loop is sketched below).
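A minimal sketch of the S0-S7 loop for one frequency value, assuming the two preset rules simply step through a sorted list of amplitude values, and using placeholder callables for playing a signal and collecting the user's reported segment count (illustrative names only, not from the patent):

```python
import random

def find_reference_line(freq_hz, start_amp, amps, play, ask_segments):
    """Amplitude fine-tuning loop (steps S0-S7) for a single frequency.

    freq_hz      -- the first frequency value under test
    start_amp    -- amplitude of the first test audio signal (S0); must be in amps
    amps         -- sorted list of available amplitude values
    play         -- callable(freq, amp, segments): outputs a test signal
    ask_segments -- callable(): returns the segment count the user reports
    Returns the (frequency, amplitude) pair acting as the auditory reference
    line, or None if even the largest amplitude is inaudible.
    """
    idx = amps.index(start_amp)
    segments = random.randint(1, 10)
    play(freq_hz, amps[idx], segments)                  # S0
    heard = (ask_segments() == segments)                # S1

    if heard:                                           # S2-S4: step the amplitude down
        while idx > 0:
            idx -= 1
            segments = random.randint(1, 10)            # modify the segment value too
            play(freq_hz, amps[idx], segments)
            if ask_segments() != segments:              # no longer heard
                return (freq_hz, amps[idx + 1])         # S4: last audible signal
        return (freq_hz, amps[0])                       # still audible at the minimum
    else:                                               # S5-S7: step the amplitude up
        while idx < len(amps) - 1:
            idx += 1
            segments = random.randint(1, 10)
            play(freq_hz, amps[idx], segments)
            if ask_segments() == segments:              # heard for the first time
                return (freq_hz, amps[idx])             # S7
        return None                                     # never heard at this frequency
```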
Based on the above technical solution, the method for generating an EQ curve provided by this embodiment of the present invention can accurately and reliably judge, from the segment value the user provides in the feedback, whether the user really heard the current test audio signal, overcoming the prior-art problem that auditory psychology makes hearing detection, and therefore the generated EQ curve, inaccurate; with an accurate hearing detection result, the accuracy of the generated EQ curve is improved, the compensation effect on the output audio information based on this EQ curve is improved, and the user experience is improved. In addition, the rough measurement followed by fine tuning speeds up the generation of the EQ curve.
The following describes the apparatus for generating an EQ curve, the method and apparatus for audio output, the output device, and the computer-readable storage medium provided by embodiments of the present invention; they may be referred to in correspondence with the method for generating an EQ curve described above.
Referring to fig. 3, fig. 3 is a block diagram illustrating an apparatus for generating an EQ curve according to an embodiment of the present invention; the apparatus may include:
the test audio output module 101 is configured to output each test audio signal to a test user in sequence; wherein, the test audio signal comprises a frequency value, an amplitude value and a segment numerical value;
the EQ curve generating module 102 is configured to determine, according to the segment value in the feedback information input by the user for each test audio signal, for each frequency value the test audio signal with the minimum amplitude value among those whose fed-back segment value is consistent with the segment value of the corresponding test audio signal, as the auditory reference line corresponding to that frequency value, and to form an EQ curve according to each auditory reference line.
In particular, the feedback information may be voice feedback information.
It should be noted that, based on any of the above embodiments, the apparatus may be implemented based on a programmable logic device, where the programmable logic device includes an FPGA, a CPLD, a single-chip microcomputer, and the like.
This embodiment does not limit the execution subject of the method; it may be, for example, a hearing aid, an earphone (wireless or wired), or a device such as a PC. Referring to fig. 4, fig. 4 is a flowchart of an audio output method according to an embodiment of the invention; the method may include:
s400, when an audio output instruction is detected, calling an EQ curve to compensate the target audio information; wherein the EQ curve is generated according to the method of generating an EQ curve of any of the embodiments described above;
and S401, outputting the compensated target audio information.
Specifically, this embodiment does not limit the form of the audio output instruction, which may be, for example, a high-level signal. With this method, when target audio information needs to be output for the user, the corresponding EQ curve can be called for compensation, so that the user hears clearer and more comfortable audio, improving the user experience (a compensation sketch follows).
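As an illustration only, a minimal sketch of frequency-domain compensation, assuming the EQ curve is stored as (frequency, gain) points and the gains are interpolated across the signal's spectrum; the FFT-based approach and the example gain values are assumptions, not the patent's prescribed implementation:

```python
import numpy as np

def apply_eq(samples, sample_rate, eq_points):
    """Apply an EQ curve to target audio.

    samples     -- 1-D numpy array of audio samples
    sample_rate -- sampling rate in Hz
    eq_points   -- list of (frequency_hz, gain) pairs derived from the
                   auditory reference lines
    """
    freqs, gains = zip(*sorted(eq_points))
    spectrum = np.fft.rfft(samples)
    bin_freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # Interpolate the per-frequency gains over all FFT bins and apply them.
    bin_gains = np.interp(bin_freqs, freqs, gains)
    return np.fft.irfft(spectrum * bin_gains, n=len(samples))

# Example: boost 4 kHz for a user whose 4 kHz reference line requires a
# louder tone to be heard than the other frequencies.
eq_curve = [(250, 1.0), (1000, 1.0), (4000, 1.8)]
```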
Further, one device may be used by multiple users; for example, a headset may be shared by two people or by everyone in a household. In this case, because the users differ in age, sex, ear structure and so on, their hearing curves, and therefore the EQ curves formed for them, may differ. To improve the reliability and accuracy of hearing compensation, the device therefore needs to be able to identify the user automatically and then configure the corresponding EQ curve. Based on the foregoing embodiment, in this embodiment, preferably, invoking the EQ curve to compensate the target audio information may include:
and identifying the identity information of the user, and calling an EQ curve corresponding to the identity information to compensate the target audio information.
Specifically, this embodiment does not limit the manner of identifying the user's identity information, as long as the current user can be identified: for example, by fingerprint information, image information, heart rate, a voice control command, or a voiceprint. The identification method can be chosen according to the specific conditions of the device (such as its size and hardware computing capacity) and the required identification accuracy; the corresponding parameters are then collected, identification is performed, and finally the EQ curve corresponding to the identity information is called according to the identification result to compensate the target audio information. For example, if fingerprint identification is selected, the user's fingerprint information is collected and the user's identity is judged from it; or a heart rate sensor collects the user's heart rate values and the identity is judged from them. Preferably, when the device is a hearing aid or an earphone (such as a wireless or in-ear earphone), whose size is relatively small, the heart rate sensor can be used to determine the user's identity: a heart rate value of the user is obtained through heart rate detection, and the user's identity information is identified from that heart rate value (see the sketch below).
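A minimal sketch of heart-rate-based identification, assuming each enrolled user has a stored resting heart-rate profile and that the nearest profile within a tolerance is selected; the profile structure and tolerance are assumptions for illustration, not details from the patent:

```python
def identify_user(measured_bpm, profiles, tolerance=5.0):
    """Return the user id whose stored resting heart rate is closest to the
    measured value, or None if no profile is within the tolerance (in bpm)."""
    best_user, best_diff = None, tolerance
    for user_id, resting_bpm in profiles.items():
        diff = abs(measured_bpm - resting_bpm)
        if diff <= best_diff:
            best_user, best_diff = user_id, diff
    return best_user

profiles = {"alice": 62.0, "bob": 78.0}    # hypothetical enrolled users
user = identify_user(74.0, profiles)        # -> "bob" (within 5 bpm of 78)
```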
Specifically, this embodiment does not limit the position of the heart rate sensor; it may be added anywhere on the device. However, to obtain a more accurate heart rate value when the device is an earphone, the heart rate sensor may be arranged in the front cavity of the earphone to measure the user's pulse parameters and heart rate value. The circuit board in the earphone then processes the collected heart rate value and judges the user's identity.
Based on the above embodiment, invoking the EQ curve corresponding to the identity information to compensate for the target audio information may include:
judging whether an EQ curve corresponding to the identity information exists or not;
if so, calling an EQ curve corresponding to the identity information to compensate the target audio information;
and if not, generating an EQ curve corresponding to the identity information.
Specifically, the main purpose of this embodiment is to judge whether an EQ curve corresponding to the identity information exists. If it exists, the user has already performed a hearing test and there is a corresponding EQ curve. If it does not exist, no EQ curve related to this user is stored in the device, and an EQ curve corresponding to the user's identity must be generated. This embodiment does not limit which entity generates the EQ curve corresponding to the identity information: the device itself may perform the hearing test process (i.e. the EQ curve generation process of any of the embodiments above), or the device may issue a test instruction to another device, which completes the hearing test, forms the corresponding EQ curve and returns it. For example, when the device is a wireless headset, the headset itself may perform the EQ curve generation steps, or it may send a test instruction to a mobile phone or other terminal connected to it; the terminal then performs the EQ curve generation steps and sends the generated EQ curve to the headset. A sketch of the lookup-or-generate flow follows.
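A minimal sketch of that lookup-or-generate flow, with the EQ-curve store kept as a dictionary keyed by user identity; `run_hearing_test` stands in for the EQ-curve generation procedure above and is a hypothetical name:

```python
def eq_curve_for(user_id, eq_store, run_hearing_test):
    """Return the EQ curve for `user_id`, generating and caching it via
    `run_hearing_test` (a callable returning an EQ curve) if absent."""
    curve = eq_store.get(user_id)
    if curve is None:
        # No stored curve: prompt for / run a hearing test for this user.
        curve = run_hearing_test(user_id)
        eq_store[user_id] = curve
    return curve
```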
Before generating the EQ curve corresponding to the identity information, this embodiment may further output a prompt message asking the user whether the EQ curve generation process, i.e. a hearing test, is required. After an instruction confirming that EQ curve generation should be executed is received from the user, the step of generating the EQ curve corresponding to the identity information is executed. The specific way in which the user inputs this confirmation, such as voice input, is not limited in this embodiment.
The following takes the earphone as an example to illustrate the above specific process:
when the user uses the earphone for the first time, the hearing curve of the user can be obtained through testing according to the prompt operation of the earphone through a certain using process, and a corresponding EQ curve is generated through the hearing curve. When the user wears the earphone, the earphone detects that the earphone enters a wearing state, the user identity is identified through the heart rate sensor according to the heart rate curve, and a corresponding EQ curve is configured. When another user uses the product, the product can automatically recognize that the product is worn and selects the EQ curve. If the EQ curve corresponding to the user is not stored in the earphone, the earphone automatically issues a voice reminding command to remind the user to start hearing test, and the EQ curve is formed and stored after the hearing test is finished.
Referring to fig. 5, fig. 5 is a block diagram of an audio output apparatus according to an embodiment of the present invention; the apparatus may include:
the calling module 201 is configured to call an EQ curve to compensate for target audio information when an audio output instruction is detected; wherein the EQ curve is generated according to the method of generating an EQ curve of any of the embodiments described above;
and an output module 202, configured to output the compensated target audio information.
Based on the above embodiments, the invoking module 201 may include:
the identity recognition unit is used for recognizing identity information of the user;
and the calling unit is used for calling the EQ curve corresponding to the identity information to compensate the target audio information.
It should be noted that, based on any of the above embodiments, the apparatus may be implemented based on a programmable logic device, where the programmable logic device includes an FPGA, a CPLD, a single-chip microcomputer, and the like.
The embodiment of the invention also provides an output device, comprising a memory and a processor; wherein the memory is configured to store a computer program, and the processor is configured, when executing the computer program, to implement the steps of the method for generating an EQ curve according to any of the embodiments described above, and/or the steps of the method for audio output according to any of the embodiments described above.
When executing the computer program, the processor is configured to output each test audio signal to a test user in sequence, the test audio signal comprising a frequency value, an amplitude value and a segment value; and to determine, according to the segment value in the feedback information input by the user for each test audio signal, for each frequency value the test audio signal with the minimum amplitude value among those whose fed-back segment value is consistent with the segment value of the corresponding test audio signal, as the auditory reference line corresponding to that frequency value, and to form an EQ curve according to each auditory reference line. And/or, when executing the computer program, the processor is configured to call the EQ curve to compensate the target audio information when an audio output instruction is detected, the EQ curve being generated according to the method for generating the EQ curve described above, and to output the compensated target audio information.
Based on the above embodiments, the output device is a wireless earphone or a hearing aid.
Based on the above embodiments, the wireless headset has an identification component. The identification component can be a heart rate sensor or a voiceprint collector.
The present invention also provides a computer readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of generating EQ curves as described in any of the embodiments above, and/or which, when being executed by a processor, carries out the steps of the method of audio output as described in any of the embodiments above.
When executed by the processor, the computer program implements outputting each test audio signal to a test user in sequence, the test audio signal comprising a frequency value, an amplitude value and a segment value; and determining, according to the segment value in the feedback information input by the user for each test audio signal, for each frequency value the test audio signal with the minimum amplitude value among those whose fed-back segment value is consistent with the segment value of the corresponding test audio signal, as the auditory reference line corresponding to that frequency value, and forming an EQ curve according to each auditory reference line. And/or, when executed by the processor, the computer program implements calling the EQ curve to compensate the target audio information when an audio output instruction is detected, the EQ curve being generated according to the method for generating the EQ curve described above, and outputting the compensated target audio information.
The computer-readable storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The method and apparatus for generating EQ curves, the method and apparatus for audio output, the output device and the computer readable storage medium provided by the present invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
Claims (14)
1. A method of generating an EQ curve, the method comprising:
outputting each test audio signal to a test user in sequence; wherein the test audio signal comprises a frequency value, an amplitude value and a segment value; generating a segment number value corresponding to each test audio signal according to a preset rule;
and determining, according to the segment value in the feedback information input by the user for each test audio signal, for each frequency value the test audio signal with the minimum amplitude value among those whose fed-back segment value is consistent with the segment value of the corresponding test audio signal, as the auditory reference line corresponding to that frequency value, and forming an EQ curve according to each auditory reference line.
2. The method of claim 1, wherein the feedback information is voice feedback information.
3. The method as claimed in claim 1 or 2, wherein when the test audio signal is a first test audio signal, and the frequency value of the first test audio signal is a first frequency value, determining, according to a segment value in feedback information input by a user for each test audio signal, a test audio signal corresponding to a minimum amplitude value in each frequency value, where the segment value in each feedback information is consistent with the segment value in the corresponding test audio signal, as an auditory baseline corresponding to each frequency value, comprises:
S1, receiving a segment number value in the feedback information of the first test audio signal input by the user, and judging whether the segment number value is consistent with the segment number value in the first test audio signal; if yes, entering S2, otherwise entering S5;
S2, reducing the amplitude value in the first test audio signal according to a first preset rule, and modifying the corresponding segment value to be used as a second test audio signal to be output to a test user;
S3, receiving a segment number value in the feedback information of the second test audio signal input by the user, and judging whether the segment number value is consistent with the segment number value in the second test audio signal; if so, entering S2 with the second test audio signal as the first test audio signal, otherwise, entering S4;
S4, taking the first test audio signal in step S2 as an auditory reference line corresponding to the first frequency value;
S5, increasing the amplitude value in the first test audio signal according to a second preset rule, and modifying the corresponding segment value to be used as a third test audio signal to be output to a test user;
S6, receiving a segment number value in the feedback information of the third test audio signal input by the user, and judging whether the segment number value is consistent with the segment number value in the third test audio signal; if not, entering S5 with the third test audio signal as the first test audio signal, and if so, entering S7;
S7, taking the third test audio signal as an auditory reference line corresponding to the first frequency value.
4. An apparatus for generating an EQ curve, comprising:
the test audio output module is used for outputting each test audio signal to a test user in sequence; wherein the test audio signal comprises a frequency value, an amplitude value and a segment value; generating a segment number value corresponding to each test audio signal according to a preset rule;
and the EQ curve generation module is used for determining, according to the segment value in the feedback information input by the user for each test audio signal, for each frequency value the test audio signal with the minimum amplitude value among those whose fed-back segment value is consistent with the segment value of the corresponding test audio signal, as the auditory reference line corresponding to that frequency value, and forming an EQ curve according to each auditory reference line.
5. A method of audio output, the method comprising:
when an audio output instruction is detected, calling an EQ curve to compensate the target audio information; wherein the EQ curve is generated by the method of generating EQ curve of any of claims 1-3;
and outputting the compensated target audio information.
6. The method of claim 5, wherein invoking the EQ curve to compensate for target audio information comprises:
and identifying the identity information of the user, and calling an EQ curve corresponding to the identity information to compensate the target audio information.
7. The method of claim 6, wherein the identifying the identity information of the user comprises:
acquiring a heart rate value of the user through heart rate detection;
and identifying the identity information of the user according to the heart rate value.
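Claim 7 leaves open how a heart rate value maps to an identity. One very rough reading is a nearest-match lookup against enrolled users, sketched below; the tolerance value, the `profiles` dictionary, and the whole matching rule are hypothetical and not taken from the patent.

```python
def identify_user_by_heart_rate(measured_bpm, profiles, tolerance_bpm=3.0):
    """Return the enrolled user whose stored heart rate is closest to the
    measured value, or None if no enrolled value is within the tolerance."""
    best_user, best_diff = None, tolerance_bpm
    for user_id, enrolled_bpm in profiles.items():
        diff = abs(measured_bpm - enrolled_bpm)
        if diff <= best_diff:
            best_user, best_diff = user_id, diff
    return best_user

# example with two enrolled users and their stored heart-rate values
print(identify_user_by_heart_rate(66.0, {"alice": 65.0, "bob": 78.0}))  # -> "alice"
```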
8. The method of claim 6, wherein invoking the EQ curve corresponding to the identity information to compensate for target audio information comprises:
judging whether an EQ curve corresponding to the identity information exists or not;
if so, calling an EQ curve corresponding to the identity information to compensate the target audio information;
and if not, generating an EQ curve corresponding to the identity information.
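The look-up-or-generate logic of claim 8 can be summarized in a few lines. The in-memory `eq_curve_store`, the `run_hearing_test` callback standing in for the claim 1-3 procedure, and the reuse of the `build_eq_curve` sketch shown earlier are all illustrative assumptions.

```python
eq_curve_store = {}   # assumed in-memory cache: identity -> (frequencies, gains)

def get_or_create_eq_curve(identity, run_hearing_test):
    """Reuse a stored EQ curve for a known identity; otherwise run the hearing
    test, generate a new curve, store it, and return it."""
    if identity in eq_curve_store:                  # an EQ curve already exists
        return eq_curve_store[identity]
    reference_lines = run_hearing_test()            # per-frequency auditory reference lines
    curve = build_eq_curve(reference_lines)         # from the earlier sketch
    eq_curve_store[identity] = curve
    return curve
```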
9. An apparatus for audio output, the apparatus comprising:
the calling module is used for calling the EQ curve to compensate the target audio information when an audio output instruction is detected; wherein the EQ curve is generated by the method for generating an EQ curve according to any one of claims 1 to 3;
and the output module is used for outputting the compensated target audio information.
10. The apparatus of claim 9, wherein the means for invoking comprises:
the identity recognition unit is used for recognizing identity information of the user;
and the calling unit is used for calling the EQ curve corresponding to the identity information to compensate the target audio information.
11. An output device, comprising a memory and a processor; wherein the memory is configured to store a computer program, and the processor is configured, when executing the computer program, to implement the steps of the method for generating an EQ curve according to any one of claims 1 to 3 and/or the steps of the method of audio output according to any one of claims 5 to 8.
12. The output device of claim 11, wherein the output device is a wireless earphone or a hearing aid.
13. The output device of claim 12, wherein the wireless headset has an identification component.
14. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method for generating an EQ curve according to any one of claims 1 to 3 and/or the steps of the method of audio output according to any one of claims 5 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810902267.8A CN109065064B (en) | 2018-08-09 | 2018-08-09 | Method for generating EQ curve, method for outputting audio and output equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109065064A CN109065064A (en) | 2018-12-21 |
CN109065064B true CN109065064B (en) | 2020-10-20 |
Family ID: 64678879
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810902267.8A (Active, CN109065064B) | Method for generating EQ curve, method for outputting audio and output equipment | 2018-08-09 | 2018-08-09 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109065064B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109688503B (en) * | 2019-01-15 | 2020-11-06 | 浙江强脑科技有限公司 | Psychological perception state detection system and method |
CN109982231B (en) * | 2019-04-04 | 2021-04-30 | 腾讯音乐娱乐科技(深圳)有限公司 | Information processing method, device and storage medium |
CN112905833A (en) * | 2021-01-19 | 2021-06-04 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio playback equipment preheating method, device, equipment and medium |
CN113015059B (en) * | 2021-02-23 | 2022-10-18 | 歌尔科技有限公司 | Audio optimization method, device, equipment and readable storage medium |
CN115831143A (en) * | 2022-11-21 | 2023-03-21 | 深圳前海沃尔科技有限公司 | Auditory enhancement method, system, readable storage medium and computer device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101140795A (en) * | 2007-07-09 | 2008-03-12 | 应义财 | Digital audio player having two-channel equalizer |
CN105118519A (en) * | 2015-07-10 | 2015-12-02 | 中山大学孙逸仙纪念医院 | Hearing evaluation system |
CN107615651A (en) * | 2015-03-20 | 2018-01-19 | 因诺沃Ip有限责任公司 | System and method for improved audio perception |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3989904A (en) * | 1974-12-30 | 1976-11-02 | John L. Holmes | Method and apparatus for setting an aural prosthesis to provide specific auditory deficiency corrections |
KR100636213B1 (en) * | 2004-12-28 | 2006-10-19 | 삼성전자주식회사 | Method for compensating audio frequency characteristic in real-time and sound system thereof |
KR101456570B1 (en) * | 2007-12-21 | 2014-10-31 | 엘지전자 주식회사 | Mobile terminal having digital equalizer and controlling method using the same |
JP6510487B2 (en) * | 2013-03-26 | 2019-05-08 | バラット, ラックラン, ポールBARRATT, Lachlan, Paul | Voice filter using sine function |
CN105262887B (en) * | 2015-09-07 | 2020-05-05 | 惠州Tcl移动通信有限公司 | Mobile terminal and audio setting method thereof |
CN107592636A (en) * | 2017-08-17 | 2018-01-16 | 深圳市诚壹科技有限公司 | A kind of method of processing information, terminal and server |
CN108040171A (en) * | 2017-11-30 | 2018-05-15 | 北京小米移动软件有限公司 | Voice operating method, apparatus and computer-readable recording medium |
Similar Documents
Publication | Title
---|---|
CN109065064B (en) | Method for generating EQ curve, method for outputting audio and output equipment
KR101810806B1 (en) | Controlling a speech recognition process of a computing device
CN102265335B (en) | Hearing aid adjustment device and method
CN104735579B (en) | Headset system and method for adapting a headset system to a user
EP2720224B1 (en) | Voice Converting Apparatus and Method for Converting User Voice Thereof
CN112272346B (en) | In-ear detection method, earphone and computer readable storage medium
CN108766468B (en) | Intelligent voice detection method, wireless earphone, TWS earphone and terminal
US20220272465A1 (en) | Hearing device comprising a stress evaluator
US20120072213A1 (en) | Speech sound intelligibility assessment system, and method and program therefor
CN110234044A (en) | Voice wake-up method, voice wake-up device and earphone
JP6294747B2 (en) | Notification sound sensing device, notification sound sensing method and program
CN111065032A (en) | Method for operating a hearing instrument and hearing system comprising a hearing instrument
CN107547978B (en) | Microphone control method and microphone
EP3823306B1 (en) | A hearing system comprising a hearing instrument and a method for operating the hearing instrument
CN113767431A (en) | Speech detection
CN105704619A (en) | Sound volume adjusting method and device
EP3879853A1 (en) | Adjusting a hearing device based on a stress level of a user
US9294848B2 (en) | Adaptation of a classification of an audio signal in a hearing aid
US20130202124A1 (en) | Method for testing hearing aids
US11457320B2 (en) | Selectively collecting and storing sensor data of a hearing system
EP3232906B1 (en) | Hearing test system
CN114830692A (en) | System comprising a computer program, a hearing device and a stress-assessing device
JP2011221101A (en) | Communication device
JPS59225441A (en) | Voice input device
EP4429273A1 (en) | Automatically informing a user about a current hearing benefit with a hearing device
Legal Events
Code | Title
---|---|
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant