CN116567511A - Audio processing method and system based on big data - Google Patents

Audio processing method and system based on big data

Info

Publication number
CN116567511A
Authority
CN
China
Prior art keywords
target
audiogram
determining
hearing
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310530335.3A
Other languages
Chinese (zh)
Inventor
翟兴
周小龙
库韶坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tianxingcheng Technology Co ltd
Original Assignee
Shenzhen Tianxingcheng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tianxingcheng Technology Co ltd filed Critical Shenzhen Tianxingcheng Technology Co ltd
Priority to CN202310530335.3A priority Critical patent/CN116567511A/en
Publication of CN116567511A publication Critical patent/CN116567511A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R29/00: Monitoring arrangements; Testing arrangements
    • H04R3/00: Circuits for transducers, loudspeakers or microphones
    • H04R2430/00: Signal processing covered by H04R, not provided for in its groups
    • H04R2430/03: Synergistic effects of band splitting and sub-band processing

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Tone Control, Compression And Expansion, Limiting Amplitude (AREA)

Abstract

The embodiment of the application discloses an audio processing method and system based on big data. The method comprises the following steps: performing a pure-tone hearing test on a specified frequency band according to a first specified standard to obtain a first audiogram; measuring the specified frequency band according to a second specified standard to obtain an equal-loudness curve; acquiring signals of the specified frequency band in the environment with a preset microphone to obtain a target environment signal; and compensating through a target function relationship and the equal-loudness curve, where the target function relationship is a function relationship determined in advance from the first audiogram and the target environment signal. By adopting the embodiment of the application, the hearing compensation effect can be improved.

Description

Audio processing method and system based on big data
Technical Field
The application relates to the technical fields of big data and audio processing, and in particular to an audio processing method and system based on big data.
Background
With the widespread adoption of terminal devices (such as mobile phones and tablet computers), the applications these devices support have multiplied and their functions have grown more powerful. Terminal devices are developing toward diversification and personalization and have become indispensable electronic products in users' lives.
At present, hearing tests are essential for realizing hearing compensation, but current hearing tests are coarse. How to refine the details of the hearing test so as to improve the hearing compensation effect is therefore a problem to be solved.
Disclosure of Invention
The embodiment of the application provides an audio processing method and system based on big data, which can perfect hearing test details and improve hearing compensation effect.
In a first aspect, an embodiment of the present application provides an audio processing method based on big data, including:
performing a pure-tone hearing test on a specified frequency band according to a first specified standard to obtain a first audiogram;
measuring the specified frequency band according to a second specified standard to obtain an equal-loudness curve;
acquiring signals of the specified frequency band in the environment by using a preset microphone to obtain a target environment signal;
and compensating through an objective function relationship, wherein the objective function relationship is a function relationship determined in advance according to the first audiogram and the target environment signal.
In a second aspect, embodiments of the present application provide an audio processing system based on big data, the system including: a test unit, a measurement unit, an acquisition unit, and a compensation unit, wherein,
the test unit is used for conducting pure-tone hearing test on the specified frequency band based on a first specified standard to obtain a first audiogram;
the measuring unit is used for measuring the specified frequency band based on a second specified standard to obtain an equal-loudness curve;
the acquisition unit is used for acquiring the signals of the specified frequency band in the environment through a preset microphone to obtain target environment signals;
the compensation unit is used for compensating through an objective function relation, wherein the objective function relation is a function relation determined in advance according to the first audiogram and the target environment signal.
By implementing the embodiment of the application, the following beneficial effects are achieved:
it can be seen that, according to the audio processing method and system based on big data described in the embodiments of the present application, pure-tone hearing test is performed on a specified frequency band according to a first specified standard, a first audiogram is obtained, measurement is performed on the specified frequency band according to a second specified standard, an equal-loudness curve is obtained, a signal of the specified frequency band in the environment is acquired by using a preset microphone, a target environment signal is obtained, compensation is performed through a target function relationship and the equal-loudness curve, and the target function relationship is a function relationship determined in advance according to the first audiogram and the target environment signal, so that targeted compensation can be achieved, a hearing compensation effect is improved, and performance of terminal equipment is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an audio processing method based on big data according to an embodiment of the present application;
FIG. 2 is a flow chart of another audio processing method based on big data according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 4 is a functional unit block diagram of an audio processing system based on big data according to an embodiment of the present application.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the embodiment of the present application, the terminal device may include at least one of the following: smart phones, tablet computers, hearing aids, wireless headsets, etc., are not limited herein.
In the embodiment of the present application, an equal-loudness curve refers to a family of curves, obtained by subjective measurement, that connect sounds perceived as equally loud (i.e., having the same loudness level). When the loudness of a given sound matches the loudness of a standard tone, the intensity level of that standard tone is taken as the loudness level of the sound. With loudness and loudness level so defined, sounds perceived as equally loud are measured experimentally, and the resulting family of plotted curves is called the equal-loudness curves: each curve marks sounds of the same loudness, i.e., sounds at a given loudness level.
The embodiments of the present application are described in detail below.
Referring to fig. 1, fig. 1 is a flow chart of an audio processing method based on big data provided in an embodiment of the present application, as shown in the drawing, applied to a terminal device, the audio processing method based on big data includes:
s101, performing pure-tone hearing test on the specified frequency band based on a first specified standard to obtain a first audiogram.
The first specified standard may be set by the user or defaulted by the system; for example, the first specified standard may be ISO 8253:2010. The specified frequency band may be defaulted by the system. In a specific implementation, the terminal device may perform a pure-tone hearing test on the selected key frequency band (i.e., the specified frequency band) based on ISO 8253:2010 to obtain a hearing evaluation result, i.e., a first audiogram. The first audiogram is a plot of hearing threshold as a function of frequency.
In a specific implementation, the terminal device may perform the pure-tone hearing test on one or more users in the specified frequency band based on the first specified standard to obtain a first audiogram. When one user is involved, the first audiogram is that user's audiogram; when multiple users are involved, their individual audiograms can be fitted together to obtain the first audiogram, i.e., the first audiogram can be obtained in a big-data manner.
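The multi-user ("big data") fitting described above is not detailed in the text. A minimal sketch, assuming it amounts to per-frequency averaging of individual audiograms (the frequencies and threshold values below are illustrative, not from the patent):

```python
def fit_group_audiogram(audiograms):
    """Fit one "first audiogram" from several users' pure-tone results.

    audiograms: list of dicts mapping test frequency (Hz) -> hearing
    threshold (dB HL). Returns a dict with the mean threshold at each
    frequency measured for every user.
    """
    freqs = set(audiograms[0])
    for a in audiograms[1:]:
        freqs &= set(a)  # keep only frequencies present for every user
    return {f: sum(a[f] for a in audiograms) / len(audiograms)
            for f in sorted(freqs)}

# Illustrative audiograms for two users
user_a = {250: 10, 500: 15, 1000: 20, 2000: 30}
user_b = {250: 20, 500: 25, 1000: 30, 2000: 40}
first_audiogram = fit_group_audiogram([user_a, user_b])
```

The same averaging pattern would apply to fitting the equal-loudness curves of multiple users in step S102.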
S102, measuring the specified frequency band based on a second specified standard to obtain an equal-loudness curve.
The second specified standard may be defaulted by the system; in the embodiment of the present application, the first specified standard differs from the second specified standard. For example, the second specified standard may be ISO 16803:2006.
In a specific implementation, the measurement result behind the equal-loudness curve is the user's subjective rating of the loudness of pure tones at different sound pressure levels within the key frequency band (for example, frequency band: 1000 Hz, sound pressure level: 30 dB HL, subjective rating: very soft). In addition, the frequency-domain resolution measurement records, for the key frequency band (the specified frequency band), how the equal-loudness masked hearing threshold of a band-stop filter centered on the key frequency varies with the filter's off-center distance (for example, frequency band: 1000 Hz, off-center distance: 10%, equal-loudness masked hearing threshold: 40 dB HL); the time-domain resolution measurement records the perception threshold for gaps in narrow-band noise within the key frequency band (for example, frequency band: 1000 Hz, gap perception threshold: 10 ms).
In a specific implementation, the one or more users are measured in the specified frequency band based on the second specified standard to obtain an equal-loudness curve. When one user is involved, the resulting curve is that user's equal-loudness curve; when multiple users are involved, their individual curves can be fitted together to obtain the equal-loudness curve, i.e., the curve is obtained in a big-data manner.
S103, acquiring signals of the specified frequency band in the environment through a preset microphone to obtain a target environment signal.
In a specific implementation, the terminal device may also acquire signals (such as voice, wind sound, and the like) of a specified frequency band in the environment through the calibrated microphone, so as to obtain a target environment signal. The preset microphones may include calibrated microphones.
Specifically, the terminal device may record the current environmental signal (during audiometry) through the calibrated microphone, perform time-frequency-domain analysis, and correct the audiometry results (including the results of step S101 and step S102) using the spectral intensity of the current ambient signal; the correction method may be obtained by fitting existing statistical observations.
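A minimal sketch of the time-frequency analysis just described: estimating the ambient level in one test band from a recorded frame with a single-bin DFT (Goertzel-style). The sample rate and test tone are illustrative, and the patent's statistical correction fit is not reproduced:

```python
import math

def band_level_db(samples, fs, f0):
    """Estimate the level (dB, relative to full scale) of the frequency
    component f0 in a recorded frame via a direct single-bin DFT."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * f0 * i / fs)
             for i, s in enumerate(samples))
    im = -sum(s * math.sin(2 * math.pi * f0 * i / fs)
              for i, s in enumerate(samples))
    amp = math.hypot(re, im) * 2 / n  # amplitude of that component
    return 20 * math.log10(max(amp, 1e-12))

# Illustrative "ambient recording": a 1 kHz tone at amplitude 0.5,
# sampled at 8 kHz for 100 ms
fs = 8000
tone = [0.5 * math.sin(2 * math.pi * 1000 * i / fs) for i in range(fs // 10)]
level = band_level_db(tone, fs, 1000.0)   # close to 20*log10(0.5)
```

The per-band levels obtained this way would then feed whatever correction the fitted statistics prescribe for the results of S101 and S102.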
And S104, compensating through an objective function relation and the equal-loudness curve, wherein the objective function relation is a function relation determined in advance according to the first audiogram and the target environment signal.
In a specific implementation, the terminal device can compensate through an objective function relationship and an equal-loudness curve, so that the corresponding compensation effect is improved, and the objective function relationship is a function relationship determined in advance according to the first audiogram and the target environment signal.
Optionally, the method further comprises the following steps:
a1, obtaining a target hearing evaluation result;
a2, executing the step of compensating through the objective function relation when the objective hearing evaluation result is in a first preset range;
a3, when the target hearing evaluation result is in a second preset range, determining a target classification algorithm corresponding to the target hearing evaluation result, and performing compensation operation based on the target classification algorithm, wherein no intersection exists between the first preset range and the second preset range.
In a specific implementation, the target hearing evaluation result may be obtained based on the first audiogram, and different audiograms may correspond to different evaluation results, or the evaluation results obtained by the evaluation may be performed again. The first preset range may be an empirical value, for example, which may be set by the user or default by the system. When the target hearing evaluation result is in the first preset range, the terminal equipment can compensate through the target function relationship, so that the corresponding compensation effect is improved, and the target function relationship is a function relationship determined in advance according to the first audiogram and the target environment signal.
In addition, the second preset range may be an empirical value, which may be set by the user or defaulted by the system, and has no intersection with the first preset range. When the target hearing evaluation result is in the second preset range, the terminal device may determine the target classification algorithm corresponding to the result and then perform the compensation operation based on that algorithm; further, according to the hearing evaluation result and the classification algorithm, a fitting scheme can be selected from N fitting schemes that have been sufficiently tested. In this way, the objective function can be used when fine compensation is required, and the classification algorithm can be used when it is not (as in rapid fitting and similar cases).
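The two-branch dispatch of steps A2/A3 can be sketched as follows, modelling the hearing evaluation result as a numeric score (an assumption; the range bounds are illustrative, not patent values):

```python
# Disjoint preset ranges: fine compensation vs. quick classification fitting
FIRST_RANGE = (0, 50)     # objective-function compensation (fine)
SECOND_RANGE = (50, 100)  # classification-algorithm compensation (rapid)

def choose_compensation(score):
    """Select the compensation path from the evaluation score."""
    if FIRST_RANGE[0] <= score < FIRST_RANGE[1]:
        return "objective_function"
    if SECOND_RANGE[0] <= score < SECOND_RANGE[1]:
        return "classification"
    raise ValueError("score outside both preset ranges")

path_fine = choose_compensation(30)   # falls in the first range
path_fast = choose_compensation(70)   # falls in the second range
```

Because the ranges share no intersection, exactly one branch fires for any in-range score.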
Optionally, the method further comprises the following steps:
b1, determining a measurement hearing threshold through the first audiogram;
b2, determining noise volume and signal-to-noise ratio according to the target environment signal;
b3, constructing the objective function relation based on the measurement hearing threshold, the noise volume and the signal to noise ratio.
In a specific implementation, the terminal device may determine the measurement hearing threshold from the first audiogram (this threshold may change with the environmental volume or the signal-to-noise ratio) and may determine the noise volume and signal-to-noise ratio from the target environment signal. The measurement hearing threshold is then corrected by a formula in which T_q is the corrected hearing threshold, T_n is the hearing threshold measured under noise (i.e., the measurement hearing threshold), N is the noise volume, and SNR is the signal-to-noise ratio (the ratio of the test volume to the noise volume).
In addition, the frequency-domain resolution is corrected by a second formula (the objective function relationship) in which ERB, short for equivalent rectangular bandwidth, is a general expression of frequency-domain resolution; f, N, S, and T are, respectively, the center frequency band currently under test, the noise energy, the signal energy, and the user's (corrected) pure-tone threshold in that center frequency band; and A, B, and C are parameters obtained from big-data statistics.
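The correction formulas themselves are given as figures in the original publication and only their variable definitions survive in this text. As a reference point for the ERB quantity they parameterize, the widely used normal-hearing approximation of Glasberg and Moore (1990) is shown below; the patent's fitted expression with big-data parameters A, B, C presumably refits a relation of this kind per user, but its exact form is not reproduced here:

```python
def erb_normal(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the normal-hearing
    auditory filter at centre frequency f_hz, per the Glasberg and
    Moore (1990) approximation: ERB = 24.7 * (4.37 * f/1000 + 1).
    A stand-in, NOT the patent's fitted A/B/C formula."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

erb_1k = erb_normal(1000.0)  # about 133 Hz at 1 kHz
erb_4k = erb_normal(4000.0)  # bandwidth grows with centre frequency
```

A larger measured ERB than this normal-hearing value at a given frequency would indicate reduced frequency-domain resolution, which is what the correction targets.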
Optionally, the step S104, which compensates through the objective function relationship, may include the following steps:
41. correcting the first audiogram through the objective function relation to obtain a reference audiogram;
42. smoothing the reference audiogram to obtain a second audiogram;
43. determining a base threshold parameter based on the second audiogram;
44. when the absolute value of the pure tone threshold difference value of the two ears of the user is larger than a preset threshold value, adjusting the basic threshold parameter to obtain a first threshold parameter;
45. generating a target gain factor based on the first threshold parameter;
46. determining a target weight factor according to the equal-loudness curve;
47. and determining a target compression ratio according to the target gain coefficient and the target weight factor.
The preset threshold may be preset or default.
In a specific implementation, the terminal device may correct the first audiogram through the objective function relationship to obtain the reference audiogram. Specifically: select multiple hearing thresholds from the first audiogram; substitute them into the objective function relationship to calculate ERB values, obtaining multiple ERBs; determine the mean square error of these ERBs to obtain a target mean square error; determine the target optimization factor corresponding to that mean square error according to a preset mapping between mean square error and optimization factor; optimize each of the selected hearing thresholds based on the target optimization factor to obtain multiple reference hearing thresholds; convert these reference hearing thresholds into updated points; and correct the first audiogram based on the updated points to obtain the reference audiogram.
In a specific implementation, the points corresponding to the selected hearing thresholds are filtered out of the first audiogram, leaving multiple points on the filtered first audiogram; these remaining points and the multiple updated points are then fitted together to obtain the reference audiogram, which thus contains the multiple updated points.
Then, the reference audiogram is smoothed to obtain the second audiogram. Different audiograms may correspond to different base threshold parameters; that is, a mapping between audiograms and base threshold parameters can be preset, and the corresponding base threshold parameter is determined from the second audiogram and this mapping. When the absolute value of the pure-tone threshold difference between the user's two ears is greater than the preset threshold, the base threshold parameter is adjusted to obtain the first threshold parameter.
Furthermore, a mapping between threshold parameters and gain coefficients may be stored in the terminal device in advance, so the target gain coefficient can be generated from the first threshold parameter. Loudness parameters corresponding to multiple points of the specified frequency band are extracted from the equal-loudness curve, their average is computed to obtain a target average value, and the target weight factor corresponding to that average value is determined according to a preset mapping between average value and weight factor. The target compression ratio is then obtained as follows:
R = K * G
where G represents the target gain coefficient, K represents the target weight factor, and R represents the target compression ratio.
In a specific implementation, the compression ratio is the ratio of the signal's dynamic range before compression to its dynamic range after compression, and is generally at least 1: the larger its value, the smaller the gain applied to high-volume (loud) input, and a compression ratio of 1:1 represents linear amplification. Compression is thus a selective amplification method; it allows the operating parameters of the terminal device to be further adjusted so that the device better suits the user's ears, improving user experience.
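A sketch of step 47, assuming the compression ratio is the product of the weight factor and the gain coefficient (the variables the text defines), together with a classic downward-compression gain curve matching the description that a 1:1 ratio is linear and loud inputs receive less gain. The threshold and levels are illustrative:

```python
def target_compression_ratio(gain, weight):
    """Step 47, read as R = K * G: weight factor times gain coefficient."""
    return weight * gain

def compressed_gain_db(input_db, knee_db, ratio):
    """Downward compression: above the knee, output rises 1/ratio dB per
    input dB, so the gain change (relative to linear) is negative for
    loud inputs. ratio == 1 leaves the signal linearly amplified."""
    if input_db <= knee_db:
        return 0.0
    excess = input_db - knee_db
    return excess / ratio - excess  # gain reduction in dB (<= 0)

r = target_compression_ratio(gain=2.0, weight=1.5)  # illustrative values
reduction = compressed_gain_db(80.0, 60.0, r)       # loud input, less gain
no_change = compressed_gain_db(80.0, 60.0, 1.0)     # 1:1 is linear
```

The larger r becomes, the more negative the gain change for the same loud input, matching the prose above.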
Optionally, in the step 44, adjusting the base threshold parameter to obtain the first threshold parameter may include the following steps:
441. acquiring a target absolute value of a pure tone threshold difference value of two ears of the user;
442. determining a reference ratio between pure tone thresholds of both ears of the user;
443. determining a target adjusting parameter corresponding to the target absolute value according to a mapping relation between a preset absolute value and the adjusting parameter;
444. determining a target fine tuning coefficient corresponding to the reference ratio according to a mapping relation between a preset ratio and the fine tuning coefficient;
445. and adjusting the basic threshold parameter according to the target adjusting parameter and the target fine tuning coefficient to obtain the first threshold parameter.
Specifically, the mapping relationship between the preset absolute value and the adjustment parameter, and the mapping relationship between the preset ratio and the fine adjustment coefficient may be stored in the terminal device in advance.
In a specific implementation, the first threshold parameter may be understood as a measurement threshold. The terminal device may obtain the target absolute value of the pure-tone threshold difference between the user's two ears, determine the reference ratio between the two ears' pure-tone thresholds, determine the target adjusting parameter corresponding to the target absolute value according to the preset mapping between absolute value and adjusting parameter, determine the target fine-tuning coefficient corresponding to the reference ratio according to the preset mapping between ratio and fine-tuning coefficient, and finally adjust the base threshold parameter according to the target adjusting parameter and the target fine-tuning coefficient to obtain the first threshold parameter, with the specific calculation formula as follows:
first threshold parameter = (base threshold parameter + target adjusting parameter) × (1 + target fine-tuning coefficient)
In this way, the pure-tone threshold can be further adjusted according to the difference between the user's two ears, realizing personalized compensation, i.e., compensation targeted at that binaural difference.
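Steps 441 to 445 can be sketched as below, reading the formula as first = (base + adjustment) × (1 + fine-tuning coefficient). The lookup tables are illustrative placeholders, not patent values, and thresholds are assumed positive so the binaural ratio is defined:

```python
# Hypothetical mapping tables: key -> value for the largest key not
# exceeding the query (a step-function lookup).
ADJUSTMENT_BY_ABS_DIFF = [(0, 0.0), (10, 2.0), (20, 5.0)]    # |L-R| dB
FINE_TUNE_BY_RATIO = [(1.0, 0.00), (1.5, 0.05), (2.0, 0.10)]  # max/min

def lookup(table, key):
    """Return the value for the largest table key not exceeding key."""
    value = table[0][1]
    for k, v in table:
        if key >= k:
            value = v
    return value

def first_threshold_param(base, left_db, right_db):
    """Steps 441-445: adjust the base threshold parameter by the
    binaural threshold difference and ratio."""
    diff = abs(left_db - right_db)
    ratio = max(left_db, right_db) / min(left_db, right_db)
    adj = lookup(ADJUSTMENT_BY_ABS_DIFF, diff)
    fine = lookup(FINE_TUNE_BY_RATIO, ratio)
    return (base + adj) * (1 + fine)

# Illustrative asymmetric thresholds: 40 dB HL left, 25 dB HL right
p = first_threshold_param(base=30.0, left_db=40.0, right_db=25.0)
```

With these placeholder tables, a 15 dB binaural difference picks adjustment 2.0 and the 1.6 ratio picks fine-tuning 0.05.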
Optionally, after determining the target compression ratio according to the target gain coefficient in step 47, the method may further include the steps of:
c1, determining a target working parameter corresponding to the target compression ratio according to a mapping relation between a preset compression ratio and the working parameter;
and C2, working according to the target working parameters.
In this embodiment of the present application, a mapping relationship between a preset compression ratio and an operating parameter may be stored in advance in a terminal device, where the operating parameter may be at least one of the following: operating voltage, operating current, operating power, sensitivity, etc., are not limited herein.
Specifically, the terminal device can determine the target working parameter corresponding to the target compression ratio according to the mapping relation between the preset compression ratio and the working parameter, and further work according to the target working parameter, so that the performance of the terminal device is improved, and further user experience is improved.
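Steps C1 and C2 can be sketched as a stored band lookup from compression ratio to operating parameters. All parameter values here are hypothetical placeholders:

```python
# Hypothetical preset mapping: minimum compression ratio -> operating
# parameters (voltage, power, sensitivity). Not patent values.
OPERATING_PARAMS = [
    (1.0, {"voltage_v": 1.2, "power_mw": 10, "sensitivity_db": 100}),
    (2.0, {"voltage_v": 1.5, "power_mw": 20, "sensitivity_db": 105}),
    (4.0, {"voltage_v": 1.8, "power_mw": 35, "sensitivity_db": 110}),
]

def params_for_ratio(ratio):
    """Step C1: pick the parameter set for the highest band whose
    minimum ratio the target compression ratio reaches."""
    chosen = OPERATING_PARAMS[0][1]
    for min_ratio, params in OPERATING_PARAMS:
        if ratio >= min_ratio:
            chosen = params
    return chosen

target_params = params_for_ratio(3.0)  # step C2 would apply these
```

The device would then operate with the selected parameter set, as step C2 states.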
Optionally, the step A3 of determining a target classification algorithm corresponding to the target hearing assessment result may include the following steps:
a31, determining a target score corresponding to the target hearing evaluation result;
a32, determining a target grade corresponding to the target score according to a mapping relation between a preset score and the grade;
a33, determining the target classification algorithm corresponding to the target grade according to a mapping relation between a preset grade and the classification algorithm.
In a specific implementation, a mapping relationship between a preset score and a class and a mapping relationship between a preset class and a classification algorithm may be stored in the terminal device in advance. Specifically, the terminal device may determine a target score corresponding to the target hearing evaluation result, determine a target level corresponding to the target score according to a mapping relationship between a preset score and a level, and determine a target classification algorithm corresponding to the target level according to a mapping relationship between the preset level and a classification algorithm, that is, the evaluation result reflects the hearing condition, and then specifically select the classification algorithm according to the hearing condition, so as to implement accurate compensation.
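The chained lookups of steps A31 to A33 (evaluation result to score, score to grade, grade to classification algorithm) can be sketched as follows. The score thresholds, grade names, and algorithm labels are illustrative, not from the patent:

```python
# Hypothetical preset mappings for the A31-A33 chain
GRADE_BY_MIN_SCORE = [(0, "mild"), (40, "moderate"), (70, "severe")]
ALGORITHM_BY_GRADE = {
    "mild": "half_gain_rule",
    "moderate": "NAL_style_fit",
    "severe": "compression_heavy_fit",
}

def target_classification_algorithm(score):
    """Map a hearing-evaluation score to a grade, then the grade to
    its preset classification algorithm."""
    grade = GRADE_BY_MIN_SCORE[0][1]
    for min_score, g in GRADE_BY_MIN_SCORE:
        if score >= min_score:
            grade = g
    return ALGORITHM_BY_GRADE[grade]

algo = target_classification_algorithm(50)  # falls in the middle grade
```

The selected algorithm then drives the compensation operation of step A3.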
It can be seen that, according to the audio processing method based on big data described in the embodiments of the present application, pure-tone hearing test is performed on a specified frequency band according to a first specified standard, a first audiogram is obtained, measurement is performed on the specified frequency band according to a second specified standard, an equal-loudness curve is obtained, a signal of the specified frequency band in the environment is collected by using a preset microphone, a target environment signal is obtained, compensation is performed through a target function relationship and the equal-loudness curve, and the target function relationship is a function relationship determined in advance according to the first audiogram and the target environment signal, so that targeted compensation can be achieved, a hearing compensation effect is improved, and performance of terminal equipment is improved.
Referring to fig. 2, fig. 2 is a flow chart of an audio processing method based on big data provided in an embodiment of the present application, as shown in the drawing, applied to a terminal device, the audio processing method based on big data includes:
s201, performing pure-tone hearing test on the specified frequency band based on a first specified standard to obtain a first audiogram.
S202, measuring the specified frequency band based on a second specified standard to obtain an equal-loudness curve.
S203, acquiring signals of the specified frequency band in the environment through a preset microphone to obtain a target environment signal.
S204, determining a target hearing evaluation result according to the first audiogram and the target environment signal.
In specific implementation, the terminal device can determine a reference hearing evaluation result through a first audiogram, obtain a target signal-to-noise ratio corresponding to a target environment signal, determine a target influence factor corresponding to the target signal-to-noise ratio according to a mapping relation between a preset signal-to-noise ratio and an influence factor, and determine a target hearing evaluation result according to the target influence factor and the reference hearing evaluation result.
And S205, when the target hearing evaluation result is in a first preset range, compensating through an objective function relation and the equal-loudness curve, wherein the objective function relation is a function relation determined in advance according to the first audiogram and the target environment signal.
S206, when the target hearing evaluation result is in a second preset range, determining a target classification algorithm corresponding to the target hearing evaluation result, and performing compensation operation based on the target classification algorithm, wherein no intersection exists between the first preset range and the second preset range.
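The branching in steps S205 and S206 amounts to dispatching on which disjoint preset range the evaluation result falls into. The concrete boundaries below are assumptions chosen only to illustrate the non-intersecting ranges.

```python
# Illustrative dispatch between the two compensation paths of S205/S206.
# The range boundaries are made-up values; the application only requires
# that the two preset ranges have no intersection.

FIRST_RANGE = (60.0, 100.0)   # objective-function-based compensation
SECOND_RANGE = (0.0, 60.0)    # classification-based compensation (disjoint)

def select_compensation(score):
    if FIRST_RANGE[0] <= score <= FIRST_RANGE[1]:
        return "objective_function"
    if SECOND_RANGE[0] <= score < SECOND_RANGE[1]:
        return "classification_algorithm"
    raise ValueError("score outside both preset ranges")
```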
The specific descriptions of steps S201 to S203 and steps S205 to S206 may refer to the corresponding steps of the audio processing method based on big data described in fig. 1, and are not repeated herein.
It can be seen that, in the audio processing method based on big data described in the embodiments of the present application, a pure-tone hearing test is performed on the specified frequency band based on a first specified standard to obtain a first audiogram; the specified frequency band is measured based on a second specified standard to obtain an equal-loudness curve; a signal of the specified frequency band in the environment is acquired through a preset microphone to obtain a target environment signal; a target hearing evaluation result is determined according to the first audiogram and the target environment signal; when the target hearing evaluation result is in a first preset range, compensation is performed through an objective function relationship and the equal-loudness curve, the objective function relationship being determined in advance according to the first audiogram and the target environment signal; and when the target hearing evaluation result is in a second preset range, a target classification algorithm corresponding to the target hearing evaluation result is determined and a compensation operation is performed based on the target classification algorithm, the first preset range and the second preset range having no intersection. In this way, targeted compensation can be achieved, the hearing compensation effect is improved, and the performance of the terminal device is improved.
In accordance with the above embodiments, referring to fig. 3, fig. 3 is a schematic structural diagram of a terminal device provided in an embodiment of the present application. As shown in fig. 3, the terminal device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor. In the embodiment of the present application, the programs include instructions for executing the following steps:
performing pure-tone hearing test on the specified frequency band according to a first specified standard to obtain a first audiogram;
measuring the specified frequency band according to a second specified standard to obtain an equal-loudness curve;
acquiring signals of the specified frequency band in the environment by using a preset microphone to obtain a target environment signal;
and compensating through an objective function relation and the equal-loudness curve, wherein the objective function relation is a function relation determined in advance according to the first audiogram and the target environment signal.
Optionally, the above program further comprises instructions for performing the steps of:
determining a measurement hearing threshold from the first audiogram;
determining noise volume and signal-to-noise ratio according to the target environment signal;
the objective function relationship is constructed based on the measured hearing threshold, the noise volume, and the signal-to-noise ratio.
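One plausible realization of the objective function relationship built from these three quantities is a per-band gain function. The half-gain rule and the noise/SNR correction terms below are illustrative assumptions, not formulas disclosed by the application.

```python
# Hypothetical construction of the "objective function relationship": a gain
# function over frequency derived from the measured hearing threshold, the
# noise volume, and the signal-to-noise ratio. Constants are assumed.

def build_objective_function(thresholds_db, noise_db, snr_db):
    """thresholds_db: {frequency_hz: hearing threshold in dB HL}.
    Returns gain(frequency_hz) -> compensation gain in dB."""
    noise_boost = max(0.0, (noise_db - 50.0) * 0.1)  # louder rooms need more gain
    snr_penalty = max(0.0, (10.0 - snr_db) * 0.2)    # poor SNR limits usable gain

    def gain(freq_hz):
        threshold = thresholds_db[freq_hz]
        return 0.5 * threshold + noise_boost - snr_penalty  # half-gain rule

    return gain
```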
Optionally, in the aspect of compensating by the objective function relationship, the program includes instructions for performing the steps of:
correcting the first audiogram through the objective function relation to obtain a reference audiogram;
smoothing the reference audiogram to obtain a second audiogram;
determining a base threshold parameter based on the second audiogram;
when the absolute value of the pure tone threshold difference value of the two ears of the user is larger than a preset threshold value, adjusting the basic threshold parameter to obtain a first threshold parameter;
generating a target gain factor based on the first threshold parameter;
determining a target weight factor according to the equal-loudness curve;
and determining a target compression ratio according to the target gain coefficient and the target weight factor.
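The chain of steps above can be condensed into a short sketch. The smoothing window, the binaural-asymmetry adjustment, and the compression formula are all assumptions made for illustration; the application does not disclose concrete formulas.

```python
# Condensed sketch of the compensation pipeline: smooth the audiogram,
# derive a base threshold parameter, adjust for binaural asymmetry, then
# combine the gain coefficient with the equal-loudness weight factor.

def smooth(audiogram):
    """3-point moving average over thresholds ordered by frequency."""
    freqs = sorted(audiogram)
    vals = [audiogram[f] for f in freqs]
    out = {}
    for i, f in enumerate(freqs):
        window = vals[max(0, i - 1):i + 2]
        out[f] = sum(window) / len(window)
    return out

def compression_ratio(left, right, equal_loudness_weight, preset=15.0):
    second = smooth(left)                      # second (smoothed) audiogram
    base = sum(second.values()) / len(second)  # base threshold parameter
    diff = abs(sum(left.values()) / len(left) -
               sum(right.values()) / len(right))
    if diff > preset:                          # binaural asymmetry adjustment
        base += 0.25 * diff                    # first threshold parameter
    gain = 0.5 * base                          # target gain coefficient
    return 1.0 + gain * equal_loudness_weight / 30.0
```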
Optionally, after the determining the target compression ratio according to the target gain factor, the program further includes instructions for:
determining a target working parameter corresponding to the target compression ratio according to a mapping relation between a preset compression ratio and the working parameter;
and working according to the target working parameters.
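The mapping between compression ratios and working parameters can be a simple preset lookup table. The parameter names and numeric values below (attack and release times) are invented for illustration.

```python
# Hypothetical preset table mapping compression-ratio bands to working
# parameters. The bands and attack/release values are assumed examples.

PRESET_PARAMS = [
    (1.0, {"attack_ms": 5, "release_ms": 50}),
    (2.0, {"attack_ms": 10, "release_ms": 100}),
    (3.0, {"attack_ms": 20, "release_ms": 200}),
]

def working_parameters(ratio):
    chosen = PRESET_PARAMS[0][1]           # fall back to the mildest preset
    for lower_bound, params in PRESET_PARAMS:
        if ratio >= lower_bound:
            chosen = params                # keep the highest matching band
    return chosen
```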
Optionally, the above program further comprises instructions for performing the steps of:
obtaining a target hearing evaluation result;
executing the step of compensating through the objective function relationship when the objective hearing evaluation result is in a first preset range;
and when the target hearing evaluation result is in a second preset range, determining a target classification algorithm corresponding to the target hearing evaluation result, and performing compensation operation based on the target classification algorithm, wherein no intersection exists between the first preset range and the second preset range.
Optionally, in the determining a target classification algorithm corresponding to the target hearing assessment result, the program comprises instructions for:
determining a target score corresponding to the target hearing assessment result;
determining a target grade corresponding to the target score according to a mapping relation between a preset score and the grade;
and determining the target classification algorithm corresponding to the target grade according to the mapping relation between the preset grade and the classification algorithm.
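The two chained lookups above (score to grade, grade to classification algorithm) can be sketched directly. The band edges, grade labels, and algorithm names are assumptions for demonstration only.

```python
# Two chained preset mappings mirroring the score -> grade -> algorithm
# lookup described above. All names and thresholds are assumed.

GRADE_BY_SCORE = [(80, "mild"), (50, "moderate"), (0, "severe")]
ALGORITHM_BY_GRADE = {
    "mild": "wide_dynamic_range_compression",
    "moderate": "multi_band_compression",
    "severe": "frequency_lowering",
}

def classification_algorithm(score):
    for lower_bound, grade in GRADE_BY_SCORE:
        if score >= lower_bound:
            return ALGORITHM_BY_GRADE[grade]
    raise ValueError("score below all preset grades")
```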
It can be seen that, in the terminal device described in the embodiments of the present application, a pure-tone hearing test is performed on a specified frequency band according to a first specified standard to obtain a first audiogram; the specified frequency band is measured according to a second specified standard to obtain an equal-loudness curve; a signal of the specified frequency band in the environment is collected by a preset microphone to obtain a target environment signal; and compensation is performed through an objective function relationship and the equal-loudness curve, where the objective function relationship is determined in advance according to the first audiogram and the target environment signal. In this way, targeted compensation can be achieved, the hearing compensation effect is improved, and the performance of the terminal device is improved.
Fig. 4 is a functional block diagram of a big data based audio processing system 400 referred to in an embodiment of the present application. The big data based audio processing system 400 is applied to a terminal device, and the big data based audio processing system 400 comprises: a test unit 401, a measurement unit 402, an acquisition unit 403 and a compensation unit 404, wherein,
the test unit 401 is configured to obtain a first audiogram based on a pure-tone hearing test performed on a specified frequency band according to a first specified standard;
the measurement unit 402 is configured to measure the specified frequency band according to a second specified standard to obtain an equal-loudness curve;
the acquisition unit 403 is configured to acquire a signal of the specified frequency band in the environment by using a preset microphone, so as to obtain a target environmental signal;
the compensation unit 404 is configured to compensate by using an objective function relationship and the equal-loudness curve, where the objective function relationship is a function relationship determined in advance according to the first audiogram and the target environmental signal.
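The four-unit structure of fig. 4 can be illustrated as a thin wiring layer in which each unit is a callable stub. The class name and the callable interface are assumptions; only the data flow between units 401–404 follows the description above.

```python
# Minimal object sketch of the four-unit system of Fig. 4. Each unit is
# reduced to a callable so that only the wiring is shown (assumed API).

class AudioProcessingSystem:
    def __init__(self, test_unit, measurement_unit,
                 acquisition_unit, compensation_unit):
        self.test_unit = test_unit                  # unit 401
        self.measurement_unit = measurement_unit    # unit 402
        self.acquisition_unit = acquisition_unit    # unit 403
        self.compensation_unit = compensation_unit  # unit 404

    def run(self, band):
        audiogram = self.test_unit(band)            # first audiogram
        loudness = self.measurement_unit(band)      # equal-loudness curve
        environment = self.acquisition_unit(band)   # target environment signal
        return self.compensation_unit(audiogram, loudness, environment)
```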
Optionally, the system 400 is further specifically configured to:
determining a measurement hearing threshold from the first audiogram;
determining noise volume and signal-to-noise ratio according to the target environment signal;
the objective function relationship is constructed based on the measured hearing threshold, the noise volume, and the signal-to-noise ratio.
Optionally, in the aspect of the compensation by the objective function relationship, the compensation unit 404 is specifically configured to:
correcting the first audiogram through the objective function relation to obtain a reference audiogram;
smoothing the reference audiogram to obtain a second audiogram;
determining a base threshold parameter based on the second audiogram;
when the absolute value of the pure tone threshold difference value of the two ears of the user is larger than a preset threshold value, adjusting the basic threshold parameter to obtain a first threshold parameter;
generating a target gain factor based on the first threshold parameter;
determining a target weight factor according to the equal-loudness curve;
and determining a target compression ratio according to the target gain coefficient and the target weight factor.
Optionally, after the determining the target compression ratio according to the target gain coefficient, the system is further specifically configured to:
determining a target working parameter corresponding to the target compression ratio according to a mapping relation between a preset compression ratio and the working parameter;
and working according to the target working parameters.
Optionally, the system 400 is further specifically configured to:
obtaining a target hearing evaluation result;
executing the step of compensating through the objective function relationship when the objective hearing evaluation result is in a first preset range;
and when the target hearing evaluation result is in a second preset range, determining a target classification algorithm corresponding to the target hearing evaluation result, and performing compensation operation based on the target classification algorithm, wherein no intersection exists between the first preset range and the second preset range.
Optionally, in the determining a target classification algorithm corresponding to the target hearing assessment result, the system 400 is specifically configured to:
determining a target score corresponding to the target hearing assessment result;
determining a target grade corresponding to the target score according to a mapping relation between a preset score and the grade;
and determining the target classification algorithm corresponding to the target grade according to the mapping relation between the preset grade and the classification algorithm.
It can be seen that the audio processing system based on big data described in the embodiments of the present application is applied to a terminal device; a pure-tone hearing test is performed on a specified frequency band according to a first specified standard to obtain a first audiogram; the specified frequency band is measured according to a second specified standard to obtain an equal-loudness curve; signals of the specified frequency band in the environment are collected by a preset microphone to obtain a target environment signal; and compensation is performed through an objective function relationship and the equal-loudness curve, where the objective function relationship is determined in advance according to the first audiogram and the target environment signal. In this way, targeted compensation can be achieved, the hearing compensation effect can be improved, and the performance of the terminal device can be improved.
It may be appreciated that the functions of each program module of the audio processing system based on big data in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not repeated herein.
The embodiment of the application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program makes a computer execute part or all of the steps of any one of the above method embodiments, and the computer includes a terminal device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising a terminal device.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as a series of combined actions, but those skilled in the art should understand that the present application is not limited by the described order of actions, as some steps may be performed in another order or simultaneously according to the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of units described above is merely a division of logical functions, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices, or units, and may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program code.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable memory, which may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is intended only to assist in understanding the method of the present application and its core ideas. Meanwhile, those skilled in the art may make modifications to the specific implementations and the application scope in accordance with the ideas of the present application. In view of the above, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An audio processing method based on big data, which is applied to a terminal device, comprises the following steps:
performing pure-tone hearing test on the specified frequency band according to a first specified standard to obtain a first audiogram;
measuring the specified frequency band according to a second specified standard to obtain an equal-loudness curve;
acquiring signals of the specified frequency band in the environment by using a preset microphone to obtain a target environment signal;
and compensating through an objective function relation and the equal-loudness curve, wherein the objective function relation is a function relation determined in advance according to the first audiogram and the target environment signal.
2. The method according to claim 1, wherein the method further comprises:
determining a measurement hearing threshold from the first audiogram;
determining noise volume and signal-to-noise ratio according to the target environment signal;
the objective function relationship is constructed based on the measured hearing threshold, the noise volume, and the signal-to-noise ratio.
3. The method according to claim 1 or 2, wherein the compensating by an objective function relationship comprises:
correcting the first audiogram through the objective function relation to obtain a reference audiogram;
smoothing the reference audiogram to obtain a second audiogram;
determining a base threshold parameter based on the second audiogram;
when the absolute value of the pure tone threshold difference value of the two ears of the user is larger than a preset threshold value, adjusting the basic threshold parameter to obtain a first threshold parameter;
generating a target gain factor based on the first threshold parameter;
determining a target weight factor according to the equal-loudness curve;
and determining a target compression ratio according to the target gain coefficient and the target weight factor.
4. A method according to claim 3, wherein after said determining a target compression ratio from said target gain factor, the method further comprises:
determining a target working parameter corresponding to the target compression ratio according to a mapping relation between a preset compression ratio and the working parameter;
and working according to the target working parameters.
5. The method according to claim 1, wherein the method further comprises:
obtaining a target hearing evaluation result;
executing the step of compensating through the objective function relationship when the objective hearing evaluation result is in a first preset range;
and when the target hearing evaluation result is in a second preset range, determining a target classification algorithm corresponding to the target hearing evaluation result, and performing compensation operation based on the target classification algorithm, wherein no intersection exists between the first preset range and the second preset range.
6. The method of claim 5, wherein the determining a target classification algorithm corresponding to the target hearing assessment result comprises:
determining a target score corresponding to the target hearing assessment result;
determining a target grade corresponding to the target score according to a mapping relation between a preset score and the grade;
and determining the target classification algorithm corresponding to the target grade according to the mapping relation between the preset grade and the classification algorithm.
7. An audio processing system based on big data, the system comprising: the device comprises a test unit, a measurement unit, an acquisition unit and a compensation unit, wherein,
the test unit is used for obtaining a first audiogram based on pure-tone hearing test of the specified frequency band according to a first specified standard;
the measuring unit is used for measuring the specified frequency band according to a second specified standard to obtain an equal-loudness curve;
the acquisition unit is used for acquiring the signals of the specified frequency band in the environment by using a preset microphone to obtain target environment signals;
the compensation unit is used for compensating through an objective function relation and the equal-loudness curve, wherein the objective function relation is a function relation determined in advance according to the first audiogram and the target environment signal.
8. The system according to claim 7, characterized in that it is also specifically adapted to:
determining a measurement hearing threshold from the first audiogram;
determining noise volume and signal-to-noise ratio according to the target environment signal;
the objective function relationship is constructed based on the measured hearing threshold, the noise volume, and the signal-to-noise ratio.
9. The system according to claim 7 or 8, characterized in that in said compensating by means of an objective function relationship, said compensating unit is specifically adapted to:
correcting the first audiogram through the objective function relation to obtain a reference audiogram;
smoothing the reference audiogram to obtain a second audiogram;
determining a base threshold parameter based on the second audiogram;
when the absolute value of the pure tone threshold difference value of the two ears of the user is larger than a preset threshold value, adjusting the basic threshold parameter to obtain a first threshold parameter;
generating a target gain factor based on the first threshold parameter;
determining a target weight factor according to the equal-loudness curve;
and determining a target compression ratio according to the target gain coefficient and the target weight factor.
10. The system according to claim 9, wherein after said determining a target compression ratio from said target gain factor, said system is further specifically configured to:
determining a target working parameter corresponding to the target compression ratio according to a mapping relation between a preset compression ratio and the working parameter;
and working according to the target working parameters.
CN202310530335.3A 2023-05-10 2023-05-10 Audio processing method and system based on big data Pending CN116567511A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310530335.3A CN116567511A (en) 2023-05-10 2023-05-10 Audio processing method and system based on big data

Publications (1)

Publication Number Publication Date
CN116567511A true CN116567511A (en) 2023-08-08

Family

ID=87497725



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination