CN112866889B - Self-adaptive multi-channel loudness compensation method for hearing aid and hearing aid chip

Self-adaptive multi-channel loudness compensation method for hearing aid and hearing aid chip

Info

Publication number
CN112866889B
CN112866889B (application CN202110019096.6A)
Authority
CN
China
Prior art keywords
channel
signal
sound pressure
channel signal
gain
Prior art date
Legal status
Active
Application number
CN202110019096.6A
Other languages
Chinese (zh)
Other versions
CN112866889A (en)
Inventor
熊志辉
陈旺
Current Assignee
Hunan Xinhailing Semiconductor Co ltd
Original Assignee
Hunan Xinhailing Semiconductor Co ltd
Priority date
Filing date
Publication date
Application filed by Hunan Xinhailing Semiconductor Co ltd
Priority to CN202110019096.6A
Publication of CN112866889A
Application granted
Publication of CN112866889B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 - Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50 - Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505 - Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00 - Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43 - Signal processing in hearing aids to enhance the speech intelligibility

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Tone Control, Compression And Expansion, Limiting Amplitude (AREA)

Abstract

The invention discloses a self-adaptive multichannel loudness compensation method for a hearing aid and a hearing aid chip. The method comprises the following steps: acquiring an input signal, performing AD conversion to obtain a digital signal, and framing it to obtain a voice signal; filtering and transforming the voice signal to obtain a plurality of channel signals, and dividing all the channel signals into a first channel combination and a second channel combination according to their audible range; applying piecewise linear gain to each channel signal in the first channel combination; applying nonlinear gain to each channel signal in the second channel combination; and synthesizing the gain-compensated channel signals of the first and second channel combinations to obtain and output the gain-compensated voice signal. The signal is first decomposed into multiple channels, and the channels are then divided into two groups: the combination of channels with the smaller audible range receives piecewise linear gain, while the other channel combination receives nonlinear gain, which effectively improves the voice quality after gain compensation.

Description

Self-adaptive multi-channel loudness compensation method for hearing aid and hearing aid chip
Technical Field
The invention relates to the technical field of voice signal processing, in particular to a hearing aid-oriented self-adaptive multi-channel loudness compensation method and a hearing aid chip.
Background
Hearing is one of the most important human senses and a vital link in communicating with the surroundings; its importance is no less than that of vision. Owing to noise pollution and the aging of the world's population, the number of people with hearing loss has grown in recent years. For centuries scientists have used various means to help hearing-impaired patients improve their hearing, and, in the absence of a major medical breakthrough, wearing a hearing aid remains one of the most common ways to compensate for hearing loss.
With the development of DSP technology, hearing aids evolved from analog to digital in the 1990s, and a typical change has been the expansion from single-channel to multi-channel processing: on the one hand, most hearing losses are strongly frequency dependent; on the other hand, the input signal has different characteristics in different frequency bands, so the hearing aid must process signals of different frequencies in a targeted way to improve the hearing compensation effect.
Loudness compensation is one of the most critical algorithms in a digital hearing aid. Its main function is to apply gain compensation to the loudness of sound so as to improve the patient's comprehension of speech. Loudness compensation depends on many factors, such as sound intensity, frequency, and loudness, which complicates the compensation method, since different hearing-impaired patients may require different compensation schemes. Implementing the compensation in software by means of digital signal processing makes a personalized configuration for each patient possible; digital hearing aids therefore offer great convenience and flexibility for loudness compensation.
Early hearing aids employed linear amplification, applying the same gain to signals at all sound pressure levels. Linear amplification provides adequate gain for input signals at medium sound pressure levels, but it gives inadequate gain, and hence poor compensation, for input signals at low or high sound pressure levels.
Today, almost all hearing aids employ wide dynamic range compression (WDRC) to amplify the sound signal. WDRC gives a large gain to low sound pressure level inputs so that they become audible and a small gain to high sound pressure level inputs so that they do not become uncomfortably loud. The advantage of WDRC is that signals at different sound pressure levels receive different gains, so low-level signals are audible and high-level signals are not annoying. However, WDRC also introduces signal distortion, and when the input sound pressure level lies in certain ranges the voice quality of the WDRC-compensated signal can even be lower than that obtained with linear compensation.
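As background, the sketch below shows a generic static WDRC gain curve of the kind described here: quiet inputs receive the full gain and louder inputs receive progressively less. The compression threshold, compression ratio, and maximum gain are illustrative assumptions and are not parameters taken from this disclosure.

```python
import numpy as np

def wdrc_gain_db(input_spl_db, threshold_db=45.0, ratio=3.0, max_gain_db=30.0):
    """Static WDRC gain curve: full gain below the compression threshold,
    reduced gain above it so loud inputs are amplified less."""
    input_spl_db = np.asarray(input_spl_db, dtype=float)
    gain = np.where(
        input_spl_db <= threshold_db,
        max_gain_db,
        max_gain_db - (input_spl_db - threshold_db) * (1.0 - 1.0 / ratio),
    )
    return np.maximum(gain, 0.0)  # never apply negative gain in this sketch

# Quiet sounds receive the full 30 dB of gain, loud sounds much less.
for spl in (30.0, 50.0, 70.0, 90.0):
    print(f"{spl:4.0f} dB SPL in -> +{float(wdrc_gain_db(spl)):5.1f} dB gain")
```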
Disclosure of Invention
Aiming at the defects of existing loudness compensation methods, the invention provides a self-adaptive multi-channel loudness compensation method for a hearing aid and a hearing aid chip, which effectively improve the voice quality after gain compensation.
In order to achieve the above object, the present invention provides an adaptive multichannel loudness compensation method for a hearing aid, comprising the following steps:
step 1, acquiring an input signal of a microphone of a digital hearing aid, performing AD conversion on the input signal to obtain a digital signal, and framing the converted digital signal to obtain a voice signal;
step 2, filtering and transforming the voice signals to obtain a plurality of channel signals, and dividing all the channel signals into a first channel combination and a second channel combination according to an audible range, wherein the audible range of the channel signals in the second channel combination is far larger than that of the channel signals in the first channel combination;
step 3, performing piecewise linear gain on each channel signal in the first channel combination to obtain a gain-compensated first channel combination;
step 4, carrying out nonlinear gain on each channel signal in the second channel combination to obtain a gain-compensated second channel combination;
and 5, synthesizing each channel signal in the first channel combination after gain compensation and each channel signal in the second channel combination after gain compensation to obtain a voice signal after gain compensation and outputting the voice signal.
Further preferably, in step 3, performing piecewise linear gain on each channel signal in the first channel combination specifically includes:
Step 3.1, obtain the sound pressure level of the channel signal t_{i,a}(k) in the first channel combination:
[Equation image: sound pressure level SBL(i, a) of the channel signal t_{i,a}(k)]
where SBL(i, a) represents the sound pressure level of the channel signal t_{i,a}(k), a indicates that the channel signal belongs to the first channel combination, i indicates that the first channel combination belongs to the i-th frame of the voice signal, k represents a frequency point, and n represents the number of sampling points;
Step 3.2, taking the current frame i as the center, calculate the sound pressure level of the channel signal corresponding to t_{i,a}(k) in each of the m frames of voice signal before and after it, and then obtain the average sound pressure level of the channel signal t_{i,a}(k):
M_SBL(i, a) = (1/(2m+1)) × Σ_{g = i-m}^{i+m} SBL(g, a)
where M_SBL(i, a) represents the average sound pressure level of the channel signal t_{i,a}(k), and g is the summation index;
Step 3.3, obtain the preliminary compensation result of the sound pressure level of the channel signal t_{i,a}(k) from its sound pressure level and average sound pressure level:
[Equation image: preliminary compensation result new1_SBL(i, a)]
where new1_SBL(i, a) represents the preliminary compensation result of the sound pressure level of the channel signal t_{i,a}(k), T_n denotes the hearing threshold of a normal person, U_n the pain threshold of a normal person, M_n the optimal threshold of a normal person, T_u the hearing threshold of the patient, U_u the pain threshold of the patient, and M_u the optimal threshold of the patient;
Step 3.4, obtain the final compensation result of the sound pressure level of the channel signal t_{i,a}(k) from its sound pressure level and the preliminary compensation result:
[Equation image: final compensation result new2_SBL(i, a)]
where new2_SBL(i, a) represents the final compensation result of the sound pressure level of the channel signal t_{i,a}(k);
Step 3.5, apply gain compensation to the channel signal t_{i,a}(k) based on its sound pressure level and the final compensation result:
new_t_{i,a}(k) = t_{i,a}(k) × 10^((new2_SBL(i, a) - SBL(i, a)) / 20)
where new_t_{i,a}(k) represents the gain-compensated channel signal t_{i,a}(k).
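Step 3.5 (and the matching step 4.4 below) reduces to converting a dB-domain correction of the sound pressure level into a linear multiplier on the channel signal; a minimal sketch, with a helper name of our own choosing:

```python
import numpy as np

def apply_db_correction(channel_signal, spl_db, compensated_spl_db):
    """Scale a channel signal so that its sound pressure level moves from
    spl_db to compensated_spl_db, i.e. new_t = t * 10**((new2_SBL - SBL) / 20)."""
    gain = 10.0 ** ((compensated_spl_db - spl_db) / 20.0)
    return np.asarray(channel_signal) * gain
```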
More preferably, in step 3.2, m ≥ 10.
Further preferably, in step 4, performing nonlinear gain on each channel signal in the second channel combination specifically includes:
Step 4.1, obtain the sound pressure level of the channel signal t_{i,b}(k) in the second channel combination:
[Equation image: sound pressure level SBL(i, b) of the channel signal t_{i,b}(k)]
where SBL(i, b) represents the sound pressure level of the channel signal t_{i,b}(k), b indicates that the channel signal belongs to the second channel combination, i indicates that the second channel combination belongs to the i-th frame of the voice signal, k represents a frequency point, and n represents the number of sampling points;
Step 4.2, obtain the preliminary compensation result of the sound pressure level of the channel signal t_{i,b}(k) from its sound pressure level:
[Equation image: preliminary compensation result new1_SBL(i, b)]
where new1_SBL(i, b) represents the preliminary compensation result of the sound pressure level of the channel signal t_{i,b}(k), T_n denotes the hearing threshold of a normal person, U_n the pain threshold of a normal person, M_n the optimal threshold of a normal person, T_u the hearing threshold of the patient, U_u the pain threshold of the patient, and M_u the optimal threshold of the patient;
Step 4.3, obtain the final compensation result of the sound pressure level of the channel signal t_{i,b}(k) from its sound pressure level and the preliminary compensation result:
[Equation images: final compensation result new2_SBL(i, b) and the adjustment parameters λ1 and λ2]
where new2_SBL(i, b) represents the final compensation result of the sound pressure level of the channel signal t_{i,b}(k), and λ1 and λ2 denote adjustment parameters;
Step 4.4, apply gain compensation to the channel signal t_{i,b}(k) based on its sound pressure level and the final compensation result:
new_t_{i,b}(k) = t_{i,b}(k) × 10^((new2_SBL(i, b) - SBL(i, b)) / 20)
where new_t_{i,b}(k) represents the gain-compensated channel signal t_{i,b}(k).
Preferably, in step 2, the filtering and transforming are performed on the voice signals, specifically:
the WOLA filter transformation is performed on the voice signals.
Further preferably, in step 2, the filtering and transforming are performed on the voice signals to obtain a plurality of channel signals, specifically:
and performing filtering transformation on each frame of voice signal, and transforming each frame of voice signal to obtain 16 channel signals.
In order to achieve the above object, the present invention further provides an adaptive multi-channel loudness compensation system for a digital hearing aid, comprising a memory and a processor, wherein the memory stores a computer program, and wherein the processor implements the steps of the above method when executing the computer program.
To achieve the above object, the present invention also provides a hearing aid chip having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the above method.
The invention provides a self-adaptive multichannel loudness compensation method for a hearing aid. The signal is first decomposed into multiple channels, and the channels are then divided into two groups that are processed differently: the combination of channels with the smaller audible range is gained only with a simple piecewise linear method, while the other channel combination is adjusted adaptively according to the distance between the sound pressure level of the channel signal and the hearing threshold, pain threshold, and optimal threshold of a normal person, which effectively improves the voice quality after gain compensation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of a loudness compensation method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that all directional indicators (such as up, down, left, right, front, and rear) in the embodiments of the present invention are only used to explain the relative positional relationship, movement, and the like between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
In addition, the descriptions related to "first", "second", etc. in the present invention are only for descriptive purposes and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "connected," "secured," and the like are to be construed broadly, and for example, "secured" may be a fixed connection, a removable connection, or an integral part; the connection can be mechanical connection, electrical connection, physical connection or wireless communication connection; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In addition, the technical solutions in the embodiments of the present invention may be combined with each other, but it must be based on the realization of those skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should not be considered to exist, and is not within the protection scope of the present invention.
This embodiment discloses a hearing aid-oriented adaptive multichannel loudness compensation method. A WOLA (weighted overlap-add) filter first decomposes the signal into multiple channels, and the channels are then divided into two groups that are processed differently: the combination of channels with the smaller audible range is gained only with a simple piecewise linear method, while the other channel combination is adaptively adjusted according to the distance between the sound pressure level of the channel signal and the hearing threshold, pain threshold, and optimal threshold of a normal person. Referring to fig. 1, the method comprises the following specific steps:
Step 1, obtain the input signal y(n) of the microphone of the digital hearing aid, perform AD conversion on y(n) to obtain a digital signal, and frame the converted digital signal to obtain the voice signal y_i(n), where i represents the frame number and n represents the number of sampling points;
Step 2, perform a WOLA filter transformation on the voice signal y_i(n) to obtain 16 channel signals, denoted {t_{i,j}(k) | j = 1, 2, ..., 16}, where k denotes a frequency point and j denotes the channel index;
all channel signals are then divided into a first channel combination and a second channel combination according to the audible range, wherein the audible range of the channel signals in the second channel combination is much larger than the audible range of the channel signals in the first channel combination:
In this embodiment, since most of the audible voice signal among the 16 channel signals lies in the middle channels, the channel signals for j = 5 to j = 12 are considered to contain most of the audible voice. The channel signals for j = 1, 2, 3, 4, 13, 14, 15, and 16 are therefore grouped into the first channel combination, denoted {t_{i,a}(k) | a = 1, 2, 3, 4, 13, 14, 15, 16}, and the channel signals for j = 5, 6, 7, 8, 9, 10, 11, and 12 are grouped into the second channel combination, denoted {t_{i,b}(k) | b = 5, 6, 7, 8, 9, 10, 11, 12}; a sketch of this grouping is shown below.
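A minimal sketch of the grouping just described; the mapping container and helper name are assumptions for illustration.

```python
FIRST_COMBINATION = (1, 2, 3, 4, 13, 14, 15, 16)   # piecewise linear gain
SECOND_COMBINATION = (5, 6, 7, 8, 9, 10, 11, 12)   # nonlinear (adaptive) gain

def split_channels(channels):
    """channels: mapping from channel index j = 1..16 to the channel signal t_{i,j}(k)."""
    first = {j: channels[j] for j in FIRST_COMBINATION}
    second = {j: channels[j] for j in SECOND_COMBINATION}
    return first, second
```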
Step 3, performing piecewise linear gain on each channel signal in the first channel combination to obtain the gain-compensated first channel combination, taking the channel signal in the ith frame of speech signal as an example, specifically including:
Step 3.1, obtain the sound pressure level of the channel signal t_{i,a}(k) in the first channel combination:
[Equation image: sound pressure level SBL(i, a) of the channel signal t_{i,a}(k)]
where SBL(i, a) represents the sound pressure level of the channel signal t_{i,a}(k), a indicates that the channel signal belongs to the first channel combination, i indicates that the first channel combination belongs to the i-th frame of the voice signal, k represents a frequency point, and n represents the number of sampling points;
Step 3.2, taking the current frame i as the center, calculate the sound pressure level of the channel signal corresponding to t_{i,a}(k) in each of the m frames of voice signal before and after it, and then obtain the average sound pressure level of the channel signal t_{i,a}(k):
M_SBL(i, a) = (1/(2m+1)) × Σ_{g = i-m}^{i+m} SBL(g, a)
where M_SBL(i, a) represents the average sound pressure level of the channel signal t_{i,a}(k) and g is the summation index; for example, when m is 10, i is 12, and a is 1, the average sound pressure level of the first channel signal in the i-th frame voice signal is the average of the sound pressure levels of the first channel signals of the 2nd to 22nd frame voice signals (a code sketch of this averaging follows);
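The frame-averaged sound pressure level of step 3.2 can be sketched as follows; the array layout and the clamping at the ends of the recording are our assumptions, while the averaging window of 2m + 1 frames centred on frame i follows the text.

```python
import numpy as np

def average_spl(spl_history, i, a, m=10):
    """Average SBL of channel a over frames i-m .. i+m.

    spl_history[g, a] holds SBL(g, a); row 0 is unused so that row indices match
    the 1-based frame numbers.  With m = 10, i = 12, a = 1 this averages frames
    2..22, matching the example in the text.
    """
    spl_history = np.asarray(spl_history, dtype=float)
    lo = max(i - m, 1)
    hi = min(i + m, spl_history.shape[0] - 1)
    return spl_history[lo:hi + 1, a].mean()
```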
Step 3.3, obtain the preliminary compensation result of the sound pressure level of the channel signal t_{i,a}(k) from its sound pressure level and average sound pressure level:
[Equation image: preliminary compensation result new1_SBL(i, a)]
where new1_SBL(i, a) represents the preliminary compensation result of the sound pressure level of the channel signal t_{i,a}(k), T_n denotes the hearing threshold of a normal person, U_n the pain threshold of a normal person, M_n the optimal threshold of a normal person, T_u the hearing threshold of the patient, U_u the pain threshold of the patient, and M_u the optimal threshold of the patient;
Step 3.4, obtain the final compensation result of the sound pressure level of the channel signal t_{i,a}(k) from its sound pressure level and the preliminary compensation result:
[Equation image: final compensation result new2_SBL(i, a)]
where new2_SBL(i, a) represents the final compensation result of the sound pressure level of the channel signal t_{i,a}(k);
Step 3.5, apply gain compensation to the channel signal t_{i,a}(k) based on its sound pressure level and the final compensation result:
new_t_{i,a}(k) = t_{i,a}(k) × 10^((new2_SBL(i, a) - SBL(i, a)) / 20)
where new_t_{i,a}(k) represents the gain-compensated channel signal t_{i,a}(k); an illustrative sketch of a threshold-to-threshold mapping of the kind described in steps 3.3 and 3.4 is given below.
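The exact piecewise formulas of steps 3.3 and 3.4 appear only in the equation images above, so the sketch below shows one plausible piecewise-linear mapping of the input level from the normal-hearing ranges [T_n, M_n] and [M_n, U_n] onto the patient ranges [T_u, M_u] and [M_u, U_u]. It is an assumption for illustration only and, unlike the patent's two-stage formula, it ignores the frame-averaged level M_SBL(i, a).

```python
def piecewise_linear_compensation(spl_db, T_n, M_n, U_n, T_u, M_u, U_u):
    """Map an input level on the normal-hearing scale onto the patient's scale:
    [T_n, M_n] -> [T_u, M_u] and [M_n, U_n] -> [M_u, U_u], clamped outside."""
    if spl_db <= T_n:
        return T_u
    if spl_db >= U_n:
        return U_u
    if spl_db <= M_n:                     # quiet-to-comfortable segment
        return T_u + (spl_db - T_n) * (M_u - T_u) / (M_n - T_n)
    return M_u + (spl_db - M_n) * (U_u - M_u) / (U_n - M_n)   # comfortable-to-loud segment
```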
Step 4, performing nonlinear gain on each channel signal in the second channel combination to obtain a gain-compensated second channel combination, which specifically includes:
Step 4.1, obtain the sound pressure level of the channel signal t_{i,b}(k) in the second channel combination:
[Equation image: sound pressure level SBL(i, b) of the channel signal t_{i,b}(k)]
where SBL(i, b) represents the sound pressure level of the channel signal t_{i,b}(k), b indicates that the channel signal belongs to the second channel combination, i indicates that the second channel combination belongs to the i-th frame of the voice signal, k represents a frequency point, and n represents the number of sampling points;
Step 4.2, obtain the preliminary compensation result of the sound pressure level of the channel signal t_{i,b}(k) from its sound pressure level; in this embodiment, the sound pressure level of the channel signal is preliminarily compensated using a WDRC method:
[Equation image: preliminary compensation result new1_SBL(i, b)]
where new1_SBL(i, b) represents the preliminary compensation result of the sound pressure level of the channel signal t_{i,b}(k), T_n denotes the hearing threshold of a normal person, U_n the pain threshold of a normal person, M_n the optimal threshold of a normal person, T_u the hearing threshold of the patient, U_u the pain threshold of the patient, and M_u the optimal threshold of the patient;
Step 4.3, adaptively adjust the parameters according to the current sound pressure level to reduce signal distortion and further improve the compensation effect; that is, obtain the final compensation result of the sound pressure level of the channel signal t_{i,b}(k) from its sound pressure level and the preliminary compensation result:
[Equation images: final compensation result new2_SBL(i, b) and the adjustment parameters λ1 and λ2]
where new2_SBL(i, b) represents the final compensation result of the sound pressure level of the channel signal t_{i,b}(k), and λ1 and λ2 denote adjustment parameters tuned according to the distance between the original sound pressure level and the hearing threshold, pain threshold, and optimal threshold of a normal person. When the sound pressure level lies between the hearing threshold and the optimal threshold, it is adaptively increased, and the larger the level the stronger the adjustment; when the sound pressure level lies between the optimal threshold and the pain threshold, it is adaptively decreased, and the larger the level the weaker the adjustment. This adaptive adjustment effectively reduces signal distortion and further improves the compensation effect (a hedged sketch of this behaviour is given after step 4.4);
Step 4.4, apply gain compensation to the channel signal t_{i,b}(k) based on its sound pressure level and the final compensation result:
new_t_{i,b}(k) = t_{i,b}(k) × 10^((new2_SBL(i, b) - SBL(i, b)) / 20)
where new_t_{i,b}(k) represents the gain-compensated channel signal t_{i,b}(k).
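Because the formulas containing λ1 and λ2 appear only in the equation images, the sketch below merely reproduces the behaviour described in steps 4.2 and 4.3: a WDRC-style mapping of the normal range onto the patient range, followed by an adaptive correction that grows with the level between the hearing threshold and the optimal threshold and shrinks with the level between the optimal threshold and the pain threshold. The linear mapping and the specific correction terms are assumptions, not the patent's formulas.

```python
def adaptive_nonlinear_compensation(spl_db, T_n, M_n, U_n, T_u, M_u, U_u,
                                    lam1=3.0, lam2=3.0):
    """Hedged sketch of steps 4.2-4.3; lam1/lam2 stand in for lambda_1/lambda_2."""
    # Step 4.2 (assumed form): WDRC-style mapping of [T_n, U_n] onto [T_u, U_u].
    clamped = min(max(spl_db, T_n), U_n)
    new1 = T_u + (clamped - T_n) * (U_u - T_u) / (U_n - T_n)

    # Step 4.3 (assumed form): adaptive correction driven by where the original
    # level sits relative to the normal-person thresholds.
    if T_n <= spl_db <= M_n:
        return new1 + lam1 * (spl_db - T_n) / (M_n - T_n)   # louder -> stronger boost
    if M_n < spl_db <= U_n:
        return new1 - lam2 * (U_n - spl_db) / (U_n - M_n)   # louder -> weaker cut
    return new1
```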
Step 5, synthesize all 16 gain-compensated channel signals with a WOLA synthesis filter to obtain a complete frame of signal, yielding the gain-compensated voice signal, which is then output.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (7)

1. A method of adaptive multi-channel loudness compensation for a hearing aid, comprising the steps of:
step 1, acquiring an input signal of a microphone of a digital hearing aid, performing AD conversion on the input signal to obtain a digital signal, and framing the converted digital signal to obtain a voice signal;
step 2, filtering and transforming the voice signals to obtain a plurality of channel signals, and dividing all the channel signals into a first channel combination and a second channel combination according to an audible range, wherein the audible range of the channel signals in the second channel combination is far larger than that of the channel signals in the first channel combination;
step 3, performing piecewise linear gain on each channel signal in the first channel combination to obtain a gain-compensated first channel combination;
step 4, carrying out nonlinear gain on each channel signal in the second channel combination to obtain a gain-compensated second channel combination;
step 5, synthesizing each channel signal in the first channel combination after gain compensation and each channel signal in the second channel combination after gain compensation to obtain a voice signal after gain compensation and output the voice signal;
in step 3, performing piecewise linear gain on each channel signal in the first channel combination specifically includes:
Step 3.1, obtain the sound pressure level of the channel signal t_{i,a}(k) in the first channel combination:
[Equation image: sound pressure level SBL(i, a) of the channel signal t_{i,a}(k)]
where SBL(i, a) represents the sound pressure level of the channel signal t_{i,a}(k), a indicates that the channel signal belongs to the first channel combination, i indicates that the first channel combination belongs to the i-th frame of the voice signal, k represents a frequency point, and n represents the number of sampling points;
Step 3.2, taking the current frame i as the center, calculate the sound pressure level of the channel signal corresponding to t_{i,a}(k) in each of the m frames of voice signal before and after it, and then obtain the average sound pressure level of the channel signal t_{i,a}(k):
M_SBL(i, a) = (1/(2m+1)) × Σ_{g = i-m}^{i+m} SBL(g, a)
where M_SBL(i, a) represents the average sound pressure level of the channel signal t_{i,a}(k), and g is the summation index;
Step 3.3, obtain the preliminary compensation result of the sound pressure level of the channel signal t_{i,a}(k) from its sound pressure level and average sound pressure level:
[Equation image: preliminary compensation result new1_SBL(i, a)]
where new1_SBL(i, a) represents the preliminary compensation result of the sound pressure level of the channel signal t_{i,a}(k), T_n denotes the hearing threshold of a normal person, U_n the pain threshold of a normal person, M_n the optimal threshold of a normal person, T_u the hearing threshold of the patient, U_u the pain threshold of the patient, and M_u the optimal threshold of the patient;
Step 3.4, obtain the final compensation result of the sound pressure level of the channel signal t_{i,a}(k) from its sound pressure level and the preliminary compensation result:
[Equation image: final compensation result new2_SBL(i, a)]
where new2_SBL(i, a) represents the final compensation result of the sound pressure level of the channel signal t_{i,a}(k);
Step 3.5, apply gain compensation to the channel signal t_{i,a}(k) based on its sound pressure level and the final compensation result:
new_t_{i,a}(k) = t_{i,a}(k) × 10^((new2_SBL(i, a) - SBL(i, a)) / 20)
where new_t_{i,a}(k) represents the gain-compensated channel signal t_{i,a}(k).
2. The hearing aid-oriented adaptive multi-channel loudness compensation method according to claim 1, characterized in that in step 3.2, m is greater than or equal to 10.
3. The hearing aid-oriented adaptive multichannel loudness compensation method according to claim 1, wherein in step 4, performing nonlinear gain on each channel signal in the second channel combination specifically comprises:
Step 4.1, obtain the sound pressure level of the channel signal t_{i,b}(k) in the second channel combination:
[Equation image: sound pressure level SBL(i, b) of the channel signal t_{i,b}(k)]
where SBL(i, b) represents the sound pressure level of the channel signal t_{i,b}(k), b indicates that the channel signal belongs to the second channel combination, i indicates that the second channel combination belongs to the i-th frame of the voice signal, k represents a frequency point, and n represents the number of sampling points;
Step 4.2, obtain the preliminary compensation result of the sound pressure level of the channel signal t_{i,b}(k) from its sound pressure level:
[Equation image: preliminary compensation result new1_SBL(i, b)]
where new1_SBL(i, b) represents the preliminary compensation result of the sound pressure level of the channel signal t_{i,b}(k), T_n denotes the hearing threshold of a normal person, U_n the pain threshold of a normal person, M_n the optimal threshold of a normal person, T_u the hearing threshold of the patient, U_u the pain threshold of the patient, and M_u the optimal threshold of the patient;
Step 4.3, obtain the final compensation result of the sound pressure level of the channel signal t_{i,b}(k) from its sound pressure level and the preliminary compensation result:
[Equation images: final compensation result new2_SBL(i, b) and the adjustment parameters λ1 and λ2]
where new2_SBL(i, b) represents the final compensation result of the sound pressure level of the channel signal t_{i,b}(k), and λ1 and λ2 denote adjustment parameters;
Step 4.4, apply gain compensation to the channel signal t_{i,b}(k) based on its sound pressure level and the final compensation result:
new_t_{i,b}(k) = t_{i,b}(k) × 10^((new2_SBL(i, b) - SBL(i, b)) / 20)
where new_t_{i,b}(k) represents the gain-compensated channel signal t_{i,b}(k).
4. The adaptive multi-channel loudness compensation method according to claim 1, 2 or 3, wherein in step 2, the filtering transformation is performed on each of the speech signals, specifically:
the WOLA filter transformation is performed on the voice signals.
5. The adaptive multi-channel loudness compensation method according to claim 1, 2 or 3, wherein in step 2, the filtering transformation is performed on each of the speech signals to obtain a plurality of channel signals, specifically:
and performing filtering transformation on each frame of voice signal, and transforming each frame of voice signal to obtain 16 channel signals.
6. An adaptive multi-channel loudness compensation system for a digital hearing aid, comprising a memory storing a computer program and a processor, characterized in that the processor realizes the steps of the method according to any of claims 1 to 5 when executing the computer program.
7. A hearing aid chip having a computer program stored thereon, characterized in that the computer program, when being executed by a processor, realizes the steps of the method according to any one of claims 1 to 5.
CN202110019096.6A 2021-01-07 2021-01-07 Self-adaptive multi-channel loudness compensation method for hearing aid and hearing aid chip Active CN112866889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110019096.6A CN112866889B (en) 2021-01-07 2021-01-07 Self-adaptive multi-channel loudness compensation method for hearing aid and hearing aid chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110019096.6A CN112866889B (en) 2021-01-07 2021-01-07 Self-adaptive multi-channel loudness compensation method for hearing aid and hearing aid chip

Publications (2)

Publication Number Publication Date
CN112866889A CN112866889A (en) 2021-05-28
CN112866889B true CN112866889B (en) 2022-03-25

Family

ID=76004949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110019096.6A Active CN112866889B (en) 2021-01-07 2021-01-07 Self-adaptive multi-channel loudness compensation method for hearing aid and hearing aid chip

Country Status (1)

Country Link
CN (1) CN112866889B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102984634A (en) * 2011-11-22 2013-03-20 南京工程学院 Digital hearing-aid unequal-width sub-band automatic gain control method
WO2015166516A1 (en) * 2014-04-28 2015-11-05 Linear Srl Method and apparatus for preserving the spectral clues of an audio signal altered by the physical presence of a digital hearing aid and tuning thereafter.

Also Published As

Publication number Publication date
CN112866889A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
US11270688B2 (en) Deep neural network based audio processing method, device and storage medium
US11812223B2 (en) Electronic device using a compound metric for sound enhancement
JP4705300B2 (en) Hearing aid incorporating signal processing technology
JP2904272B2 (en) Digital hearing aid and hearing aid processing method thereof
US6072885A (en) Hearing aid device incorporating signal processing techniques
AU2008361614B2 (en) Method for sound processing in a hearing aid and a hearing aid
WO2020029998A1 (en) Electroencephalogram-assisted beam former, beam forming method and ear-mounted hearing system
CN113993053B (en) Channel self-adaptive digital hearing aid wide dynamic range compression method
JP3269669B2 (en) Hearing compensator
KR100603042B1 (en) Hearing aid having noise reduction function and method for reducing noise
US9408001B2 (en) Separate inner and outer hair cell loss compensation
CN112866889B (en) Self-adaptive multi-channel loudness compensation method for hearing aid and hearing aid chip
KR100633122B1 (en) Method for embodying function of hearing aid using personal digital assistant and apparatus thereof
US11445307B2 (en) Personal communication device as a hearing aid with real-time interactive user interface
US9124963B2 (en) Hearing apparatus having an adaptive filter and method for filtering an audio signal
CN114866939A (en) Novel superstrong audiphone speech processing system who makes an uproar that falls
Preves Approaches to noise reduction in analog, digital, and hybrid hearing aids
CN112738701B (en) Full-digital PWM audio output method for hearing aid chip and hearing aid chip
JPS5879400A (en) Hearing aid
EP4040806A2 (en) A hearing device comprising a noise reduction system
EP4054210A1 (en) A hearing device comprising a delayless adaptive filter
CN111755023A (en) Frequency shift real-time loudness compensation method based on equal loudness curve
JP2000352991A (en) Voice synthesizer with spectrum correction function
JP3616797B2 (en) Auditory organ function promoting device
Mota-Gonzalez et al. Hearing aid based on frequency transposing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant