CN103761974A - Cochlear implant - Google Patents


Info

Publication number
CN103761974A
CN103761974A
Authority
CN
China
Prior art keywords
noise
microphone
cochlear implant
module
Prior art date
Legal status
Granted
Application number
CN201410042057.8A
Other languages
Chinese (zh)
Other versions
CN103761974B (en)
Inventor
樊伟
王振
许长建
刘新东
孙增军
Current Assignee
Lishengte Medical Science & Tech Co Ltd
Original Assignee
Lishengte Medical Science & Tech Co Ltd
Priority date
Filing date
Publication date
Application filed by Lishengte Medical Science & Tech Co Ltd filed Critical Lishengte Medical Science & Tech Co Ltd
Priority to CN201410042057.8A priority Critical patent/CN103761974B/en
Publication of CN103761974A publication Critical patent/CN103761974A/en
Application granted granted Critical
Publication of CN103761974B publication Critical patent/CN103761974B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Soundproofing, Sound Blocking, And Sound Damping (AREA)

Abstract

The invention provides a cochlear implant comprising an external unit and an internal unit. The external unit carries a front microphone and a rear microphone whose sound tubes point in different directions; both receive the original speech signals. A two-way microphone speech noise-reduction processing system is connected to the two microphones and denoises the two original speech signals to extract a clean speech signal. The cochlear implant adopts a two-stage noise-reduction scheme: the sound-tube directions of the two microphones are ingeniously designed and combined with power spectral subtraction to estimate the noisy speech and the noise, so the algorithmic complexity is greatly reduced compared with traditional spectral subtraction. In addition, the scheme overcomes the inaccurate noise estimation of noisy speech signals in traditional spectral subtraction, and avoids the adverse effect on cochlear implant use of the "music noise" introduced by traditional spectral-subtraction denoising.

Description

Artificial cochlea
Technical Field
The invention belongs to the technical field of electronic products, and relates to a cochlear implant.
Background
A cochlear implant is an electronic device that restores or provides hearing to adults and children with severe, profound, or total deafness. It substitutes for damaged inner-ear hair cells: external sound is converted into neural electrical pulse signals that bypass the necrotic hair cells of the auditory system, directly stimulate the spiral ganglia of the auditory nerve, and transmit the information to the brain. In cases of severe hearing loss, cochlear implants are the only hope and choice for deaf patients. A cochlear implant mainly comprises an external unit and an internal unit; its working principle is shown in Fig. 1. A microphone collects the external sound signal; the sound processor processes it with a sound-processing strategy, encodes the result, and transmits it wirelessly by radio frequency through a coil. After the subcutaneous coil in the body receives the radio-frequency signal, the chip decodes it, and the stimulator sends the corresponding stimulation pulse signals to the corresponding electrodes according to the decoded information. The pulse signals directly stimulate the auditory nerve and are transmitted to the auditory center of the brain, producing the sensation of hearing. In other words, the cochlear implant bypasses the auditory pathway in front of the inner-ear hair cells, directly stimulates the auditory nerve, and ultimately produces a perception of sound in the brain.
At present there are about eight million severely deaf patients nationwide, for whom the cochlear implant is the best choice. The cochlear implant converts sound into an electrical signal in a certain coding form with an external speech processor, and directly excites the auditory nerve through an electrode system implanted in the body, restoring or rebuilding the auditory function of the deaf.
Reception of external speech is mainly realized by a microphone. Cochlear implant microphones may be directional or omnidirectional, their number has gone from a single channel to two channels, and the noise-reduction effect keeps improving. One of the core technologies of cochlear implants is denoising the speech signal received by the external microphone: it greatly reduces the interfering stimulation of background noise and various interference signals on the implant electrodes, so that patients and their families accept cochlear implantation more readily, which promotes the development of the social welfare industry. In the existing cochlear implant field, a common noise-reduction method is spectral subtraction. Spectral subtraction is a common speech-enhancement method characterized by a small computational load and easy real-time implementation. However, common spectral subtraction replaces the current noise spectral component with the average noise statistics of the silent segments of the speech signal. In practice, on the one hand, finding the "silent segments of the speech signal" without capturing speech requires a fairly robust algorithm; on the other hand, at low signal-to-noise ratio the silent segments are confused with speech, leaving a large amount of residual "music noise" that degrades the intelligibility of the enhanced speech. Addressing these problems, Udrea et al. improved the original spectral subtraction by introducing an over-subtraction coefficient: by adjusting it, more or less noise can be deliberately subtracted to better bring out the speech power spectrum.
However, how to choose the over-subtraction coefficient so that just the right amount of noise is subtracted becomes a new problem. The key to noise reduction by spectral subtraction is therefore that the estimated noise power spectrum closely matches the power spectrum of the background noise of the speech signal, i.e. real-time, accurate estimation of the noise power spectrum.
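As a concrete illustration, a minimal sketch of power spectral subtraction with an over-subtraction coefficient might look as follows. The function and parameter names are illustrative and not taken from the patent; the spectral floor `beta` is one common way of limiting residual "music noise":

```python
import numpy as np

def spectral_subtract(noisy_frame, noise_psd_est, alpha=2.0, beta=0.01):
    """Classic over-subtraction spectral subtraction on one windowed frame.

    noisy_frame   : 1-D time-domain frame (already windowed)
    noise_psd_est : estimated noise power spectrum (rfft length)
    alpha         : over-subtraction coefficient (>1 subtracts extra noise)
    beta          : spectral floor that limits "music noise" artifacts
    """
    spec = np.fft.rfft(noisy_frame)
    power = np.abs(spec) ** 2
    # Subtract a scaled noise estimate; clamp at a small fraction of the
    # noisy power instead of zero to suppress isolated "musical" peaks.
    clean_power = np.maximum(power - alpha * noise_psd_est, beta * power)
    # Recombine the cleaned magnitude with the noisy phase.
    clean_spec = np.sqrt(clean_power) * np.exp(1j * np.angle(spec))
    return np.fft.irfft(clean_spec, n=len(noisy_frame))
```

Tuning `alpha` trades residual noise against speech distortion, which is exactly the difficulty the paragraph above points out.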
Disclosure of Invention
In view of the above shortcomings of the prior art, the present invention provides a cochlear implant that solves the problem in existing cochlear implants that the "music noise" produced by spectral-subtraction noise reduction degrades the intelligibility of the enhanced speech.
To achieve the above and other related objects, the present invention provides a cochlear implant comprising a cochlear implant external unit and a cochlear implant internal unit. The external unit comprises a front microphone and a rear microphone, both arranged on the external unit; their sound tubes point in different directions, and both receive the original speech signals. A two-way microphone speech noise-reduction processing system comprises: an A/D sampling module, connected to the front and rear microphones, which A/D-samples the two original speech signals received by the two microphones to obtain two paths of digital speech data; a framing and windowing module, connected to the A/D sampling module, which frames and windows the two paths of digital speech data; a noisy speech signal estimation module, connected to the framing and windowing module, which sums the two framed and windowed paths and averages the result to obtain the noisy speech signal estimate; a noise signal estimation module, connected to the framing and windowing module, which differences the two framed and windowed paths and averages the result to obtain the noise signal estimate; a pre-emphasis processing module, connected to the noisy speech signal estimation module and the noise signal estimation module, which pre-emphasizes the noisy speech signal estimate and the noise signal estimate, respectively; a speech signal amplitude spectrum acquisition module, connected to the pre-emphasis processing module,
which applies a short-time fast Fourier transform to each pre-emphasized estimate, subtracts the power spectrum of the noise signal estimate from that of the noisy speech signal estimate to obtain the difference power spectrum, and from it computes the amplitude spectrum estimate of the speech signal; a threshold filtering module, connected to the speech signal amplitude spectrum acquisition module, which compares the logarithm of the amplitude spectrum estimate with a preset threshold and zeros the parts of the estimate below the threshold; and a clean speech signal acquisition module, connected to the threshold filtering module, which applies a short-time inverse Fourier transform to the product of the threshold-zeroed amplitude spectrum estimate and the phase of the noisy speech signal, and overlap-adds the inverse transforms of adjacent frames to obtain the denoised time-domain clean speech signal.
Preferably, the model function of the framing and windowing module is:

s_2(n, i) = s_1(n, i)·w(n), 0 ≤ n ≤ N−1
v_2(n, i) = v_1(n, i)·w(n), 0 ≤ n ≤ N−1,

where n is the sample index, i indexes the i-th frame of digital speech data, w(n) is the window function, and N is the window length.
Preferably, the model function of the noisy speech signal estimation module is s_3(n, i) = (s_2(n, i) + v_2(n, i))/2, and the model function of the noise signal estimation module is v_3(n, i) = (s_2(n, i) − v_2(n, i))/2.
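The sum/difference estimates above can be sketched in a few lines (names are illustrative):

```python
import numpy as np

def two_mic_estimates(s_2, v_2):
    """Sum/difference estimates from the two windowed microphone frames.

    s_2 : frame from the front microphone
    v_2 : frame from the rear microphone
    Returns (noisy-speech estimate s_3, noise estimate v_3).
    """
    s_3 = (s_2 + v_2) / 2.0   # averaging reinforces the common speech component
    v_3 = (s_2 - v_2) / 2.0   # differencing cancels it, leaving mostly noise
    return s_3, v_3
```

Note that s_3 + v_3 recovers the front-microphone frame and s_3 − v_3 the rear one, so no information is lost by this first-stage estimate.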
Preferably, the pre-emphasis processing module is a first-order FIR high-pass digital filter with model function y(n) = x(n) − α·x(n−1), where α is the pre-emphasis coefficient, 0.9 < α < 1.0; the pre-emphasized outputs of s_3(n, i) and v_3(n, i) are denoted s(n, i) and v(n, i).
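A vectorized sketch of this pre-emphasis filter, under the assumption that x(−1) = 0:

```python
import numpy as np

def pre_emphasis(x, alpha=0.95):
    """First-order FIR high-pass: y(n) = x(n) - alpha * x(n-1), 0.9 < alpha < 1.0."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]                     # x(-1) taken as 0 (an assumption)
    y[1:] = x[1:] - alpha * x[:-1]  # the difference equation for n >= 1
    return y
```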
Preferably, the speech signal amplitude spectrum acquisition module includes: a frequency characteristic acquisition unit for the noisy speech signal estimate, a frequency characteristic acquisition unit for the noise signal estimate, a difference power spectrum acquisition unit, and a speech signal amplitude spectrum estimation unit. The model function of the frequency characteristic acquisition unit for the noisy speech signal estimate is:

S(k, i) = SFFT(s(n, i)) = |S(k, i)|·e^{jθ(k,i)}, 0 ≤ k ≤ N−1,

where θ(k, i) is the phase-frequency characteristic function of the noisy speech signal estimate. The model function of the frequency characteristic acquisition unit for the noise signal estimate is:

V(k, i) = SFFT(v(n, i)) = |V(k, i)|·e^{jφ(k,i)}, 0 ≤ k ≤ N−1,

where φ(k, i) is the phase-frequency characteristic function of the noise signal estimate. The model function of the difference power spectrum acquisition unit is:

|Δ(k, i)|² = |S(k, i)|² − |V(k, i)|², 0 ≤ k ≤ N−1.

The model function of the speech signal amplitude spectrum estimation unit is:

Δ(k, i) = √(|S(k, i)|² − |V(k, i)|²), 0 ≤ k ≤ N−1.
preferably, the model function of the threshold filtering module is:the pure voice signal acquisition module comprises a frequency spectrum estimation unit, a short-time inverse Fourier transform unit and an overlap addition unit of the connected voice signals; the model function of the spectral estimation unit of the speech signal is: s '(k, i) = Δ' (k, i) expjθ(k,i)Wherein the phase expjθ(k,i)A spectrum S (k, i) taken from a noisy speech signal S (n, i); the model function of the short-time inverse Fourier transform unit is as follows: s '(k, i) = real (ISFFT (Δ' (k, i) exp)jθ(k,i)) Real is the operation of real number; and the overlap addition unit performs overlap addition on s' (k, i) of the adjacent frames output by the short-time inverse Fourier transform unit to obtain a pure speech signal of a denoised time domain.
Preferably, the front microphone is a directional microphone and the rear microphone is an omni-directional microphone.
Preferably, the front microphone is an omnidirectional microphone, and the rear microphone is an omnidirectional microphone.
Preferably, the sound pipe of the front microphone points to the front of the cochlear implant external unit, and the sound pipe of the rear microphone points to the rear of the cochlear implant external unit.
Preferably, the sound pipe of the front microphone points to the upper part of the cochlear implant external unit, and the sound pipe of the rear microphone points to the rear part of the cochlear implant external unit.
As described above, the two-way microphone voice noise reduction processing method and system according to the present invention have the following beneficial effects:
The cochlear implant adopts a two-stage noise-reduction scheme: the sound-tube directions of the two microphones are ingeniously designed and then combined with the theory of power spectral subtraction. The first stage uses only simple additions and subtractions to estimate the noisy speech and the noise, so compared with the endpoint-detection algorithm in traditional spectral subtraction the algorithmic complexity is greatly reduced. Moreover, because the noise is estimated in real time, the method suits both stationary and non-stationary noise; it avoids the inaccurate noise estimation of noisy speech signals in traditional spectral subtraction, and also avoids the adverse effect on cochlear implant use of the "music noise" introduced by traditional spectral-subtraction denoising.
Drawings
Fig. 1 is a schematic diagram of the working principle of a conventional cochlear implant.
Fig. 2a is a schematic flow chart of a two-way microphone speech noise reduction processing method according to the present invention.
Fig. 2b is a schematic flow chart of a two-way microphone speech noise reduction processing method according to a second embodiment of the present invention.
Fig. 3 is a schematic diagram illustrating a predetermined threshold according to the present invention.
Fig. 4 is a schematic structural diagram of a two-way microphone speech noise reduction processing system according to the present invention.
Fig. 5 is a structural block diagram of the two microphones of the invention arranged on the cochlear implant external machine.
Fig. 6 is a block diagram of a structure of a speech signal magnitude spectrum obtaining module according to the present invention.
Fig. 7 is a block diagram of a threshold filtering module according to the present invention.
FIG. 8 is a block diagram of a clean speech signal obtaining module according to the present invention.
Description of the element reference numerals
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention.
Please refer to the attached drawings. It should be noted that the drawings provided in the present embodiment are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
The present invention will be described in detail with reference to the following examples and drawings.
Examples
The present embodiment provides a two-way microphone voice noise reduction processing method, as shown in fig. 2a and fig. 2b, the two-way microphone voice noise reduction processing method includes:
A/D sampling is performed on the two original speech signals received by the two microphones, yielding the first path of digital speech data s_1(n), output by the front microphone at time n, and the second path v_1(n), output by the rear microphone. The A/D sampling rate may preferably be set to 16 kHz. The sound tubes of the front and rear microphones point in different directions. The two microphones may both be condenser microphones, or other types such as MEMS microphones; their models may be the same or different. When the types are the same, both may be omnidirectional microphones; when the types differ, one may be directional and one omnidirectional. When the noise-reduction system and method are applied to the cochlear implant external unit, the two microphones may be arranged on its ear hook or at any other suitable position on the external unit. The two microphones may also point in various ways, for example: the sound tube of the front microphone points to the front of the external unit and that of the rear microphone points to the rear; or the sound tube of the front microphone points upward and that of the rear microphone points to the rear.
The two paths of digital speech data are then framed and windowed. Specifically, the first path s_1(n) and the second path v_1(n) are framed and windowed as follows:
s_2(n, i) = s_1(n, i)·w(n), 0 ≤ n ≤ N−1
v_2(n, i) = v_1(n, i)·w(n), 0 ≤ n ≤ N−1,

where w(n) = 0.54 − 0.46·cos(2πn/(N−1)), 0 ≤ n ≤ N−1, is a Hamming window, i indexes the i-th frame of digital speech data, n is the sample index, and N is the window length. The window function may be a Hanning window, a Hamming window, etc.; the Hamming window is used here as an example, and the protection scope of the invention is not limited to the type of window function. A speech signal is short-time stationary and is generally considered approximately unchanged over 10–30 ms; at an A/D sampling rate of 16 kHz this corresponds to an analysis frame of 160–480 samples, i.e. the window-length range. Here the window length is 256; the ratio of frame shift to analysis frame is generally 0–0.5, and the frame shift may be 64.
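The framing and windowing described above (16 kHz sampling, window length 256, frame shift 64, Hamming window) can be sketched as follows; all names are illustrative:

```python
import numpy as np

FS = 16000        # A/D sampling rate (Hz)
N_WIN = 256       # window length (16 ms at 16 kHz)
SHIFT = 64        # frame shift (25% of the window)

def frame_and_window(x, n_win=N_WIN, shift=SHIFT):
    """Split a signal into overlapping Hamming-windowed frames.

    Returns an array of shape (num_frames, n_win); row i corresponds
    to s_2(n, i) = s_1(n, i) * w(n) in the text.
    """
    x = np.asarray(x, dtype=float)
    w = np.hamming(n_win)  # w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1))
    num = 1 + (len(x) - n_win) // shift
    frames = np.stack([x[i * shift : i * shift + n_win] for i in range(num)])
    return frames * w
```

The same routine would be applied to both microphone paths before the sum/difference estimation.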
The two framed and windowed paths are summed and then averaged to obtain the noisy speech signal estimate s_3(n, i) = (s_2(n, i) + v_2(n, i))/2.
The two framed and windowed paths are differenced and then averaged to obtain the noise signal estimate v_3(n, i) = (s_2(n, i) − v_2(n, i))/2. With reasonable type selection and proper placement of the two microphones, this completes the preliminary estimation of the noisy signal and the noise signal and fits the subsequent noise-reduction algorithm model well, laying the foundation for the following processing.
The noisy speech signal estimate and the noise signal estimate are each pre-emphasized. The pre-emphasis may be implemented with a first-order FIR high-pass digital filter whose difference equation is y(n) = x(n) − α·x(n−1), where α is the pre-emphasis coefficient, 0.9 < α < 1.0, preferably α = 0.95. The outputs obtained by applying this filter to s_3(n, i) and v_3(n, i) are denoted s(n, i) and v(n, i). Pre-emphasis boosts the high-frequency part of the speech signal and removes the influence of lip radiation. For a cochlear implant product, pre-emphasis further improves the patient's speech intelligibility, which is very important for the usage effect of the implant.
Short-time fast Fourier transforms are applied to the pre-emphasized noisy speech signal estimate and the pre-emphasized noise signal estimate, and their power spectra are computed.
The power spectrum of the noise signal estimate is subtracted from the power spectrum of the noisy speech signal estimate to obtain the difference power spectrum, i.e. the power spectrum of the speech frame, from which the amplitude spectrum estimate of the speech signal is calculated. Specifically, the amplitude spectrum estimation on the pre-emphasized estimates s(n, i) and v(n, i) proceeds as follows:
The frequency characteristics of the noisy speech signal estimate are S(k, i) = SFFT(s(n, i)) = |S(k, i)|·e^{jθ(k,i)}, 0 ≤ k ≤ N−1, where θ(k, i) is the phase-frequency characteristic function of the noisy speech signal estimate.
The frequency characteristics of the noise signal estimate are V(k, i) = SFFT(v(n, i)) = |V(k, i)|·e^{jφ(k,i)}, 0 ≤ k ≤ N−1, where φ(k, i) is the phase-frequency characteristic function of the noise signal estimate.
The difference power spectrum is |Δ(k, i)|² = |S(k, i)|² − |V(k, i)|², 0 ≤ k ≤ N−1.
The amplitude spectrum estimate of the speech signal is Δ(k, i) = √(|S(k, i)|² − |V(k, i)|²), 0 ≤ k ≤ N−1.
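The power-spectral-subtraction step above can be sketched as follows. Names are illustrative, and clamping negative difference bins to zero before the square root is an assumption the patent text does not spell out:

```python
import numpy as np

def magnitude_estimate(s_frame, v_frame):
    """Power-spectral-subtraction magnitude estimate for one frame pair.

    s_frame : pre-emphasized noisy-speech estimate s(n, i)
    v_frame : pre-emphasized noise estimate v(n, i)
    Returns (Delta(k, i), phase of the noisy-speech spectrum theta(k, i)).
    """
    S = np.fft.rfft(s_frame)
    V = np.fft.rfft(v_frame)
    diff_power = np.abs(S) ** 2 - np.abs(V) ** 2
    # Individual bins can dip below zero; clamp before the square root
    # (an illustrative choice to keep Delta real).
    delta = np.sqrt(np.maximum(diff_power, 0.0))
    return delta, np.angle(S)
```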
The logarithm of the amplitude spectrum estimate of the speech signal is compared with a preset threshold, and the parts of the estimate below the threshold are zeroed. Further, as shown in Fig. 3, the preset thresholds include a quiet-mode threshold, a normal-mode threshold, and a noisy-mode threshold.
A short-time inverse Fourier transform is applied to the product of the threshold-zeroed amplitude spectrum estimate and the phase of the noisy speech signal, and the inverse transforms of adjacent frames are overlap-added to obtain the denoised time-domain clean speech signal. Specifically, this proceeds as follows:
The amplitude spectrum of the speech signal after the threshold decision is Δ'(k, i) = Δ(k, i) if log Δ(k, i) ≥ T, and Δ'(k, i) = 0 if log Δ(k, i) < T, where T is the preset threshold.
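The threshold-zeroing step described above might be sketched as follows. Treating the preset threshold as a level in dB is an assumption for illustration; the patent only specifies comparing the logarithm of the amplitude spectrum with a preset threshold:

```python
import numpy as np

def threshold_filter(delta, threshold_db):
    """Zero every bin whose log-magnitude falls below the preset threshold.

    threshold_db : mode-dependent level (quiet / normal / noisy mode),
                   expressed here in dB as an illustrative convention.
    """
    out = np.asarray(delta, dtype=float).copy()
    # Guard against log(0) with a small floor before taking the logarithm.
    level = 20.0 * np.log10(np.maximum(out, 1e-12))
    out[level < threshold_db] = 0.0
    return out
```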
The spectrum estimate of the speech signal is S'(k, i) = Δ'(k, i)·e^{jθ(k,i)}, where the phase e^{jθ(k,i)} is taken from the spectrum S(k, i) of the noisy speech signal s(n, i).
The result of the short-time inverse Fourier transform is s'(n, i) = real(ISFFT(Δ'(k, i)·e^{jθ(k,i)})), where real(·) takes the real part. The inverse-transformed time-domain frames are overlap-added, with overlap equal to the frame shift, to restore the clean speech signal.
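The inverse transform and overlap-add described above can be sketched as follows (illustrative names; the window length and frame shift follow the earlier example values):

```python
import numpy as np

def overlap_add(delta_frames, phase_frames, n_win=256, shift=64):
    """Resynthesize the time-domain signal from per-frame magnitude/phase.

    Each frame is s'(n, i) = real(ISFFT(Delta'(k, i) * exp(j*theta(k, i))));
    adjacent frames are overlap-added at the analysis frame shift.
    """
    num = len(delta_frames)
    out = np.zeros((num - 1) * shift + n_win)
    for i, (mag, ph) in enumerate(zip(delta_frames, phase_frames)):
        # irfft of a conjugate-symmetric half-spectrum returns a real frame.
        frame = np.fft.irfft(mag * np.exp(1j * ph), n=n_win)
        out[i * shift : i * shift + n_win] += frame
    return out
```

Note that a pure overlap-add of Hamming-windowed frames scales the output by the summed window values; a full implementation would normalize for this, which is omitted here for brevity.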
The two-way microphone speech noise-reduction method of the present invention can process the two microphone signals simultaneously with one processing system, in the manner of Fig. 2a, or process each microphone signal with its own processing system, in the manner of Fig. 2b, to obtain the final result. The protection scope of the invention is not limited to using one or two systems to process the two microphone signals; all schemes implemented with the noise-reduction principle of the invention fall within its protection scope.
The embodiment also provides a two-way microphone voice noise reduction processing system, which can implement the two-way microphone voice noise reduction processing method according to the present invention, but the implementation apparatus of the method includes, but is not limited to, the system according to the present invention.
As shown in fig. 4, the two-way microphone speech noise reduction processing system 400 includes: two microphones 410, an a/D sampling module 420, a framing windowing module 430, a noisy speech signal estimation module 440, a noise signal estimation module 450, a pre-emphasis processing module 460, a speech signal magnitude spectrum acquisition module 470, a threshold filtering module 480, and a clean speech signal acquisition module 490.
When the present invention is applied to a cochlear implant external unit, as shown in Fig. 5, the two microphones 410 (i.e. the two-way microphones) are arranged on the external unit: a front microphone 411 and a rear microphone 412. The arrows indicate the directions of the microphone sound holes; in Fig. 5 the sound tube of the front microphone 411 points to the front of the external unit and that of the rear microphone 412 points upward. The two microphones of the invention have different sound-tube orientations, but the protection scope is not limited to the specific orientations shown in Fig. 5. The two microphones may both be condenser microphones, or other types such as MEMS microphones; their types may be the same or different. When the types are the same, both may be omnidirectional microphones; when the types differ, one may be directional and one omnidirectional. In the 1000 Hz–8000 Hz band, the average sensitivity of the front microphone 411 is 2 dB higher than that of the rear microphone 412. The two microphones may be arranged on the cochlear implant external unit at any suitable position.
The A/D sampling module 420 is connected to the two microphones 410 and A/D-samples the two original speech signals they receive, yielding the first path of digital speech data s_1(n), output by the front microphone at time n, and the second path v_1(n), output by the rear microphone.
The framing and windowing module 430 is connected to the A/D sampling module 420 and frames and windows the two channels of digital voice data. Further, the model function of the framing and windowing module 430 is: s_2(n, i) = s_1(n, i)w(n), 0 ≤ n ≤ N-1; v_2(n, i) = v_1(n, i)w(n), 0 ≤ n ≤ N-1; wherein w(n) is a Hamming window, i indexes the i-th frame of digital voice data, n is the sampling time, and N is the frame length.
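As an illustration, the framing-and-windowing model above can be sketched in NumPy; the frame length of 256 samples and the hop of 128 are assumed values that the text does not fix:

```python
import numpy as np

def frame_and_window(x, frame_len=256, hop=128):
    """Split a 1-D signal into overlapping frames and apply a Hamming window.

    Implements x_2(n, i) = x_1(n, i) * w(n), 0 <= n <= N-1, for each frame i,
    where w(n) is the Hamming window and N the frame length.
    """
    w = np.hamming(frame_len)                       # w(n), the Hamming window
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.empty((n_frames, frame_len))
    for i in range(n_frames):
        frames[i] = x[i * hop: i * hop + frame_len] * w
    return frames
```

The same function would be applied to both microphone channels, s_1(n) and v_1(n).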
The noisy speech signal estimation module 440 is connected to the framing and windowing module 430, and performs summation operation on the two paths of digital speech data after framing and windowing, and then performs averaging processing to obtain noisy speech signal estimation. Further, the model function of the noisy speech signal estimation module is: s _3(n, i) = (s _2(n, i) + v _2(n, i))/2.
The noise signal estimation module 450 is connected to the framing and windowing module 430, and performs difference calculation on the two paths of digital voice data after framing and windowing, and then performs averaging processing to obtain noise signal estimation. Further, the model function of the noise signal estimation module is: v _3(n, i) = (s _2(n, i) -v _2(n, i))/2.
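Because the two model functions above are just a half-sum and a half-difference of the windowed frames, the entire first noise reduction stage reduces to elementwise arithmetic. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def estimate_noisy_and_noise(s_2, v_2):
    """First-stage estimates from the two windowed microphone frames.

    s_3 = (s_2 + v_2) / 2  -> noisy-speech estimate
    v_3 = (s_2 - v_2) / 2  -> noise estimate
    """
    s_3 = (s_2 + v_2) / 2.0
    v_3 = (s_2 - v_2) / 2.0
    return s_3, v_3
```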
The pre-emphasis processing module 460 is connected to the noisy speech signal estimation module 440 and the noise signal estimation module 450, respectively, and performs pre-emphasis processing on the noisy speech signal estimation and the noise signal estimation, respectively. Further, the pre-emphasis processing module 460 is a first-order FIR high-pass digital filter, and the model function of the pre-emphasis processing module is: y (n) = x (n) - α x (n-1), where α is a pre-emphasis coefficient, 0.9< α < 1.0; the outputs of s _3(n, i) and v _3(n, i) after pre-emphasis are denoted as s (n, i) and v (n, i).
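The pre-emphasis filter y(n) = x(n) - αx(n-1) can be written directly; α = 0.95 below is an assumed value inside the stated range 0.9 < α < 1.0, and x(-1) is taken as 0:

```python
import numpy as np

def pre_emphasis(x, alpha=0.95):
    """First-order FIR high-pass filter: y(n) = x(n) - alpha * x(n-1)."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]                      # x(-1) assumed to be 0
    y[1:] = x[1:] - alpha * x[:-1]
    return y
```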
The voice signal amplitude spectrum obtaining module 470 is connected to the pre-emphasis processing module 460. It performs a short-time fast Fourier transform on the pre-emphasized noisy voice signal estimate and the pre-emphasized noise signal estimate respectively, subtracts the power spectrum of the noise signal estimate from the power spectrum of the noisy voice signal estimate to obtain a difference power spectrum, and from this computes the amplitude spectrum estimate of the voice signal. Further, as shown in fig. 6, the voice signal amplitude spectrum obtaining module 470 includes: a frequency characteristic acquiring unit 471 for the noisy voice signal estimate, a frequency characteristic acquiring unit 472 for the noise signal estimate, a difference power spectrum acquiring unit 473, and an amplitude spectrum estimating unit 474 for the voice signal. The model function of the frequency characteristic acquiring unit 471 is: S(k, i) = SFFT(s(n, i)) = |S(k, i)|e^{jθ(k,i)}, 0 ≤ k ≤ N-1, wherein θ(k, i) denotes the phase-frequency characteristic function of the noisy voice signal estimate. The model function of the frequency characteristic acquiring unit 472 is: V(k, i) = SFFT(v(n, i)) = |V(k, i)|e^{jφ(k,i)}, 0 ≤ k ≤ N-1, wherein φ(k, i) denotes the phase-frequency characteristic function of the noise signal estimate. The model function of the difference power spectrum acquiring unit 473 is: |Δ(k, i)|² = |S(k, i)|² - |V(k, i)|², 0 ≤ k ≤ N-1. The model function of the amplitude spectrum estimating unit 474 is: Δ(k, i) = √(|S(k, i)|² - |V(k, i)|²), 0 ≤ k ≤ N-1.
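Putting the four units together, the power spectral subtraction step can be sketched per frame as follows; flooring the power difference at zero before the square root is an added safeguard that the text does not spell out:

```python
import numpy as np

def power_spectral_subtraction(s_frame, v_frame):
    """Per-frame power spectral subtraction.

    Computes S(k, i) and V(k, i) via FFT, forms the difference power
    spectrum |S|^2 - |V|^2, and returns the magnitude estimate
    Delta(k, i) = sqrt(max(|S|^2 - |V|^2, 0)) together with the
    noisy-speech phase theta(k, i).
    """
    S = np.fft.fft(s_frame)
    V = np.fft.fft(v_frame)
    diff = np.abs(S) ** 2 - np.abs(V) ** 2        # difference power spectrum
    delta = np.sqrt(np.maximum(diff, 0.0))        # amplitude spectrum estimate
    theta = np.angle(S)                           # phase of the noisy speech
    return delta, theta
```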
the threshold filtering module 480 is connected to the voice signal amplitude spectrum obtaining module 470, and compares the logarithm of the voice signal amplitude spectrum estimation with a predetermined threshold, and sets the part smaller than the predetermined threshold in the voice signal amplitude spectrum estimation to zero. Further, as shown in fig. 7, the threshold filtering module 480 includes: an operation mode unit 481, an operation threshold unit 482, and a threshold filter unit 483. The operation mode unit 481 selects three modes of "quiet mode (corresponding to threshold 1)", "normal mode (corresponding to threshold 2)", and "noisy mode (corresponding to threshold 3)". The operating threshold unit 482 determines a corresponding threshold based on the operating mode selected by the operating mode unit 481. These three thresholds, in turn, correspond to the three modes in the operating mode unit 481. The threshold filtering unit 483 makes a decision on the output of the speech signal magnitude spectrum acquisition module 470 based on the output of the working threshold unit 482. The model function of the threshold filter module 480 is:
Δ'(k, i) = Δ(k, i), if log Δ(k, i) ≥ T; Δ'(k, i) = 0, if log Δ(k, i) < T; wherein T is the threshold determined by the selected working mode.
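A sketch of the three-mode threshold filter; the numeric thresholds below are invented placeholders, since the text does not publish the values behind "threshold 1/2/3":

```python
import numpy as np

# Assumed per-mode log-magnitude thresholds; the real values are not given.
MODE_THRESHOLDS = {"quiet": -2.0, "normal": -1.0, "noisy": 0.0}

def threshold_filter(delta, mode="normal"):
    """Zero out bins whose log magnitude falls below the mode's threshold:
    Delta'(k, i) = Delta(k, i) if log(Delta(k, i)) >= T, else 0.
    """
    T = MODE_THRESHOLDS[mode]
    with np.errstate(divide="ignore"):     # log(0) -> -inf, which is below T
        keep = np.log(delta) >= T
    return np.where(keep, delta, 0.0)
```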
The pure voice signal obtaining module 490 is connected to the threshold filtering module 480. It multiplies the amplitude spectrum estimate of the voice signal after the zeroing decision by the phase of the noisy voice signal, performs a short-time inverse Fourier transform on the product, and overlap-adds the short-time inverse Fourier transform results of adjacent frames to obtain the denoised time-domain pure voice signal. Further, as shown in fig. 8, the pure voice signal obtaining module 490 includes a spectrum estimating unit 491, a short-time inverse Fourier transform unit 492, and an overlap-add unit 493, connected in sequence. The model function of the spectrum estimating unit 491 is: S'(k, i) = Δ'(k, i)e^{jθ(k,i)}, wherein the phase e^{jθ(k,i)} is taken from the spectrum S(k, i) of the noisy voice signal s(n, i). The model function of the short-time inverse Fourier transform unit 492 is: s'(k, i) = real(ISFFT(Δ'(k, i)e^{jθ(k,i)})), wherein real(·) takes the real part. The overlap-add unit 493 overlap-adds the s'(k, i) of adjacent frames output by the short-time inverse Fourier transform unit 492 to obtain the denoised time-domain pure voice signal.
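The reconstruction step (multiply the filtered magnitude by the noisy-speech phase, inverse FFT, take the real part, overlap-add adjacent frames) might look like this; the hop size is an assumption tied to the framing stage:

```python
import numpy as np

def reconstruct(delta_p, theta, hop=128):
    """Rebuild the time-domain signal from the filtered magnitude spectra and
    the noisy-speech phase, then overlap-add adjacent frames.

    delta_p: (n_frames, N) filtered magnitudes Delta'(k, i)
    theta:   (n_frames, N) phases theta(k, i) taken from S(k, i)
    """
    n_frames, N = delta_p.shape
    out = np.zeros(hop * (n_frames - 1) + N)
    for i in range(n_frames):
        spec = delta_p[i] * np.exp(1j * theta[i])   # Delta'(k,i) e^{j theta(k,i)}
        frame = np.real(np.fft.ifft(spec))          # real(ISFFT(...))
        out[i * hop: i * hop + N] += frame          # overlap-add
    return out
```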
In the present invention, once the placement of the front and rear microphones on the behind-the-ear unit is fixed, the invention places no restriction on how the two microphones are assigned to the inputs: the front microphone may be connected as "input 1" and the rear microphone as "input 2", or vice versa.
The invention adopts a two-stage noise reduction scheme. The first stage completes the preliminary estimation of the noisy voice signal and the noise; the second stage further refines the noisy voice by combining it with a classical voice noise reduction algorithm. In the first stage, the estimates of the noisy voice and the noise are obtained with nothing more than simple addition and subtraction; compared with the endpoint detection algorithms used in traditional spectral subtraction to distinguish and locate noisy voice (such as energy/zero-crossing-rate endpoint detection and its improved variants), this greatly reduces algorithmic complexity. Moreover, the invention is insensitive to the signal-to-noise ratio (SNR) of the voice signal: it still achieves a good noise reduction effect on low-SNR signals.
In addition, the invention introduces the classical power spectral subtraction theory under dual-microphone input. Because the noise is estimated in real time, the invention suits both stationary and non-stationary noise, avoids the inaccurate noise estimation of noisy signals found in traditional spectral subtraction noise reduction, and also avoids the "musical noise" problem that traditional spectral subtraction brings.
The invention can provide three working modes ("quiet", "normal", and "noisy") according to the actual environment. The default is the "normal" mode, and the user can switch freely between environments, which matches the usage characteristics of hearing aid and cochlear implant products currently on the market.
The main distinguishing feature of the invention is its combination of "dual microphones" with "spectral subtraction", a combination not previously found in hearing aid or cochlear implant products. The reasons are as follows. On the one hand, a conventional spectral subtraction algorithm takes the noise estimated from a silent segment of speech as the noise in the current noisy speech, which causes a noise mismatch and inevitably produces "musical noise". The noise reduction method and system of the invention estimate the noise in real time, close to the true noise data, and with the three selectable threshold filters the musical noise is removed very cleanly and the speech signal is clearer. On the other hand, the invention uses only the fast Fourier transform (FFT), whose theory and practical application are mature; compared with algorithms such as wavelet transform, independent component analysis, and blind source separation, the proposed algorithm has a small computational load and is easy to implement on a DSP.
As for the noise reduction effect: single-microphone noise reduction generally improves the SNR by about 6 dB, whereas the noise reduction algorithm of the invention can improve the SNR by 15-20 dB.
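For reference, the dB figures above follow the usual definition SNR = 10·log10(P_signal / P_noise). A small helper for measuring it (standard definition, not taken from the patent):

```python
import numpy as np

def snr_db(clean, noisy):
    """Signal-to-noise ratio in dB, taking the noise as (noisy - clean)."""
    noise = noisy - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))
```

SNR improvement is then simply the difference between this value computed after and before noise reduction.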
In conclusion, the present invention effectively overcomes various disadvantages of the prior art and has high industrial utilization value.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall still be covered by the claims of the present invention.

Claims (10)

1. A cochlear implant comprises an extra-cochlear implant unit and an intra-cochlear implant unit, and is characterized in that the extra-cochlear implant unit comprises:
the front microphone is arranged on the outer machine of the artificial cochlea;
the rear microphone is arranged on the outer machine of the artificial cochlea; the sound tubes of the front microphone and the rear microphone have different directions and are used for receiving original voice signals;
two-way microphone pronunciation noise reduction processing system includes:
the A/D sampling module is connected with the front microphone and the rear microphone and is used for carrying out A/D sampling on two paths of original voice signals received by the two microphones to obtain two paths of digital voice data;
the framing and windowing module is connected with the A/D sampling module and is used for framing and windowing the two paths of digital voice data;
the noisy speech signal estimation module is connected with the framing and windowing module, and is used for performing summation operation on the two paths of digital speech data subjected to framing and windowing processing and then performing average processing to obtain noisy speech signal estimation;
the noise signal estimation module is connected with the framing and windowing module, and is used for performing difference operation on the two paths of digital voice data subjected to framing and windowing processing and then performing average processing to obtain noise signal estimation;
the pre-emphasis processing module is respectively connected with the noise-containing voice signal estimation module and the noise signal estimation module and respectively performs pre-emphasis processing on the noise-containing voice signal estimation and the noise signal estimation;
the voice signal amplitude spectrum acquisition module is connected with the pre-emphasis processing module, short-time fast Fourier transform is respectively carried out on the noise-containing voice signal estimation after the pre-emphasis processing and the noise signal estimation after the pre-emphasis processing, the power spectrum of the obtained noise-containing voice signal estimation is subtracted from the power spectrum of the noise signal estimation to obtain a difference power spectrum, and the amplitude spectrum estimation of the voice signal is further obtained through calculation;
the threshold filtering module is connected with the voice signal amplitude spectrum acquiring module, compares the logarithm of the voice signal amplitude spectrum estimation with a preset threshold, and zeros the part smaller than the preset threshold in the voice signal amplitude spectrum estimation;
and the pure voice signal acquisition module is connected with the threshold filtering module, performs a short-time inverse Fourier transform on the product of the amplitude spectrum estimate of the voice signal after the zeroing decision and the phase of the noisy voice signal, and overlap-adds the short-time inverse Fourier transform results of adjacent frames to obtain the denoised time-domain pure voice signal.
2. The cochlear implant of claim 1, wherein the model function of the framing windowing module is:
s_2(n, i) = s_1(n, i)w(n), 0 ≤ n ≤ N-1
v_2(n, i) = v_1(n, i)w(n), 0 ≤ n ≤ N-1
wherein n is the sampling time, i indexes the i-th frame of digital voice data, w(n) is the window function, and N is the window length.
3. The cochlear implant of claim 2, wherein: the model function of the noisy speech signal estimation module is: s _3(n, i) = (s _2(n, i) + v _2(n, i))/2; the model function of the noise signal estimation module is: v _3(n, i) = (s _2(n, i) -v _2(n, i))/2.
4. The cochlear implant of claim 3, wherein: the pre-emphasis processing module is a first-order FIR high-pass digital filter, and the model function of the pre-emphasis processing module is as follows: y (n) = x (n) - α x (n-1), where α is a pre-emphasis coefficient, 0.9< α < 1.0; the outputs of s _3(n, i) and v _3(n, i) after pre-emphasis are denoted as s (n, i) and v (n, i).
5. The cochlear implant of claim 4, wherein the speech signal magnitude spectrum acquisition module comprises: the device comprises a frequency characteristic acquisition unit for estimating a noise-containing voice signal, a frequency characteristic acquisition unit for estimating a noise signal, a difference power spectrum acquisition unit and a voice signal amplitude spectrum estimation unit;
the model function of the frequency characteristic obtaining unit for estimating the noisy speech signal is as follows:
S(k, i) = SFFT(s(n, i)) = |S(k, i)|e^{jθ(k,i)}, 0 ≤ k ≤ N-1;
wherein θ (k, i) represents a phase-frequency characteristic function of the noisy speech signal estimation;
the model function of the frequency characteristic obtaining unit for noise signal estimation is:

V(k, i) = SFFT(v(n, i)) = |V(k, i)|e^{jφ(k,i)}, 0 ≤ k ≤ N-1;

wherein φ(k, i) represents a phase-frequency characteristic function of the noise signal estimation;
the model function of the difference power spectrum acquisition unit is as follows:
|Δ(k, i)|² = |S(k, i)|² - |V(k, i)|², 0 ≤ k ≤ N-1;
the model function of the amplitude spectrum estimation unit of the speech signal is as follows:
Δ(k, i) = √(|S(k, i)|² - |V(k, i)|²), 0 ≤ k ≤ N-1.
6. the cochlear implant of claim 5, wherein: the model function of the threshold filtering module is:
Δ'(k, i) = Δ(k, i), if log Δ(k, i) ≥ T; Δ'(k, i) = 0, if log Δ(k, i) < T; wherein T is the threshold corresponding to the selected working mode;
the pure voice signal acquisition module comprises a frequency spectrum estimation unit, a short-time inverse Fourier transform unit and an overlap addition unit of the connected voice signals;
the model function of the spectrum estimation unit of the voice signal is: S'(k, i) = Δ'(k, i)e^{jθ(k,i)}, wherein the phase e^{jθ(k,i)} is taken from the spectrum S(k, i) of the noisy voice signal s(n, i);
the model function of the short-time inverse Fourier transform unit is: s'(k, i) = real(ISFFT(Δ'(k, i)e^{jθ(k,i)})), wherein real(·) denotes taking the real part;
and the overlap addition unit performs overlap addition on s' (k, i) of the adjacent frames output by the short-time inverse Fourier transform unit to obtain a pure speech signal of a denoised time domain.
7. The cochlear implant of any of claims 1 to 6, wherein: the front microphone is a directional microphone, and the rear microphone is an omnidirectional microphone.
8. The cochlear implant of any of claims 1 to 6, wherein: both the front microphone and the rear microphone are omnidirectional microphones.
9. The cochlear implant of any of claims 1 to 6, wherein: the sound tube of the front microphone points to the front of the cochlear implant external unit, and the sound tube of the rear microphone points to the rear of the cochlear implant external unit.
10. The cochlear implant of any of claims 1 to 6, wherein: the sound tube of the front microphone points to the upper part of the cochlear implant external unit, and the sound tube of the rear microphone points to the rear part of the cochlear implant external unit.
CN201410042057.8A 2014-01-28 2014-01-28 Cochlear implant Active CN103761974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410042057.8A CN103761974B (en) 2014-01-28 2014-01-28 Cochlear implant

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410042057.8A CN103761974B (en) 2014-01-28 2014-01-28 Cochlear implant

Publications (2)

Publication Number Publication Date
CN103761974A true CN103761974A (en) 2014-04-30
CN103761974B CN103761974B (en) 2017-01-25

Family

ID=50529200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410042057.8A Active CN103761974B (en) 2014-01-28 2014-01-28 Cochlear implant

Country Status (1)

Country Link
CN (1) CN103761974B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105044478A (en) * 2015-07-23 2015-11-11 国家电网公司 Transmission line audible noise multi-channel signal extraction method
CN106409297A (en) * 2016-10-18 2017-02-15 安徽天达网络科技有限公司 Voice recognition method
CN106658323A (en) * 2017-02-28 2017-05-10 浙江诺尔康神经电子科技股份有限公司 Dual microphone noise reduction system and method for cochlear implants and hearing aids
CN106714063A (en) * 2016-12-16 2017-05-24 深圳信息职业技术学院 Beam forming method and system of microphone voice signals of hearing aid device, and hearing aid device
CN107833579A (en) * 2017-10-30 2018-03-23 广州酷狗计算机科技有限公司 Noise cancellation method, device and computer-readable recording medium
CN109327755A (en) * 2018-08-20 2019-02-12 深圳信息职业技术学院 A kind of cochlear implant and noise remove method
WO2019227590A1 (en) * 2018-05-29 2019-12-05 平安科技(深圳)有限公司 Voice enhancement method, apparatus, computer device, and storage medium
CN112151056A (en) * 2020-09-27 2020-12-29 浙江诺尔康神经电子科技股份有限公司 Intelligent cochlear sound processing system and method with customization
CN114189781A (en) * 2021-11-27 2022-03-15 苏州蛙声科技有限公司 Noise reduction method and system for double-microphone neural network noise reduction earphone

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1348583A (en) * 1999-02-18 2002-05-08 安德烈电子公司 System, method and apparatus for cancelling noise
CN101510426A (en) * 2009-03-23 2009-08-19 北京中星微电子有限公司 Method and system for eliminating noise
CN101593522A (en) * 2009-07-08 2009-12-02 清华大学 A kind of full frequency domain digital hearing aid method and apparatus
CN101807404A (en) * 2010-03-04 2010-08-18 清华大学 Pretreatment system for strengthening directional voice at front end of electronic cochlear implant
CN102456351A (en) * 2010-10-14 2012-05-16 清华大学 Voice enhancement system
US8233651B1 (en) * 2008-09-02 2012-07-31 Advanced Bionics, Llc Dual microphone EAS system that prevents feedback
CN203724273U (en) * 2014-01-28 2014-07-23 上海力声特医学科技有限公司 Cochlear implant

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1348583A (en) * 1999-02-18 2002-05-08 安德烈电子公司 System, method and apparatus for cancelling noise
US8233651B1 (en) * 2008-09-02 2012-07-31 Advanced Bionics, Llc Dual microphone EAS system that prevents feedback
CN101510426A (en) * 2009-03-23 2009-08-19 北京中星微电子有限公司 Method and system for eliminating noise
CN101593522A (en) * 2009-07-08 2009-12-02 清华大学 A kind of full frequency domain digital hearing aid method and apparatus
CN101807404A (en) * 2010-03-04 2010-08-18 清华大学 Pretreatment system for strengthening directional voice at front end of electronic cochlear implant
CN102456351A (en) * 2010-10-14 2012-05-16 清华大学 Voice enhancement system
CN203724273U (en) * 2014-01-28 2014-07-23 上海力声特医学科技有限公司 Cochlear implant

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG Yong: "Research and Implementation of Speech Enhancement Algorithms in a Cochlear Implant Chip", China Master's Theses Full-text Database, Information Science and Technology, no. 3, 15 March 2013 (2013-03-15) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105044478A (en) * 2015-07-23 2015-11-11 国家电网公司 Transmission line audible noise multi-channel signal extraction method
CN105044478B (en) * 2015-07-23 2018-03-13 国家电网公司 A kind of multi channel signals extracting method of transmission line of electricity audible noise
CN106409297A (en) * 2016-10-18 2017-02-15 安徽天达网络科技有限公司 Voice recognition method
CN106714063A (en) * 2016-12-16 2017-05-24 深圳信息职业技术学院 Beam forming method and system of microphone voice signals of hearing aid device, and hearing aid device
CN106714063B (en) * 2016-12-16 2019-05-17 深圳信息职业技术学院 Hearing-aid device microphone voice signal Beamforming Method, system and hearing-aid device
CN106658323A (en) * 2017-02-28 2017-05-10 浙江诺尔康神经电子科技股份有限公司 Dual microphone noise reduction system and method for cochlear implants and hearing aids
CN107833579A (en) * 2017-10-30 2018-03-23 广州酷狗计算机科技有限公司 Noise cancellation method, device and computer-readable recording medium
CN107833579B (en) * 2017-10-30 2021-06-11 广州酷狗计算机科技有限公司 Noise elimination method, device and computer readable storage medium
WO2019227590A1 (en) * 2018-05-29 2019-12-05 平安科技(深圳)有限公司 Voice enhancement method, apparatus, computer device, and storage medium
CN109327755B (en) * 2018-08-20 2019-11-26 深圳信息职业技术学院 A kind of cochlear implant and noise remove method
CN109327755A (en) * 2018-08-20 2019-02-12 深圳信息职业技术学院 A kind of cochlear implant and noise remove method
CN112151056A (en) * 2020-09-27 2020-12-29 浙江诺尔康神经电子科技股份有限公司 Intelligent cochlear sound processing system and method with customization
CN112151056B (en) * 2020-09-27 2023-08-04 浙江诺尔康神经电子科技股份有限公司 Intelligent cochlea sound processing system and method with customization function
CN114189781A (en) * 2021-11-27 2022-03-15 苏州蛙声科技有限公司 Noise reduction method and system for double-microphone neural network noise reduction earphone

Also Published As

Publication number Publication date
CN103761974B (en) 2017-01-25

Similar Documents

Publication Publication Date Title
CN103761974B (en) Cochlear implant
CN104810024A (en) Double-path microphone speech noise reduction treatment method and system
CN103778920B (en) Speech enhan-cement and compensating for frequency response phase fusion method in digital deaf-aid
CN102157156B (en) Single-channel voice enhancement method and system
CN105741849B (en) The sound enhancement method of phase estimation and human hearing characteristic is merged in digital deaf-aid
CN100535993C (en) Speech enhancement method applied to deaf-aid
CN109410976A (en) Sound enhancement method based on binaural sound sources positioning and deep learning in binaural hearing aid
CN102456351A (en) Voice enhancement system
CN105679330B (en) Based on the digital deaf-aid noise-reduction method for improving subband signal-to-noise ratio (SNR) estimation
KR20220062598A (en) Systems and methods for generating audio signals
CN103325381A (en) Speech separation method based on fuzzy membership function
CN106098077A (en) Artificial cochlea&#39;s speech processing system of a kind of band noise reduction and method
CN112367600A (en) Voice processing method and hearing aid system based on mobile terminal
WO2020087716A1 (en) Auditory scene recognition method for artificial cochlea
Wang et al. Improving the intelligibility of speech for simulated electric and acoustic stimulation using fully convolutional neural networks
Vanjari et al. Hearing Loss Adaptivity of Machine Learning Based Compressive Sensing Speech Enhancement for Hearing Aids
CN103475986A (en) Digital hearing aid speech enhancing method based on multiresolution wavelets
WO2013067714A1 (en) Method for reducing burst noise
WO2010091339A1 (en) Method and system for noise reduction for speech enhancement in hearing aid
CN106328160A (en) Double microphones-based denoising method
Hsu et al. Spectro-temporal subband wiener filter for speech enhancement
CN109327755B (en) A kind of cochlear implant and noise remove method
CN106658323A (en) Dual microphone noise reduction system and method for cochlear implants and hearing aids
Lezzoum et al. Noise reduction of speech signals using time-varying and multi-band adaptive gain control for smart digital hearing protectors
Huang et al. Combination and comparison of sound coding strategies using cochlear implant simulation with mandarin speech

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant