CN112951260B - Method for enhancing speech by double microphones - Google Patents

Method for enhancing speech by double microphones

Info

Publication number
CN112951260B (application CN202110227971.XA)
Authority
CN
China
Prior art keywords
noise, signal, differential, formula, voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202110227971.XA
Other languages
Chinese (zh)
Other versions
CN112951260A (en)
Inventor
曾庆宁 (Zeng Qingning)
王红丽 (Wang Hongli)
龙超 (Long Chao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology
Priority to CN202110227971.XA
Publication of CN112951260A
Application granted
Publication of CN112951260B
Expired - Fee Related (current legal status)
Anticipated expiration


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L21/0224 Processing in the time domain
    • G10L2021/02161 Number of inputs available containing the signal or the noise to be suppressed
    • G10L2021/02165 Two microphones, one receiving mainly the noise signal and the other one mainly the speech signal
    • G10L2021/02166 Microphone arrays; Beamforming

Abstract

The invention discloses a dual-microphone speech enhancement method. A differential operation is first applied to the two channels of noisy speech to suppress directional noise. The speech distortion introduced by the differential operation is not corrected immediately; instead, residual noise is first removed with an improved adaptive noise cancellation (ANC) algorithm controlled by voice activity detection (VAD), and only then is the distorted speech restored. The restoration uses a time-domain recovery algorithm, which requires less computation and introduces less delay than existing frequency-domain recovery algorithms. The method suppresses directional noise, improves speech quality, is robust, and is easy to implement.

Description

Method for enhancing speech by double microphones
Technical Field
The invention relates to speech signal processing technology, in particular to speech enhancement using array signal denoising, and specifically to a method for enhancing speech with two microphones.
Background
The main purpose of speech enhancement is to suppress the noise in noisy speech so as to obtain clean speech, thereby improving speech quality and intelligibility. Speech enhancement can be divided into single-channel and multi-channel speech enhancement. It has been shown that single-channel speech enhancement inevitably damages the speech signal, whereas multi-channel speech enhancement generally achieves a better result because more channel signals are available.
Multi-channel speech enhancement is usually implemented with microphone arrays, but conventional arrays require a large spatial aperture, which seriously limits their use in many applications. A dual-microphone system consisting of two microphones can be regarded as the simplest microphone array and has a much wider range of applications; in particular, a miniature dual-microphone pair with very small spacing can easily be embedded in devices such as earphones, telephones, mobile phones and hearing aids.
Disclosure of Invention
The invention aims to provide a dual-microphone speech enhancement method that addresses the shortcomings of the prior art.
The method can suppress directional noise, improve speech quality, is robust, and is easy to implement.
The technical scheme for realizing the purpose of the invention is as follows:
a method of dual microphone speech enhancement comprising the steps of:
1) Two microphones are used to receive the two-channel noisy speech signal. The speech source lies on the line connecting the two microphones, at the end of one microphone M1, and the noise source lies on the same line, at the end of the other microphone M2. Given the sampling frequency fs and the sound velocity c, the distance d between the two microphones is given by formula (1):
d = c/fs (1),
The speech signal received by M1 at the i-th sampling instant is s(i), the noise signal received by M2 is n(i), and the noisy speech signals received by M1 and M2 are x1(i) and x2(i), as shown in formulas (2) and (3):
x1(i)=s(i)+βn(i-1) (2),
x2(i)=n(i)+βs(i-1) (3),
where β (0 < β < 1) represents the signal amplitude attenuation factor for sound travelling the distance d from M1 to M2,
First, a differential operation is applied to the two-channel noisy speech signals, i.e. fixed differential beamforming is used to suppress the directional noise, yielding the differential signal ys(i) and the differential noise yn(i) required by the subsequent processing. Specifically, the differential operation on the two microphone signals gives the differential signal ys(i) shown in formula (4):
ys(i) = x1(i) − βx2(i−1) = s(i) − β²s(i−2) (4),
and the differential noise yn(i) shown in formula (5):
yn(i) = x2(i) − βx1(i−1) = n(i) − β²n(i−2) (5),
From formula (4) it can be seen that, after the differential operation on the noisy speech from M1 and M2, the noise signal n(i) is removed while the speech signal s(i) is retained, but the speech signal is distorted and must be restored by subsequent processing. In practical applications the noise source may not lie on the line connecting the two microphones at the M2 end. Fixed differential beamforming enhances speech from the desired direction while suppressing noise from undesired directions, but it cannot remove the noise completely; the residual noise can be assessed from the beam pattern of the fixed differential beamformer. For noise arriving from different angles within 360°, the closer the noise direction is to 180° the larger the noise attenuation, with a null formed at 180°; the closer the noise direction is to the source direction (0°), the smaller the attenuation. Moreover, because speech and noise propagating in a real environment are affected by direct incidence, reflection and scattering, the differential operation cannot remove the noise completely, and the residual noise must be further eliminated;
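Step 1) can be illustrated with a short numerical sketch. The Python code below builds the signal model of formulas (2)-(3) and applies the differential operation of formulas (4)-(5); the synthetic speech and noise signals, the value β = 0.9 and the helper name differential_stage are illustrative assumptions for demonstration only, not values prescribed by the patent.

import numpy as np

def differential_stage(x1, x2, beta):
    """Fixed differential beamforming of formulas (4)-(5):
    ys(i) = x1(i) - beta*x2(i-1), yn(i) = x2(i) - beta*x1(i-1)."""
    ys = x1 - beta * np.concatenate(([0.0], x2[:-1]))  # x2 delayed by one sample
    yn = x2 - beta * np.concatenate(([0.0], x1[:-1]))  # x1 delayed by one sample
    return ys, yn

# Illustrative synthetic signals (assumed for demonstration only).
rng = np.random.default_rng(0)
num = 16000
s = np.sin(2 * np.pi * 300 * np.arange(num) / 16000)   # stand-in "speech"
n = rng.standard_normal(num)                            # directional noise
beta = 0.9                                              # assumed value, 0 < beta < 1

# Signal model of formulas (2)-(3): each microphone also picks up the far
# source attenuated by beta and delayed by one sample.
x1 = s + beta * np.concatenate(([0.0], n[:-1]))
x2 = n + beta * np.concatenate(([0.0], s[:-1]))

ys, yn = differential_stage(x1, x2, beta)
# ys is now s(i) - beta^2*s(i-2): the directional noise is cancelled but the
# speech is distorted; yn is the noise reference used by the ANC of step 2).

With this idealized one-sample-delay model the directional noise cancels exactly in ys(i); in a real room the residual noise discussed above remains and is handled in step 2).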
2) The differential signal ys(i) and differential noise yn(i) obtained in step 1) are not restored yet. Instead, an improved adaptive noise cancellation (ANC) algorithm based on voice activity detection (VAD) is applied to ys(i) and yn(i) to eliminate the residual noise in the differential signal ys(i). The adaptive processor in the improved ANC algorithm uses the least mean square (LMS) algorithm; the primary input of the ANC is the differential signal ys(i) and the reference input is the differential noise yn(i), as shown in formulas (6) to (10):
y(i) = W^T(i)·Yn(i) (6),
Yn(i) = [yn(i), yn(i−1), yn(i−2), ..., yn(i−N)]^T (7),
W(i) = [w0(i), w1(i), w2(i), ..., wN(i)]^T (8),
e(i) = ys(i) − y(i) (9),
W(i+1) = W(i) + μe(i)Yn(i) (10),
where y(i) is the output signal of the adaptive filter, Yn(i) is the vector formed by the values of the differential noise signal yn(i) at successive instants, and e(i) is the error signal; the target signal required by the adaptive algorithm is generated by subtracting the filter output from the differential signal and drives the automatic update of the filter coefficients. N is the order of the adaptive FIR filter, W(i) is the coefficient vector of the adaptive filter, and the mean-square value of the error signal e(i) is minimized by adjusting W(i). μ is the update step size of the adaptive filter coefficients, which affects the convergence speed, stability and noise-reduction performance of the noise cancellation. Adaptive filtering of the differential noise yields an approximate estimate of the noise contained in the differential signal; subtracting this estimate from the differential signal produces an estimate of the useful speech signal, so noise cancellation is performed on the differential signal and differential noise obtained in step 1). The VAD detects speech/non-speech periods from ys(i), and the detection result controls whether the adaptive filter coefficients are updated;
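A minimal sketch of the VAD-controlled LMS noise canceller of formulas (6)-(10) is given below. The frame-energy VAD, the filter order N = 32, the step size μ = 0.01, the frame length and the threshold are illustrative assumptions; the patent does not prescribe these values or a particular VAD.

import numpy as np

def simple_energy_vad(frame, threshold):
    """Crude frame-energy VAD, used only for illustration; the patent does not
    prescribe a particular VAD. Returns True when speech is judged present."""
    return np.mean(frame ** 2) > threshold

def anc_lms_vad(ys, yn, N=32, mu=0.01, frame_len=256, vad_threshold=1e-3):
    """Improved ANC of formulas (6)-(10): primary input ys (differential signal),
    reference input yn (differential noise). The LMS update of formula (10) is
    applied only in frames the VAD judges to be speech-free; during speech the
    coefficient vector W is frozen."""
    W = np.zeros(N + 1)                    # w0..wN, formula (8)
    e = np.zeros(len(ys))
    for i in range(len(ys)):
        lo = max(0, i - N)
        Yn = np.zeros(N + 1)               # Yn(i) = [yn(i), ..., yn(i-N)]^T, formula (7)
        Yn[:i - lo + 1] = yn[lo:i + 1][::-1]
        y_hat = W @ Yn                     # formula (6): estimated noise in ys(i)
        e[i] = ys[i] - y_hat               # formula (9): noise-cancelled, still distorted speech
        frame_start = (i // frame_len) * frame_len
        frame = ys[frame_start:frame_start + frame_len]
        if not simple_energy_vad(frame, vad_threshold):
            W = W + mu * e[i] * Yn         # formula (10): LMS update, gated by the VAD
    return e

The error signal e(i) returned here is the noise-reduced but still distorted speech that step 3) restores.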
3) Because the reference input of the ANC can hardly be guaranteed to contain only pure noise, some speech is inevitably mixed in, so that while the noise is cancelled, part of the speech is cancelled as well. To reduce the effect of adaptive noise cancellation on the useful speech signal, the improved ANC speech enhancement is adopted: the adaptive filter coefficients of the ANC are updated only during periods without speech and are kept unchanged during periods with speech, i.e. the ANC algorithm is controlled by the VAD. A time-domain finite impulse response (FIR) recovery filter is then used to restore the speech distortion of the differential signal ys(i) and differential noise yn(i) obtained in step 1):
from equation (4): signal y output by differential operations(i) Not the speech signal s (i), but the distorted signal s (i) - β of s (i)2s (i-2), obtained by Z-transforming equation (4):
Ys(z) = (1 − β²z⁻²)S(z) (11),
Since 1/(1 − β²z⁻²) can be expanded as the geometric series 1 + β²z⁻² + β⁴z⁻⁴ + ... for 0 < β < 1, formula (12) follows from formula (11):
S(z) = Ys(z)/(1 − β²z⁻²) = Ys(z)·(1 + β²z⁻² + β⁴z⁻⁴ + ...) (12),
Converting formula (12) from the Z domain to the time domain and truncating the series at the recovery filter order L = 30 gives:
s(i) ≈ ys(i) + β²ys(i−2) + β⁴ys(i−4) + ... + β^L ys(i−L) (13),
Taking
h(l) = β^l, l = 0, 2, 4, ..., L (14),
h(l) = 0, l = 1, 3, 5, ..., L−1 (15),
Then
ŝ(i) = h(0)ys(i) + h(1)ys(i−1) + ... + h(L)ys(i−L) (16),
where h(l) are the coefficients of the recovery filter and L is its order. Since adaptive noise cancellation (ANC) suppresses the residual noise in the differential signal ys(i), the error signal e(i) of formula (9) has a higher signal-to-noise ratio than ys(i); the recovery filtering is therefore applied to e(i), i.e.:
ŝ(i) = h(0)e(i) + h(1)e(i−1) + ... + h(L)e(i−L) (17),
where
h = [h(0), h(1), h(2), ..., h(L)] = [1, 0, β², 0, β⁴, ..., 0, β^L].
obviously, the speech signal recovery is performed by the formula (17), only L-order Finite Impulse Response (FIR) filtering is needed, and L is usually equal to 30, so that the computation amount is less than that of the frequency domain recovery method, and the method is easier to implement in real time. The technical scheme can be used in the fields of digital hearing aids, cochlear implants, recording pens, mobile phones, voice recognition and the like.
The method can suppress directional noise, improve speech quality, is robust, and is easy to implement.
Drawings
FIG. 1 is a schematic block diagram of the principles of an embodiment method;
FIG. 2 is a schematic diagram of signals collected by two microphones in an embodiment;
FIG. 3 is a beam pattern of fixed differential beamforming in an embodiment;
fig. 4 is a schematic block diagram of an improved adaptive noise cancellation method in an embodiment.
Detailed Description
The invention will be further described, but not limited, by reference to the following figures and examples:
Example:
Referring to FIG. 1, a method of dual-microphone speech enhancement includes the following steps:
1) Two microphones are used to receive the two-channel noisy speech signal, as shown in FIG. 2, with the speech source on the line connecting the two microphones at the end of one microphone M1 and the noise source on the same line at the end of the other microphone M2. In this embodiment the sampling frequency fs = 16 kHz and the sound velocity c = 340 m/s, so the distance d between the two microphones given by formula (1) is 2.125 cm:
d = c/fs = 340/16000 m = 0.02125 m = 2.125 cm (1),
The speech signal received by M1 at the i-th sampling instant is s(i), the noise signal received by M2 is n(i), and the noisy speech signals received by M1 and M2 are x1(i) and x2(i), as shown in formulas (2) and (3):
x1(i)=s(i)+βn(i-1) (2),
x2(i)=n(i)+βs(i-1) (3),
where β (0 < β < 1) represents the signal amplitude attenuation factor for sound travelling the distance d from M1 to M2. First, a differential operation is applied to the two-channel noisy speech signals, i.e. fixed differential beamforming is used to suppress the directional noise, yielding the differential signal ys(i) and the differential noise yn(i) required by the subsequent processing. Specifically, the differential operation on the two microphone signals gives the differential signal ys(i) shown in formula (4):
ys(i) = x1(i) − βx2(i−1) = s(i) − β²s(i−2) (4),
and the differential noise yn(i) shown in formula (5):
yn(i) = x2(i) − βx1(i−1) = n(i) − β²n(i−2) (5),
from equation (4): after performing differential operation on the noisy voices M1 and M2, noise signals n (i) are subtracted, voice signals s (i) are retained, the voice signals are distorted and need to be restored through subsequent processing, in practical application, noise may not be located on a connecting line of two microphones and at one end of M2, fixed differential beam forming is adopted to enhance voice in a desired direction, and noise signals in an undesired direction are suppressed, the fixed differential beam forming cannot completely subtract the noise, and the residual noise condition can be determined according to a beam pattern formed by the fixed differential beam forming, as shown in fig. 3, when the noise is at different angles within a range of 360 degrees, the gain direction of the residual noise is closer to 180 degrees, the noise attenuation is larger, and a zero point is formed in the direction of 180 degrees; when the noise direction is closer to the sound source direction (0 °), the noise attenuation is smaller, and because the influence of direct incidence, reflection and scattering is usually generated in the transmission process of voice and noise in the actual environment, the noise cannot be completely reduced by differential operation, and the residual noise needs to be further eliminated;
2) The differential signal ys(i) and differential noise yn(i) obtained in step 1) are not restored yet. Instead, an improved adaptive noise cancellation (ANC) algorithm based on voice activity detection (VAD) is applied to ys(i) and yn(i) to eliminate the residual noise in the differential signal ys(i). The adaptive processor in the improved ANC algorithm uses the least mean square (LMS) algorithm; the primary input of the ANC is the differential signal ys(i) and the reference input is the differential noise yn(i), as shown in formulas (6) to (10):
y(i) = W^T(i)·Yn(i) (6),
Yn(i) = [yn(i), yn(i−1), yn(i−2), ..., yn(i−N)]^T (7),
W(i) = [w0(i), w1(i), w2(i), ..., wN(i)]^T (8),
e(i) = ys(i) − y(i) (9),
W(i+1) = W(i) + μe(i)Yn(i) (10),
where y(i) is the output signal of the adaptive filter, Yn(i) is the vector formed by the values of the differential noise signal yn(i) at successive instants, and e(i) is the error signal; the target signal required by the adaptive algorithm is generated by subtracting the filter output from the differential signal and drives the automatic update of the filter coefficients. N is the order of the adaptive FIR filter, W(i) is the coefficient vector of the adaptive filter, and the mean-square value of the error signal e(i) is minimized by adjusting W(i). μ is the update step size of the adaptive filter coefficients, which affects the convergence speed, stability and noise-reduction performance of the noise cancellation. Adaptive filtering of the differential noise yields an approximate estimate of the noise contained in the differential signal; subtracting this estimate from the differential signal produces an estimate of the useful speech signal, so noise cancellation is performed on the differential signal and differential noise obtained in step 1). The VAD detects speech/non-speech periods from ys(i), and the detection result controls whether the adaptive filter coefficients are updated;
3) Because the reference input of the ANC can hardly be guaranteed to contain only pure noise, some speech is inevitably mixed in, so that while the noise is cancelled, part of the speech is cancelled as well. To reduce the effect of adaptive noise cancellation on the useful speech signal, the improved ANC speech enhancement method is adopted: the adaptive filter coefficients of the ANC are updated only during periods without speech and are kept unchanged during periods with speech, i.e., as shown in FIG. 4, the ANC algorithm is controlled by the VAD. A time-domain finite impulse response (FIR) recovery filter is then used to restore the speech distortion of the differential signal ys(i) and differential noise yn(i) obtained in step 1):
from equation (4): signal output by differential operationys(i) Not the speech signal s (i), but the distorted signal s (i) - β of s (i)2s (i-2), obtained by Z-transforming equation (4):
Ys(z) = (1 − β²z⁻²)S(z) (11),
Since 1/(1 − β²z⁻²) can be expanded as the geometric series 1 + β²z⁻² + β⁴z⁻⁴ + ... for 0 < β < 1, formula (12) follows from formula (11):
S(z) = Ys(z)/(1 − β²z⁻²) = Ys(z)·(1 + β²z⁻² + β⁴z⁻⁴ + ...) (12),
Converting formula (12) from the Z domain to the time domain and truncating the series at the recovery filter order L = 30 gives:
s(i) ≈ ys(i) + β²ys(i−2) + β⁴ys(i−4) + ... + β^L ys(i−L) (13),
Taking
h(l) = β^l, l = 0, 2, 4, ..., L (14),
h(l) = 0, l = 1, 3, 5, ..., L−1 (15),
Then
ŝ(i) = h(0)ys(i) + h(1)ys(i−1) + ... + h(L)ys(i−L) (16),
where h(l) are the coefficients of the recovery filter, L is its order, and L = 30. Since adaptive noise cancellation (ANC) suppresses the residual noise in the differential signal ys(i), the error signal e(i) of formula (9) has a higher signal-to-noise ratio than ys(i); the recovery filtering is therefore applied to e(i), i.e.:
ŝ(i) = h(0)e(i) + h(1)e(i−1) + ... + h(L)e(i−L) (17),
where
h = [h(0), h(1), h(2), ..., h(L)] = [1, 0, β², 0, β⁴, ..., 0, β^L].
obviously, the above speech signal recovery is performed by the formula (17), only L-order Finite Impulse Response (FIR) filtering is needed, and L is usually equal to 30, so that the computation amount is less than that of the frequency domain recovery method, and the method is easier to implement in real time, and the delay time of the output signal caused by the recovery algorithm is only L sample points, so that the delay time is also reduced by a lot compared with the delay time of one frame (generally 256 sample points) caused by the frequency domain recovery method, and the method of this embodiment obtains a smaller computation amount and a smaller time delay than the frequency domain recovery algorithm in the prior art.

Claims (1)

1. A method for dual microphone speech enhancement, comprising the steps of:
1) two microphones are used to receive the two-channel noisy speech signal, the speech source lying on the line connecting the two microphones at the end of one microphone M1 and the noise source lying on the same line at the end of the other microphone M2; given the sampling frequency fs and the sound velocity c, the distance d between the two microphones is given by formula (1):
d = c/fs (1),
the speech signal received by M1 at the i-th sampling instant is s(i), the noise signal received by M2 is n(i), and the noisy speech signals received by M1 and M2 are x1(i) and x2(i), as shown in formulas (2) and (3):
x1(i)=s(i)+βn(i-1) (2),
x2(i)=n(i)+βs(i-1) (3),
where β (0 < β < 1) represents the signal amplitude attenuation factor for sound travelling the distance d from M1 to M2,
first, a differential operation is applied to the two-channel noisy speech signals, i.e. fixed differential beamforming is used to suppress the directional noise, yielding the differential signal ys(i) and the differential noise yn(i) required by the subsequent processing, specifically: the differential operation on the two microphone signals gives the differential signal ys(i) shown in formula (4):
ys(i) = x1(i) − βx2(i−1) = s(i) − β²s(i−2) (4),
and the differential noise yn(i) shown in formula (5):
yn(i) = x2(i) − βx1(i−1) = n(i) − β²n(i−2) (5),
2) the differential signal ys(i) and differential noise yn(i) obtained in step 1) are not restored; instead, an improved adaptive noise cancellation (ANC) algorithm based on voice activity detection (VAD) is applied to ys(i) and yn(i) to eliminate the residual noise in the differential signal ys(i), the adaptive processor in the improved ANC algorithm using the least mean square (LMS) algorithm, the primary input of the ANC being the differential signal ys(i) and the reference input being the differential noise yn(i), as shown in formulas (6) to (10):
y(i) = W^T(i)·Yn(i) (6),
Yn(i) = [yn(i), yn(i−1), yn(i−2), ..., yn(i−N)]^T (7),
W(i) = [w0(i), w1(i), w2(i), ..., wN(i)]^T (8),
e(i) = ys(i) − y(i) (9),
W(i+1) = W(i) + μe(i)Yn(i) (10),
where y(i) is the output signal of the adaptive filter, Yn(i) is the vector formed by the values of the differential noise signal yn(i) at successive instants, e(i) is the error signal, N is the order of the adaptive FIR filter, W(i) is the coefficient vector of the adaptive filter, the mean-square value of the error signal e(i) being minimized by adjusting the adaptive filter coefficients W(i), μ is the update step size of the adaptive filter coefficients, and the VAD detects speech/non-speech periods from ys(i), the detection result controlling whether the adaptive filter coefficients are updated;
3) the improved ANC speech enhancement is adopted: the adaptive filter coefficients of the ANC are updated only during periods without speech and are kept unchanged during periods with speech, i.e. the ANC algorithm is controlled by the VAD; a time-domain finite impulse response (FIR) recovery filter is then used to restore the speech distortion of the differential signal ys(i) and differential noise yn(i) obtained in step 1):
from equation (4): signal y output by differential operations(i) Not the speech signal s (i), but the distorted signal s (i) - β of s (i)2s (i-2), obtained by Z-transforming equation (4):
Ys(z) = (1 − β²z⁻²)S(z) (11),
equation (12) is derived from equation (11):
S(z) = Ys(z)/(1 − β²z⁻²) = Ys(z)·(1 + β²z⁻² + β⁴z⁻⁴ + ...) (12),
converting formula (12) from the Z domain to the time domain and truncating the series at the recovery filter order L = 30 gives:
s(i) ≈ ys(i) + β²ys(i−2) + β⁴ys(i−4) + ... + β^L ys(i−L) (13),
taking
h(l) = β^l, l = 0, 2, 4, ..., L (14),
h(l) = 0, l = 1, 3, 5, ..., L−1 (15),
Then
ŝ(i) = h(0)ys(i) + h(1)ys(i−1) + ... + h(L)ys(i−L) (16),
where h(l) are the coefficients of the recovery filter and L is its order; adaptive noise cancellation (ANC) suppresses the residual noise in the differential signal ys(i), i.e. e(i) of formula (9) has a higher signal-to-noise ratio than ys(i), and the recovery filtering is therefore applied to e(i), i.e.:
ŝ(i) = h(0)e(i) + h(1)e(i−1) + ... + h(L)e(i−L) (17),
where
h = [h(0), h(1), h(2), ..., h(L)] = [1, 0, β², 0, β⁴, ..., 0, β^L].
CN202110227971.XA 2021-03-02 Method for enhancing speech by double microphones (Expired - Fee Related) CN112951260B (en)

Priority Application (1)

Application Number: CN202110227971.XA; Priority Date / Filing Date: 2021-03-02; Title: Method for enhancing speech by double microphones

Publications (2)

Publication Number / Publication Date
CN112951260A (en) 2021-06-11
CN112951260B (en) 2022-07-19

Family

ID=76246997

Country Status (1)

CN (1) CN112951260B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG97885A1 (en) * 2000-05-05 2003-08-20 Univ Nanyang Noise canceler system with adaptive cross-talk filters
US20140372113A1 (en) * 2001-07-12 2014-12-18 Aliphcom Microphone and voice activity detection (vad) configurations for use with communication systems
ATE402468T1 (en) * 2004-03-17 2008-08-15 Harman Becker Automotive Sys SOUND TUNING DEVICE, USE THEREOF AND SOUND TUNING METHOD
US10115386B2 (en) * 2009-11-18 2018-10-30 Qualcomm Incorporated Delay techniques in active noise cancellation circuits or other circuits that perform filtering of decimated coefficients

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976565A (en) * 2010-07-09 2011-02-16 瑞声声学科技(深圳)有限公司 Dual-microphone-based speech enhancement device and method
CN109243482A (en) * 2018-10-30 2019-01-18 深圳市昂思科技有限公司 Improve the miniature array voice de-noising method of ACRANC and Wave beam forming
CN110085247A (en) * 2019-05-06 2019-08-02 上海互问信息科技有限公司 A kind of dual microphone noise-reduction method for complicated noise

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Consolidated Perspective on Multimicrophone Speech Enhancement and Source Separation; Sharon Gannot et al.; IEEE/ACM Transactions on Audio, Speech, and Language Processing; 2017-12-31; Vol. 25, No. 4; full text *
Nonlinear speech enhancement: An overview; A. Hussain et al.; Progress in Nonlinear Speech Processing; 2007-12-31; full text *
A robust speech recognition method based on noise cancellation and cepstral mean subtraction (一种基于噪声对消与倒谱均值相减的鲁棒语音识别方法); Wang Zhenli et al.; CAAI Transactions on Intelligent Systems (智能系统学报); 2008-12-15; No. 6; full text *
Small-array speech enhancement based on noise cancellation and beamforming (基于噪声抵消与波束形成的小阵语音增强); Long Chao et al.; Journal of Computer Applications (计算机应用); 2020-08-10; Vol. 40, No. 8; full text *
Research on adaptive beamforming and post-filtering techniques for differential microphone arrays (差分麦克风阵列自适应波束形成及后置滤波技术研究); Wu Pengfei; China Master's Theses Full-text Database (中国优秀硕士学位论文全文数据库); 2021-02-15; No. 2; full text *



Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 2022-07-19)