Summary of the invention
The technical problem to be solved by the present invention is to provide a speech separation method and device, so as to address the problem in the prior art that a large amount of cross-channel signal remains in the separated output.
The present invention solves the above technical problem through the following technical solutions:
An embodiment of the present invention provides a speech separation method, the method comprising:
obtaining the to-be-separated voice data of each signal channel, wherein the to-be-separated voice data contains voice data generated by at least two people speaking simultaneously;
for each preset sampling instant, performing separation processing on the to-be-separated voice data using a blind source separation algorithm to obtain P separated signals;
for each separated signal, computing the cross residual coefficients between the current separated signal and the other separated signals among the P separated signals, excluding the current separated signal, and judging whether the cross residual coefficients are less than a first preset threshold;
if not, performing echo cancellation processing, using an echo cancellation algorithm, on the separated signals whose cross residual coefficients are not less than the first preset threshold, and taking the set of the processed separated signals together with all separated signals whose cross residual coefficients are less than the first preset threshold as the target separated signals;
if so, taking the separated signal as a target separated signal.
Optionally, the blind source separation algorithm includes one or a combination of: nonlinear principal component analysis, independent component analysis, a neural network algorithm, a maximum entropy algorithm, a minimum mutual information algorithm, and a maximum likelihood algorithm.
Optionally, the performing separation processing on the to-be-separated voice data using a blind source separation algorithm comprises:
for each channel of to-be-separated voice data, establishing a cost function for the to-be-separated voice data using the NPCA criterion: J(W) = E{||x(t) - W^T g(Wx(t))||^2}, wherein
J(W) is the cost of the separation matrix at time t; E{ } is the expectation operator; x(t) is the observation signal observed on the signal channel corresponding to each microphone; W is the separation matrix; (.)^T denotes transposition; g(.) is the nonlinear function; t is the current time;
performing minimization processing on the cost function to obtain the iterative estimate of the separation matrix:
W(t+1) = W(t) + θ·z(t)[x^T(t) - z^T(t)W(t)], wherein
W(t+1) is the separation matrix at time t+1; W(t) is the separation matrix at time t; θ is the iteration step size, taken as θ(t), which is updated from θ(t-1), the iteration step size at time t-1, a constant ρ, and ∇J(t), the gradient of the cost J(t) at time t; z(t) = g(W(t)x(t)) is the output of the nonlinear function;
iteratively computing the separation matrix at the next instant using the formula W(t+1) = W(t) + θ·z(t)[x^T(t) - z^T(t)W(t)] until the separation matrix converges, so as to obtain the target separation matrix of each channel of to-be-separated voice data;
obtaining the separated signal of the to-be-separated voice data using the formula y(t) = Wx(t), wherein y(t) is the separated signal of the current observation signal.
Optionally, the computing the cross residual coefficients between the current separated signal and the other separated signals among the P separated signals, excluding the current separated signal, comprises:
computing the cross residual coefficient ε(i,j) between the current separated signal and each of the other separated signals among the P separated signals from the mixing coefficients and the source signals, wherein
ε(i,j) is the cross residual coefficient between the current separated signal of the i-th channel and the other separated signal of the j-th channel among the P separated signals; i is the channel number of the current separated signal; j is the channel number of another separated signal among the P separated signals; a_{i,k} is the mixing coefficient between the separated signal of the i-th channel and the k-th source signal; a_{j,k} is the mixing coefficient between the separated signal of the j-th channel and the k-th source signal; y_k is the sound source signal of the k-th channel; Σ is the summation function.
Optionally, the performing echo cancellation processing, using an echo cancellation algorithm, on the separated signals whose cross residual coefficients are not less than the first preset threshold comprises:
for each separated signal among the separated signals whose cross residual coefficients are not less than the first preset threshold, taking the current separated signal as the near-end signal, and taking the other signals among the separated signals whose cross residual coefficients are not less than the first preset threshold, excluding the current separated signal, as far-end signals;
obtaining the error signal using the formula e(n) = d(n) - Σ_k ŵ_k(n)x(n-k), wherein e(n) is the error signal; d(n) is the desired output signal; N is the duration corresponding to each audio frame, its value being the filter length; k is the index of a sampling point within the audio frame; ŵ_k(n) is the filter coefficient corresponding to the k-th sampling point at the n-th iteration; n is the iteration number; x(n-k) is the observation signal at the (n-k)-th iteration;
updating the iteration step size μ(n), which is computed at the n-th iteration from σ_v², the variance of the near-end signal, the input power Σ_i|x(n-i)|², and Λ(n), the misalignment at the n-th iteration, wherein N is the duration corresponding to each audio frame, its value being the filter length, and k ∈ (0, N); x(n-i) is the observation signal at the (n-i)-th iteration;
updating the estimate of the filter coefficients using the formula ŵ_k(n+1) = ŵ_k(n) + μ(n)e(n)x*(n-k)/Σ_i|x(n-i)|², wherein ŵ_k(n+1) is the estimate of the filter coefficient corresponding to the k-th sampling point at the (n+1)-th iteration; μ(n) is the iteration step size; ŵ_k(n) is the estimate of the filter coefficient at the n-th iteration; N is the duration corresponding to each audio frame, its value being the filter length; x(n-i) is the observation signal at the (n-i)-th iteration; x*(n-k) is the conjugate of the observation signal at the (n-k)-th iteration; |·| is the modulus function;
calculating the desired signal at the n-th iteration using the formula d(n) = v(n) + Σ_k w_k(n)x(n-k), wherein v(n) is the near-end signal; w_k(n) is the theoretical value of the filter coefficient corresponding to the k-th sampling point at the n-th iteration; x(n-k) is the observation signal at the (n-k)-th iteration;
judging whether the desired signal at the n-th iteration converges; if so, returning to the step of taking the current separated signal as the near-end signal; if not, taking the desired signal at the n-th iteration as the signal after echo cancellation.
An embodiment of the present invention provides a speech separation device, the device comprising:
a first obtaining module, configured to obtain the to-be-separated voice data of each signal channel, wherein the to-be-separated voice data contains voice data generated by at least two people speaking simultaneously;
a second obtaining module, configured to, for each preset sampling instant, perform separation processing on the to-be-separated voice data using a blind source separation algorithm to obtain P separated signals;
a computing module, configured to, for each separated signal, compute the cross residual coefficients between the current separated signal and the other separated signals among the P separated signals, excluding the current separated signal, and judge whether the cross residual coefficients are less than a first preset threshold;
a cancellation module, configured to, in a case where the judgment result of the computing module is no, perform echo cancellation processing, using an echo cancellation algorithm, on the separated signals whose cross residual coefficients are not less than the first preset threshold, and take the set of the processed separated signals together with all separated signals whose cross residual coefficients are less than the first preset threshold as the target separated signals;
a setting module, configured to, in a case where the judgment result of the computing module is yes, take the separated signal as a target separated signal.
Optionally, the blind source separation algorithm includes one or a combination of: nonlinear principal component analysis, independent component analysis, a neural network algorithm, a maximum entropy algorithm, a minimum mutual information algorithm, and a maximum likelihood algorithm.
Optionally, the second obtaining module is further configured to:
for each channel of to-be-separated voice data, establish a cost function for the to-be-separated voice data using the NPCA criterion: J(W) = E{||x(t) - W^T g(Wx(t))||^2}, wherein
J(W) is the cost of the separation matrix at time t; E{ } is the expectation operator; x(t) is the observation signal observed on the signal channel corresponding to each microphone; W is the separation matrix; (.)^T denotes transposition; g(.) is the nonlinear function; t is the current time;
perform minimization processing on the cost function to obtain the iterative estimate of the separation matrix:
W(t+1) = W(t) + θ·z(t)[x^T(t) - z^T(t)W(t)], wherein
W(t+1) is the separation matrix at time t+1; W(t) is the separation matrix at time t; θ is the iteration step size, taken as θ(t), which is updated from θ(t-1), the iteration step size at time t-1, a constant ρ, and ∇J(t), the gradient of the cost J(t) at time t; z(t) = g(W(t)x(t)) is the output of the nonlinear function;
iteratively compute the separation matrix at the next instant using the formula W(t+1) = W(t) + θ·z(t)[x^T(t) - z^T(t)W(t)] until the separation matrix converges, so as to obtain the target separation matrix of each channel of to-be-separated voice data;
obtain the separated signal of the to-be-separated voice data using the formula y(t) = Wx(t), wherein y(t) is the separated signal of the current observation signal.
Optionally, the computing module is further configured to:
compute the cross residual coefficient ε(i,j) between the current separated signal and each of the other separated signals among the P separated signals from the mixing coefficients and the source signals, wherein
ε(i,j) is the cross residual coefficient between the current separated signal of the i-th channel and the other separated signal of the j-th channel among the P separated signals; i is the channel number of the current separated signal; j is the channel number of another separated signal among the P separated signals; a_{i,k} is the mixing coefficient between the separated signal of the i-th channel and the k-th source signal; a_{j,k} is the mixing coefficient between the separated signal of the j-th channel and the k-th source signal; y_k is the sound source signal of the k-th channel; Σ is the summation function.
Optionally, the cancellation module is further configured to:
for each separated signal among the separated signals whose cross residual coefficients are not less than the first preset threshold, take the current separated signal as the near-end signal, and take the other signals among the separated signals whose cross residual coefficients are not less than the first preset threshold, excluding the current separated signal, as far-end signals;
obtain the error signal using the formula e(n) = d(n) - Σ_k ŵ_k(n)x(n-k), wherein e(n) is the error signal; d(n) is the desired output signal; N is the duration corresponding to each audio frame, its value being the filter length; k is the index of a sampling point within the audio frame; ŵ_k(n) is the filter coefficient corresponding to the k-th sampling point at the n-th iteration; n is the iteration number; x(n-k) is the observation signal at the (n-k)-th iteration;
update the iteration step size μ(n), which is computed at the n-th iteration from σ_v², the variance of the near-end signal, the input power Σ_i|x(n-i)|², and Λ(n), the misalignment at the n-th iteration, wherein N is the duration corresponding to each audio frame, its value being the filter length, and k ∈ (0, N); x(n-i) is the observation signal at the (n-i)-th iteration;
update the estimate of the filter coefficients using the formula ŵ_k(n+1) = ŵ_k(n) + μ(n)e(n)x*(n-k)/Σ_i|x(n-i)|², wherein ŵ_k(n+1) is the estimate of the filter coefficient corresponding to the k-th sampling point at the (n+1)-th iteration; μ(n) is the iteration step size; ŵ_k(n) is the estimate of the filter coefficient at the n-th iteration; N is the duration corresponding to each audio frame, its value being the filter length; x(n-i) is the observation signal at the (n-i)-th iteration; x*(n-k) is the conjugate of the observation signal at the (n-k)-th iteration; |·| is the modulus function;
calculate the desired signal at the n-th iteration using the formula d(n) = v(n) + Σ_k w_k(n)x(n-k), wherein v(n) is the near-end signal; w_k(n) is the theoretical value of the filter coefficient corresponding to the k-th sampling point at the n-th iteration; x(n-k) is the observation signal at the (n-k)-th iteration;
judge whether the desired signal at the n-th iteration converges; if so, return to the step of taking the current separated signal as the near-end signal; if not, take the desired signal at the n-th iteration as the signal after echo cancellation.
Compared with the prior art, the present invention has the following advantage:
With the embodiments of the present invention, the cross-channel signal remaining in the separated signals can be regarded as echo from the other sound sources, and an echo cancellation algorithm is then used to perform echo cancellation processing on each separated signal, thereby improving the separation effect and reducing the cross-channel signal residue in the target signals.
Specific embodiment
The embodiments of the present invention are described in detail below. The embodiments are implemented on the premise of the technical solution of the present invention, and detailed implementations and specific operation processes are given, but the protection scope of the present invention is not limited to the following embodiments.
Embodiments of the present invention provide a speech separation method and device. The speech separation method provided by the embodiments of the present invention is introduced first.
It should first be noted that the embodiments of the present invention have a wide range of application scenarios, for example: (1) Traditionally, the monitoring of public places includes only video monitoring and cannot include sound monitoring, because in a public place multiple speakers may speak at the same time, and various ambient noises, background music, and the like may also be present. With the embodiments of the present invention, simultaneous voice and video monitoring can be realized in the security field. (2) Meeting transcription systems that produce meeting minutes in real time have appeared in the industry so as to complete meeting minutes efficiently, but such systems fail when multiple people speak at the same time (for example, during a lively debate in a meeting); existing speech recognition systems cannot cope with multi-speaker speech recognition scenarios at all. The embodiments of the present invention can be applied to intelligent meeting systems. (3) The embodiments can be applied to general voice denoising scenarios: through sound separation, the channel in which the user speaks normally is retained and the channels without normal speech are removed, thereby realizing voice denoising.
Fig. 1 is a schematic flowchart of a speech separation method provided by an embodiment of the present invention. As shown in Fig. 1, the method includes:
S101: obtaining the to-be-separated voice data of each signal channel, wherein the to-be-separated voice data contains voice data generated by at least two people speaking simultaneously.
Specifically, at least two microphones arranged at different positions are used to obtain the voice data of two or more people speaking at the same time, and each microphone obtains one channel of to-be-separated voice data; for example, microphone-1 obtains to-be-separated voice data-1, microphone-2 obtains to-be-separated voice data-2, microphone-3 obtains to-be-separated voice data-3, and so on for the remaining microphones.
It can be understood that each channel of to-be-separated voice data corresponds to one signal channel.
S102: for each preset sampling instant, performing separation processing on the to-be-separated voice data using a blind source separation algorithm to obtain P separated signals.
Specifically, the blind source separation algorithm includes one or a combination of: nonlinear principal component analysis, independent component analysis, a neural network algorithm, a maximum entropy algorithm, a minimum mutual information algorithm, and a maximum likelihood algorithm.
Specifically, with each channel of to-be-separated voice data in step S101 as input, the following method can be used to obtain the P separated signals corresponding to each preset sampling instant. For each channel of to-be-separated voice data, a cost function for the to-be-separated voice data can be established using the NPCA (nonlinear principal component analysis) criterion:
J(W) = E{||x(t) - W^T g(Wx(t))||^2}, wherein J(W) is the cost of the separation matrix at time t; E{ } is the expectation operator; x(t) is the observation signal observed on the signal channel corresponding to each microphone; W is the separation matrix; (.)^T denotes transposition; g(.) is the nonlinear function; t is the current time;
minimization processing is performed on the cost function to obtain the iterative estimate of the separation matrix:
W(t+1) = W(t) + θ·z(t)[x^T(t) - z^T(t)W(t)], wherein
W(t+1) is the separation matrix at time t+1; W(t) is the separation matrix at time t; θ is the iteration step size, taken as θ(t), which is updated from θ(t-1), the iteration step size at time t-1, a constant ρ, and ∇J(t), the gradient of the cost J(t) at time t; z(t) = g(W(t)x(t)) is the output of the nonlinear function;
the separation matrix at the next instant is iteratively computed using the formula W(t+1) = W(t) + θ·z(t)[x^T(t) - z^T(t)W(t)] until the separation matrix converges, so as to obtain the target separation matrix of each channel of to-be-separated voice data;
the separated signal of the to-be-separated voice data is obtained using the formula y(t) = Wx(t), wherein y(t) is the separated signal of the current observation signal.
For example, P separated signals can be obtained for the 1st sampling instant, P separated signals can be obtained for the 2nd sampling instant, and so on up to the P separated signals for the n-th sampling instant.
It should be emphasized that the aforementioned observation signal refers to each channel of to-be-separated voice data.
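As a rough illustration of the iterative update above, the following Python sketch applies W(t+1) = W(t) + θ·z(t)[x^T(t) - z^T(t)W(t)] sample by sample to a two-channel synthetic mixture. The tanh nonlinearity, the fixed step size, and the mixing matrix are illustrative assumptions; the adaptive step-size rule of the embodiment is not reproduced here.

```python
import numpy as np

def npca_separate(x, theta=0.01, max_epochs=50, tol=1e-6):
    """Iterate W(t+1) = W(t) + theta * z(t) @ (x(t)^T - z(t)^T W(t))
    over the samples until the separation matrix W converges.
    x: (P, T) array, one row of observations per microphone channel."""
    P, T = x.shape
    W = np.eye(P)
    for _ in range(max_epochs):
        W_prev = W.copy()
        for t in range(T):
            xt = x[:, t:t + 1]            # column vector x(t)
            z = np.tanh(W @ xt)           # z(t) = g(W x(t)); tanh assumed as g
            W = W + theta * z @ (xt.T - z.T @ W)
        if np.linalg.norm(W - W_prev) < tol:
            break                          # separation matrix has converged
    return W @ x, W                        # y(t) = W x(t)

# two synthetic sources mixed into two observation channels
rng = np.random.default_rng(0)
s = np.vstack([np.sign(np.sin(np.linspace(0.0, 40.0, 500))),
               rng.uniform(-1.0, 1.0, 500)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # hypothetical mixing matrix
y, W = npca_separate(A @ s)
```

In practice the observations would be whitened first and the step size adapted per iteration as described above; the sketch only shows the shape of the update loop.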
S103: for each separated signal, computing the cross residual coefficients between the current separated signal and the other separated signals among the P separated signals, excluding the current separated signal, and judging whether the cross residual coefficients are less than a first preset threshold; if not, executing step S104; if so, executing step S105.
Specifically, each obtained separated signal can be denoted s_i(n), where i is the signal channel index corresponding to the separated signal, i.e. the index of the microphone, and n is the index of the sampling instant in each signal channel. The inventors have found that, in practical applications, any one separated signal can be regarded as a mixture of cross residual signals from the other P-1 channels at the same instant. Therefore, the cross residual coefficient ε(i,j) between the current separated signal and each of the other separated signals among the P separated signals can be computed from the mixing coefficients and the source signals, wherein
ε(i,j) is the cross residual coefficient between the current separated signal of the i-th channel and the other separated signal of the j-th channel among the P separated signals; i is the channel number of the current separated signal; j is the channel number of another separated signal among the P separated signals; a_{i,k} is the mixing coefficient between the separated signal of the i-th channel and the k-th source signal (it can be understood that when k is 1, the value of a_{i,k} is 1); a_{j,k} is the mixing coefficient between the separated signal of the j-th channel and the k-th source signal; y_k is the sound source signal of the k-th channel; Σ is the summation function.
When a cross residual coefficient is not less than the first preset threshold, e.g. 0.0125, step S104 is executed; if the cross residual coefficient is less than the first preset threshold, step S105 is executed.
In practical applications, t_i(n) can be used to denote a separated signal that needs echo cancellation processing, where i is the signal channel index corresponding to the separated signal, i.e. the index of the microphone, n is the index of the sampling instant in each signal channel, and i ∈ {1, 2, ..., Q}, Q ≤ P.
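The closed form of the cross residual coefficient is given by the formula of this embodiment; as a hypothetical stand-in for illustration only, the Python sketch below uses the absolute normalized cross-correlation between two separated signals as the cross-residual measure and applies the first preset threshold (0.0125, as in the example above) to decide which channels would proceed to step S104. The function names and the correlation measure are assumptions, not the patent's formula.

```python
import numpy as np

def cross_residual(si, sj):
    """Hypothetical stand-in for the cross residual coefficient:
    absolute normalized cross-correlation of two separated signals."""
    si = si - si.mean()
    sj = sj - sj.mean()
    denom = np.sqrt(np.sum(si ** 2) * np.sum(sj ** 2))
    return float(abs(np.sum(si * sj)) / denom) if denom else 0.0

def channels_needing_aec(separated, threshold=0.0125):
    """S103 decision: return the indices of separated channels whose cross
    residual with any other channel is not less than the threshold."""
    P = len(separated)
    return sorted({i for i in range(P) for j in range(P)
                   if i != j
                   and cross_residual(separated[i], separated[j]) >= threshold})

t = np.linspace(0.0, 1.0, 1000)
clean = [np.sin(2 * np.pi * 5 * t), np.cos(2 * np.pi * 9 * t)]
leaky = [clean[0] + 0.3 * clean[1], clean[1]]   # channel 0 keeps a residue of source 1
```

With the `leaky` pair, channel 0 carries a clear residue of source 1 and is flagged for echo cancellation; with well-separated signals the measure stays near zero and no channel is flagged.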
S104: performing echo cancellation processing, using an echo cancellation algorithm, on the separated signals whose cross residual coefficients are not less than the first preset threshold, and taking the set of the processed separated signals together with all separated signals whose cross residual coefficients are less than the first preset threshold as the target separated signals.
Specifically, the echo cancellation algorithm includes a frequency-domain MDF algorithm. In practical applications, each separated signal t_i(n) that needs echo cancellation processing is taken in turn as the near-end signal, the other separated signals that need echo cancellation processing are taken as far-end signals, and the signals are processed using the echo cancellation algorithm.
Specifically, the performing echo cancellation processing, using an echo cancellation algorithm, on the separated signals whose cross residual coefficients are not less than the first preset threshold comprises:
for each separated signal among the separated signals whose cross residual coefficients are not less than the first preset threshold, taking the current separated signal as the near-end signal, and taking the other signals among the separated signals whose cross residual coefficients are not less than the first preset threshold, excluding the current separated signal, as far-end signals;
obtaining the error signal using the formula e(n) = d(n) - Σ_k ŵ_k(n)x(n-k), wherein e(n) is the error signal; d(n) is the desired output signal; N is the duration corresponding to each audio frame, its value being the filter length; k is the index of a sampling point within the audio frame; ŵ_k(n) is the filter coefficient corresponding to the k-th sampling point at the n-th iteration; n is the iteration number; x(n-k) is the observation signal at the (n-k)-th iteration;
updating the iteration step size μ(n), which is computed at the n-th iteration from σ_v², the variance of the near-end signal, the input power Σ_i|x(n-i)|², and Λ(n), the misalignment at the n-th iteration, wherein N is the duration corresponding to each audio frame, its value being the filter length, and k ∈ (0, N); x(n-i) is the observation signal at the (n-i)-th iteration;
updating the estimate of the filter coefficients using the formula ŵ_k(n+1) = ŵ_k(n) + μ(n)e(n)x*(n-k)/Σ_i|x(n-i)|², wherein ŵ_k(n+1) is the estimate of the filter coefficient corresponding to the k-th sampling point at the (n+1)-th iteration; μ(n) is the iteration step size; ŵ_k(n) is the estimate of the filter coefficient at the n-th iteration; N is the duration corresponding to each audio frame, its value being the filter length; x(n-i) is the observation signal at the (n-i)-th iteration; x*(n-k) is the conjugate of the observation signal at the (n-k)-th iteration; |·| is the modulus function;
calculating the desired signal at the n-th iteration using the formula d(n) = v(n) + Σ_k w_k(n)x(n-k), wherein v(n) is the near-end signal; w_k(n) is the theoretical value of the filter coefficient corresponding to the k-th sampling point at the n-th iteration; x(n-k) is the observation signal at the (n-k)-th iteration;
judging whether the desired signal at the n-th iteration converges; if so, returning to the step of taking the current separated signal as the near-end signal; if not, taking the desired signal at the n-th iteration as the signal after echo cancellation.
The separated signals after echo cancellation processing, together with all separated signals whose cross residual coefficients are less than the first preset threshold, i.e. the separated signals that do not need echo cancellation processing, are taken as the set of target separated signals.
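The adaptive filter update described above can be sketched in time-domain NLMS form (the frequency-domain MDF variant operates block-wise but follows the same coefficient update). The echo path, signal lengths, and the fixed step size below are illustrative assumptions, and the variable step-size rule for μ(n) is not reproduced.

```python
import numpy as np

def nlms_echo_cancel(far, near, N=16, mu=0.5, eps=1e-8):
    """Time-domain NLMS sketch of the update
    w_k(n+1) = w_k(n) + mu * e(n) * x*(n-k) / sum_i |x(n-i)|^2.
    far: far-end (reference) signal; near: signal containing its echo."""
    w = np.zeros(N)
    e = np.zeros(len(near))
    for n in range(N - 1, len(near)):
        x = far[n - N + 1:n + 1][::-1]      # x(n), x(n-1), ..., x(n-N+1)
        e[n] = near[n] - w @ x              # error = echo-cancelled output
        w = w + mu * e[n] * x / (x @ x + eps)
    return e, w

rng = np.random.default_rng(1)
far = rng.standard_normal(5000)
h = np.array([0.8, 0.0, 0.3, 0.0, -0.2])    # hypothetical echo path
near = np.convolve(far, h)[:5000]           # pure echo, no near-end speech
e, w = nlms_echo_cancel(far, near)
```

With no near-end speech, the filter converges to the echo path and the residual error decays toward zero, which is the behaviour the embodiment relies on to remove the cross residual regarded as echo.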
S105: taking the separated signal as a target separated signal.
The separated signals whose cross residual coefficients are less than the first preset threshold, i.e. the separated signals that do not need echo cancellation processing, are taken as the set of target separated signals.
It should be noted that the P separated signals at the (n+1)-th sampling instant are also processed according to the above method, and the target separated signals at each sampling instant are finally obtained.
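Putting S101 through S105 together, the following self-contained Python sketch shows the control flow at one sampling window: channels whose cross-residual measure stays below the first preset threshold pass straight through, and the rest are echo-cancelled against the other flagged channels. The correlation measure and the projection-based canceller are simplified stand-ins (for the patent's cross residual formula and the frequency-domain MDF algorithm, respectively), used here only to show the branching logic.

```python
import numpy as np

def cross_residual(si, sj):
    # simplified stand-in: absolute normalized correlation
    d = np.sqrt(np.sum(si ** 2) * np.sum(sj ** 2))
    return float(abs(np.sum(si * sj)) / d) if d else 0.0

def echo_cancel(near, far_refs):
    # simplified stand-in for S104: subtract the projection of each
    # far-end reference from the near-end signal
    out = near.astype(float).copy()
    for far in far_refs:
        denom = far @ far
        if denom:
            out -= (out @ far) / denom * far
    return out

def select_target_signals(separated, threshold=0.0125):
    """S103-S105 branching: pass clean channels through, echo-cancel the rest."""
    P = len(separated)
    flagged = [i for i in range(P)
               if any(cross_residual(separated[i], separated[j]) >= threshold
                      for j in range(P) if j != i)]
    targets = []
    for i in range(P):
        if i in flagged:
            refs = [separated[j] for j in flagged if j != i]
            targets.append(echo_cancel(separated[i], refs))   # S104 branch
        else:
            targets.append(separated[i])                      # S105 branch
    return targets

t = np.linspace(0.0, 1.0, 1000)
s1, s2 = np.sin(2 * np.pi * 5 * t), np.cos(2 * np.pi * 9 * t)
separated = [s1 + 0.3 * s2, s2]       # channel 0 carries a cross residue
targets = select_target_signals(separated)
```

For channel 0, the residue of source 1 is removed before the channel joins the target set; in the full method, the projection step would be replaced by the MDF echo canceller described above.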
With the embodiment of the present invention shown in Fig. 1, the cross-channel signal remaining in the separated signals can be regarded as echo from the other sound sources, and an echo cancellation algorithm is then used to perform echo cancellation processing on each separated signal, thereby improving the separation effect and reducing the cross-channel signal residue in the target signals.
In addition, if echo cancellation were used to post-process every separated signal after blind source separation, a great amount of extra computation would be incurred; with the embodiments of the present invention, it can be effectively judged which signals after blind source separation are suitable for echo cancellation processing, which improves the separation effect and in turn effectively improves the working efficiency of the whole system.
Corresponding to the embodiment of the present invention shown in Fig. 1, an embodiment of the present invention further provides a speech separation device.
Fig. 2 is a schematic structural diagram of a speech separation device provided by an embodiment of the present invention. As shown in Fig. 2, the device includes:
a first obtaining module 201, configured to obtain the to-be-separated voice data of each signal channel, wherein the to-be-separated voice data contains voice data generated by at least two people speaking simultaneously;
a second obtaining module 202, configured to, for each preset sampling instant, perform separation processing on the to-be-separated voice data using a blind source separation algorithm to obtain P separated signals;
a computing module 203, configured to, for each separated signal, compute the cross residual coefficients between the current separated signal and the other separated signals among the P separated signals, excluding the current separated signal, and judge whether the cross residual coefficients are less than a first preset threshold;
a cancellation module 204, configured to, in a case where the judgment result of the computing module is no, perform echo cancellation processing, using an echo cancellation algorithm, on the separated signals whose cross residual coefficients are not less than the first preset threshold, and take the set of the processed separated signals together with all separated signals whose cross residual coefficients are less than the first preset threshold as the target separated signals;
a setting module 205, configured to, in a case where the judgment result of the computing module is yes, take the separated signal as a target separated signal.
With the embodiment of the present invention shown in Fig. 2, the cross-channel signal remaining in the separated signals can be regarded as echo from the other sound sources, and an echo cancellation algorithm is then used to perform echo cancellation processing on each separated signal, thereby improving the separation effect and reducing the cross-channel signal residue in the target signals.
In a specific implementation of the embodiment of the present invention, the blind source separation algorithm includes one or a combination of: nonlinear principal component analysis, independent component analysis, a neural network algorithm, a maximum entropy algorithm, a minimum mutual information algorithm, and a maximum likelihood algorithm.
In a specific implementation of the embodiment of the present invention, the second obtaining module 202 is further configured to:
for each channel of to-be-separated voice data, establish a cost function for the to-be-separated voice data using the NPCA criterion: J(W) = E{||x(t) - W^T g(Wx(t))||^2}, wherein
J(W) is the cost of the separation matrix at time t; E{ } is the expectation operator; x(t) is the observation signal observed on the signal channel corresponding to each microphone; W is the separation matrix; (.)^T denotes transposition; g(.) is the nonlinear function; t is the current time;
perform minimization processing on the cost function to obtain the iterative estimate of the separation matrix:
W(t+1) = W(t) + θ·z(t)[x^T(t) - z^T(t)W(t)], wherein
W(t+1) is the separation matrix at time t+1; W(t) is the separation matrix at time t; θ is the iteration step size, taken as θ(t), which is updated from θ(t-1), the iteration step size at time t-1, a constant ρ, and ∇J(t), the gradient of the cost J(t) at time t; z(t) = g(W(t)x(t)) is the output of the nonlinear function;
iteratively compute the separation matrix at the next instant using the formula W(t+1) = W(t) + θ·z(t)[x^T(t) - z^T(t)W(t)] until the separation matrix converges, so as to obtain the target separation matrix of each channel of to-be-separated voice data;
obtain the separated signal of the to-be-separated voice data using the formula y(t) = Wx(t), wherein y(t) is the separated signal of the current observation signal.
In a specific implementation of the embodiment of the present invention, the computing module 203 is further configured to:
compute the cross residual coefficient ε(i,j) between the current separated signal and each of the other separated signals among the P separated signals from the mixing coefficients and the source signals, wherein
ε(i,j) is the cross residual coefficient between the current separated signal of the i-th channel and the other separated signal of the j-th channel among the P separated signals; i is the channel number of the current separated signal; j is the channel number of another separated signal among the P separated signals; a_{i,k} is the mixing coefficient between the separated signal of the i-th channel and the k-th source signal; a_{j,k} is the mixing coefficient between the separated signal of the j-th channel and the k-th source signal; y_k is the sound source signal of the k-th channel; Σ is the summation function.
In a specific implementation of the embodiment of the present invention, the cancellation module 204 is further configured to:
for each separated signal among the separated signals whose cross residual coefficients are not less than the first preset threshold, take the current separated signal as the near-end signal, and take the other signals among the separated signals whose cross residual coefficients are not less than the first preset threshold, excluding the current separated signal, as far-end signals;
obtain the error signal using the formula e(n) = d(n) - Σ_k ŵ_k(n)x(n-k), wherein e(n) is the error signal; d(n) is the desired output signal; N is the duration corresponding to each audio frame, its value being the filter length; k is the index of a sampling point within the audio frame; ŵ_k(n) is the filter coefficient corresponding to the k-th sampling point at the n-th iteration; n is the iteration number; x(n-k) is the observation signal at the (n-k)-th iteration;
update the iteration step size μ(n), which is computed at the n-th iteration from σ_v², the variance of the near-end signal, the input power Σ_i|x(n-i)|², and Λ(n), the misalignment at the n-th iteration, wherein N is the duration corresponding to each audio frame, its value being the filter length, and k ∈ (0, N); x(n-i) is the observation signal at the (n-i)-th iteration;
update the estimate of the filter coefficients using the formula ŵ_k(n+1) = ŵ_k(n) + μ(n)e(n)x*(n-k)/Σ_i|x(n-i)|², wherein ŵ_k(n+1) is the estimate of the filter coefficient corresponding to the k-th sampling point at the (n+1)-th iteration; μ(n) is the iteration step size; ŵ_k(n) is the estimate of the filter coefficient at the n-th iteration; N is the duration corresponding to each audio frame, its value being the filter length; x(n-i) is the observation signal at the (n-i)-th iteration; x*(n-k) is the conjugate of the observation signal at the (n-k)-th iteration; |·| is the modulus function;
calculate the desired signal at the n-th iteration using the formula d(n) = v(n) + Σ_k w_k(n)x(n-k), wherein v(n) is the near-end signal; w_k(n) is the theoretical value of the filter coefficient corresponding to the k-th sampling point at the n-th iteration; x(n-k) is the observation signal at the (n-k)-th iteration;
judge whether the desired signal at the n-th iteration converges; if so, return to the step of taking the current separated signal as the near-end signal; if not, take the desired signal at the n-th iteration as the signal after echo cancellation.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the present invention. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.