EP1057259A1 - Stable adaptive filter and method - Google Patents

Stable adaptive filter and method

Info

Publication number
EP1057259A1
Authority
EP
European Patent Office
Prior art keywords
adaptive filter
step size
linear equations
solving
calculator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP99973514A
Other languages
German (de)
French (fr)
Inventor
Heping Ding
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nortel Networks Ltd
Original Assignee
Nortel Networks Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/218,428 external-priority patent/US6754340B1/en
Application filed by Nortel Networks Ltd filed Critical Nortel Networks Ltd
Publication of EP1057259A1

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03H IMPEDANCE NETWORKS, e.g. RESONANT CIRCUITS; RESONATORS
    • H03H21/00 Adaptive networks
    • H03H21/0012 Digital adaptive filters
    • H03H21/0043 Adaptive algorithms
    • H03H2021/0049 Recursive least squares algorithm
    • H03H2021/0052 Recursive least squares algorithm combined with stochastic gradient algorithm
    • H03H2021/0054 Affine projection

Definitions

  • the present invention relates to adaptive filters, and in particular, to fast affine projection (FAP) adaptive filters providing stability of operation, and to methods of stable FAP adaptive filtering.
  • Adaptive filtering is a digital signal processing technique that has been widely used in technical areas such as echo cancellation, noise cancellation, channel equalization and system identification, and in products such as network echo cancellers, acoustic echo cancellers for full-duplex handsfree telephones and audio conference systems, active noise control systems, and data communications systems.
  • the characteristics of an adaptive filter are determined by its adaptation algorithm.
  • the choice of the adaptation algorithm in a specific adaptive filtering system directly affects the performance of the system.
  • NLMS normalized least mean square
  • LMS least mean square
  • because of its intrinsic weakness, the NLMS algorithm converges slowly with colored training signals such as speech, an important class of signals frequently encountered in many applications such as telecommunications.
  • the performance of systems incorporating NLMS adaptive filters therefore very often suffers from the slow convergence of the algorithm.
  • Other known algorithms proposed so far are either too complicated to implement on a commercially available low-cost digital signal processor (DSP) or suffer from numerical problems.
  • DSP digital signal processor
  • FAP fast affine projection
  • a method of adaptive filtering comprising the steps of:
  • (c) updating the filter coefficients, comprising: determining auto-correlation matrix coefficients from a reference input signal, and solving at least one system of linear equations whose coefficients are the auto-correlation matrix coefficients, the system being solved by using a descending iterative method having an inherent stability of its operation, the results of the solution being used for updating the filter coefficients, and the number of systems of linear equations to be solved being dependent on the normalized step size; (d) repeating the steps (b) and (c) a required number of times.
  • the normalized step size may be chosen to be equal to any value from 0 to 1 depending on the application. In the majority of applications, it is often set to be close to unity or equal to unity. Conveniently, the normalized step size is within a range from about 0.9 to 1.0. Another convenient possibility is to set the normalized step size within a range from about 0.7 to 1.0.
  • for the normalized step size close to unity, the step of solving at least one system of linear equations comprises solving one system of linear equations only. Alternatively, in some applications, e.g. when one needs to keep misadjustment low after convergence, it is required to set the normalized step size substantially less than unity, e.g. less than about 0.7; in this situation the step of solving at least one system of linear equations comprises solving N systems of linear equations, with N being a projection order.
  • in the embodiments of the invention, the problem of finding the inverse of an auto-correlation matrix, which is inherent in other known methods, is reduced to the problem of solving a system of linear equations based on the auto-correlation matrix.
  • the system is solved by one of the descending iterative methods, which provide inherent stability of operation due to an intrinsic feedback adjustment; as a result, inevitable numerical errors are not accumulated.
  • in the first and second embodiments of the invention, the steepest descent and conjugate gradient methods are used, respectively, to determine the first column of the inverse auto-correlation matrix, taking into account that the normalized step size is close to unity.
  • in a third embodiment of the invention, a steepest descent or conjugate gradient method is used to determine the coefficients of the inverse auto-correlation matrix by recursively solving N systems of linear equations having decrementing orders; this corresponds to the case where the normalized step size is not close to unity.
  • the fourth embodiment of the invention avoids determining the inverse of the auto-correlation matrix; instead, a system of linear equations is solved by using a conjugate gradient method, resulting in a solution that can be used directly to determine an updating part of the filter coefficients.
  • alternatively, other known iterative descending methods, e.g. Newton's method, PARTAN, or quasi-Newton methods, may also be used. Conveniently, the steps of the method may be performed by operating with real-valued or complex-valued numbers.
  • the method described above is suitable for a variety of applications, e.g. echo cancellation, noise cancellation, channel equalization and system identification, which are widely used in products such as network echo cancellers, acoustic echo cancellers for full-duplex handsfree telephones and audio conference systems, active noise control systems, and data communication systems.
  • an adaptive filter comprising: a filter characterized by adaptive filter coefficients; means for updating the filter coefficients, including means for setting a normalized step size, the updating means comprising: a correlator for determining auto-correlation matrix coefficients from a reference input signal, and a calculator for solving at least one system of linear equations whose coefficients are the auto-correlation matrix coefficients, the system being solved by using a descending iterative method having an inherent stability of its operation, the results of the solution being used for updating the filter coefficients and the number of systems of linear equations to be solved being dependent on the normalized step size.
  • the calculator is an iterative calculator.
  • the calculator is a steepest descent or a conjugate gradient calculator.
  • it may be a calculator performing a Newton or quasi-Newton method, a PARTAN calculator, or another known iterative descending calculator providing an inherent stability of operation.
  • the filter and the updating means are capable of operating with real numbers. Alternatively, they may be capable of operating with complex numbers.
  • the normalized step size may be chosen to be equal to any value from 0 to 1 depending on the application. In the majority of applications, the adaptive filter is often set with the normalized step size close to unity or equal to unity. Conveniently, the normalized step size is within a range from about 0.9 to 1.0. Another convenient possibility is to set the normalized step size within a range from about 0.7 to 1.0. For the normalized step size close to unity, the calculator provides iterative solution of one system of linear equations only at each time interval. Alternatively, in some applications, e.g., when one needs to keep misadjustment after convergence low, it is required to set the normalized step size substantially less than unity, e.g. less than about 0.7.
  • the calculator provides solutions of N systems of linear equations, with N being a projection order.
  • due to the symmetry of the auto-correlation matrix, determining of the inverse auto-correlation matrix may be performed by solving N systems of linear equations having decrementing orders.
  • the adaptive filter as described above may be used for echo cancellation, noise cancellation, channel equalization, system identification or other applications where adaptive filtering is required.
  • the adaptive filter and method described above have an advantage over known FAP adaptive filters by providing stability of operation.
  • the problem caused by error accumulation in the matrix inversion process of known FAP filters is solved in the present invention by using iterative descending methods.
  • first, the matrix inversion operation is reduced to the solution of a corresponding system of linear equations based on the auto-correlation matrix.
  • second, the iterative descending methods used for the solution of the above system provide an inherent stability of operation due to an intrinsic feedback adjustment; as a result, inevitable numerical errors are not accumulated, thus providing stability of adaptive filtering.
  • Figure 1 is a block diagram of an adaptive echo cancellation system;
  • Figure 2 is a block diagram of an adaptive filter according to the first embodiment of the invention;
  • Figure 3 is a block diagram of a steepest descent calculator embedded in the filter of Fig. 2;
  • Figure 4 is a block diagram of a conjugate gradient calculator embedded in an adaptive filter according to a second embodiment of the invention;
  • Figure 5 is a block diagram of an adaptive filter according to a third embodiment of the invention;
  • Figure 6 is a flow-chart illustrating an operation of a steepest descent calculator embedded in the adaptive filter of Fig. 5;
  • Figure 7 is a flow-chart illustrating an operation of a conjugate gradient calculator embedded in the adaptive filter of Fig. 5;
  • Figure 8 is a block diagram of an adaptive filter according to a fourth embodiment of the invention;
  • Figure 9 is a block diagram of a conjugate gradient calculator embedded in the adaptive filter of Fig. 8.
  • underscored letters, such as d(n) and X(n), stand for column vectors, and bold-faced ones, like X(n), are matrices.
  • d(n) with a bar above it stands for an N−1 vector consisting of the N−1 uppermost elements of the N-vector d(n).
  • d(n) with a bar below it stands for an N−1 vector consisting of the N−1 lowermost elements of the N-vector d(n).
  • a superscript "T" stands for the transposition of a matrix or vector.
  • FIG. 1 presents a block diagram of an adaptive echo cancellation system 10 with an embedded adaptive filter 100, the echo cancellation being chosen as an exemplary representation of a wide class of adaptive filtering applications .
  • a digitally sampled far-end reference input signal x(n) is supplied to the adaptive filter 100 and to an echo path 14 producing an unwanted signal u(n), the signal being an echo of x(n) through the echo path 14.
  • the echo path 14 can be either a long electrical path, e.g. in a telecommunication network, or an acoustical path, e.g. in a room.
  • An echo canceller may be used together with a telecommunication network switch or a speakerphone.
  • the unwanted signal u(n) is mixed with the wanted near-end signal s(n) in a summer 16 to produce a response signal d(n).
  • the response signal d(n) is sent to another summer 18 together with an echo estimate signal y(n) generated by the adaptive filter 100.
  • the summer 18 subtracts y(n) from d(n), producing an output signal e(n) to be transmitted to the far-end.
  • since the echo path is constantly changing, the adaptive filter must be able to continuously adapt to the new echo path; the goal is therefore to produce the echo estimate signal y(n) as close to u(n) as possible, so that the latter is largely cancelled by the former and e(n) best resembles s(n).
  • the output signal e(n), called the error signal, is then transmitted to the far-end and also sent to the adaptive filter 100, which uses it to adjust its coefficients.
  • depending on a particular application, the terms "far-end" and "near-end" may need to be interchanged.
  • for example, with a network echo canceller in a telephone terminal, x(n) in Figure 1 is actually the near-end signal to be transmitted to the far-end, and d(n) in Figure 1 is the signal received from the telephone loop connected to the far-end.
  • although the terminology used above is based on the assumption that x(n) is the far-end signal and d(n) is the signal perceived at the near-end, this is done solely for convenience and does not prevent the invention from being applied to other adaptive filter applications with alternate terminology.
  • the following L-dimensional column vectors are defined as the reference input vector and the adaptive filter coefficient vector respectively, where L is the length of the adaptive filter: X(n) ≡ [x(n) x(n−1) ... x(n−L+1)]ᵀ and W(n) ≡ [w₀(n) w₁(n) ... w_{L−1}(n)]ᵀ (Equation 1).
  • the part for convolution and subtraction, which derives the output of the adaptive echo cancellation system, can then be expressed as e(n) = d(n) − y(n) = d(n) − Xᵀ(n)W(n) (Equation 2), where the superscript "T" stands for transpose of a vector or matrix.
  • W(n+1) = W(n) + 2μ(n)e(n)X(n), with μ(n) = α / (Xᵀ(n)X(n) + δ) (Equation 3).
  • in Equation (3), μ(n) is called the adaptation step size, which controls the rate of change to the coefficients; α is a normalized step size, and δ, being a small positive number, prevents μ(n) from growing too big when there is no or little reference signal x(n).
  • the computations required in the NLMS filter include 2L+2 multiply and accumulate (MAC) operations and 1 division per sampling interval.
  • LMS least mean square
  • the affine projection method is a generalization of the NLMS method. With N being a so-called projection order, we define
  • Equation 4: where d(n) and e(n) are N-vectors and X(n) is an L×N matrix. Usually N is much less than L, so that X(n) has more of a "portrait" than a "landscape" shape. Note that e(n) in Equation (4) is the a priori error vector; all its elements, including e(n−1), ..., e(n−N+1), depend on W(n), as indicated in Equation (5) below.
  • in Equation (5), W(n) is as defined in Equation (1).
  • W(n+1) = W(n) + αX(n)ε(n), with ε(n) = P(n)e(n) and P(n) = [Xᵀ(n)X(n) + δI]⁻¹ (Equation 6),
  • where I is the N×N identity matrix, and α and δ play similar roles as described with regard to Equation (3): α is the normalized step size, which may have a value from 0 to 1 and very often is assigned a unity value; δ is a regularization factor that prevents R(n), the auto-correlation matrix, from becoming ill-conditioned or rank-deficient, in which case P(n) would have too-large eigenvalues, causing instability of the method. It can be seen that an N×N matrix inversion operation is needed at each sampling interval in the AP method.
  • the AP method offers a good convergence property but is computationally very expensive: it needs 2LN + O(N²) MACs at each sampling interval. For example, for N equal to 5, which is a reasonable choice for many practical applications, the AP is more than 5 times as complex as the NLMS.
  • the FAP method consists of two parts:
  • (a) an approximation, shown in Equation (7) below, and certain simplifications to reduce the computational load;
  • the approximation in Equation (7) uses scaled a posteriori errors to replace the a priori ones.
  • the matrix inversion may be performed by using different approaches.
  • One of them is a so-called "sliding windowed fast recursive least squares (FRLS)" approach, outlined in US patent 5,428,562 to Gay, to recursively calculate P(n) in Equation (6); this results in a total computational requirement of 2L+14N MACs and 5 divisions.
  • in another approach, the matrix inversion lemma is used twice to derive P(n) at sampling interval n; see, e.g., the publication by Q. G. Liu, B. Champagne, and K. C. Ho cited below.
  • once numerical errors have accumulated in P(n), the first expression in Equation (6) shows that the coefficient vector W(n) will no longer be updated properly; that is, W(n) can be updated in wrong directions, causing the adaptive filtering system to fail.
  • a proposed remedy is to periodically re-start a new inversion process, either sliding windowed FRLS or conventional RLS based, in parallel with the old one, and to replace the old one so as to get rid of the accumulated numerical errors in the latter. While this can be a feasible solution for high-precision DSPs such as a floating-point processor, it is still not suitable for fixed-point DSP implementations, because then the finite precision numerical errors would accumulate so fast that the re-starting period would have to be made impractically short.
  • Equation 8: where P(n) (a vector) is the very first, i.e., leftmost, column of the matrix P(n).
  • it is shown in the article by Q. G. Liu et al. cited above that, even with an α slightly less than that range, say about 0.7, the approximation is still acceptable.
  • thus only N, rather than all the N², elements of P(n) need to be determined.
  • R(n)P(n) = b (Equation 10), where R(n) is symmetric and positive definite according to its definition in Equation (9), P(n) here denotes the first column of the inverse of the matrix R(n), and b is an N-vector with all its elements zero except the very first, which is unity.
  • determining an updating part of the filter coefficients may be performed either by solving directly for ε(n) (second line of Equation (6)) or by first determining the needed part of the inverse auto-correlation matrix.
  • a method of adaptive filtering implemented in an adaptive filter 100 according to the first embodiment of the invention uses an iterative "steepest descent" technique to solve Equation (10).
  • steepest descent is a technique that seeks the minimum point of a certain quadratic function iteratively. At each iteration (the same as a sampling interval in our application), it takes three steps consecutively: computing the gradient of the function, computing the optimum step size, and adjusting the parameter vector along the negative gradient (Equations (15), (16) and (17) below).
  • the steepest descent reaches the unique minimum of the quadratic function, where the gradient is zero, and continuously tracks the minimum if it moves. Details about the steepest descent method can be found, for example, in the book by David G. Luenberger (Stanford University), Linear and Nonlinear Programming, Addison-Wesley Publishing Company, 1984.
  • the implied quadratic function is f(P(n)) = ½Pᵀ(n)R(n)P(n) − Pᵀ(n)b (Equation 11), whose gradient with respect to P(n) is R(n)P(n) − b (Equation 12).
  • R(n) must be symmetric and positive definite in order for the steepest descent technique to be applicable; this happens to be our case. Seeking the minimum, where the gradient vanishes, is equivalent to solving Equation (10).
  • the steepest descent is also able to track the minimum point if it moves, as is the case with a non-stationary input signal x(n).
  • the stable FAP (SFAP) method which uses the steepest descent technique includes the following steps: Initialization:
  • R(n) = R(n−1) + ξ(n)ξᵀ(n) − ξ(n−L)ξᵀ(n−L) (Equation 14)
  • R(n) (a vector) denotes the first column of the matrix R(n);
  • R(n) with a bar below it is an N−1 vector that consists of the N−1 lowermost elements of the N-vector R(n);
  • η(n) with a bar above it is an N−1 vector that consists of the N−1 uppermost elements of the N-vector η(n).
  • a prior-art FAP determines P(n) based on P(n−1) and the new incoming data X(n) only, without examining how well P actually approximates R⁻¹(n); therefore inevitable numerical errors will accumulate and eventually make the system collapse.
  • the feedback provided by a stable descending method, as used in our invention, uses Equation (15) to examine how well P(n−1), or the needed part of it, approximates R⁻¹(n), or its corresponding part. The adjustments are then performed in Equations (16) and (17) accordingly to derive P(n), or the needed part of it. As just mentioned, this examination is done by evaluating g(n) in Equation (15) as the feedback error.
  • g(n) in Equation (15) is the gradient of the implied quadratic function (Equation (11)).
  • η(n) is the optimum step size for the parameter vector adjustment, which is made in Equation (17).
  • the total computational requirement of the Stable FAP method according to the first embodiment of the invention is 2L + 2N² + 7N − 1 MACs and 1 division. Note that, for the steepest descent technique to work adequately for the purpose of adaptive filtering, the projection order N has to be chosen to assure that the steepest descent converges faster than the adaptive filter coefficients do; the required pre-determined value of N will depend on the particular adaptive filtering application. A sketch of this recursion is given below.
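A minimal NumPy sketch of the recursion just described, following our reading of Equations (14)-(17); the function and variable names are ours, and b = [1 0 ... 0]ᵀ as in Equation (10):

```python
import numpy as np

def update_R(R, xi_new, xi_old):
    """Recursive update of the N x N auto-correlation matrix,
    R(n) = R(n-1) + xi(n) xi(n)^T - xi(n-L) xi(n-L)^T  (Equation (14))."""
    return R + np.outer(xi_new, xi_new) - np.outer(xi_old, xi_old)

def steepest_descent_step(R, P_col):
    """One steepest-descent iteration on R(n) P(n) = b (Equation (10)).
    P_col is the current estimate of the first column of the inverse of
    R(n); the gradient g(n) serves as the feedback error, so numerical
    errors are corrected instead of accumulating."""
    b = np.zeros(len(P_col)); b[0] = 1.0
    g = R @ P_col - b              # Equation (15): gradient / feedback error
    denom = g @ (R @ g)            # g^T R g
    if denom <= 0.0:               # guard against a vanishing denominator
        return P_col
    eta = (g @ g) / denom          # Equation (16): optimum step size
    return P_col - eta * g         # Equation (17): parameter adjustment
```

One such iteration is performed per sampling interval, which is why N must be chosen so that this recursion converges faster than the filter coefficients themselves.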
  • An adaptive filter 100 according to the first embodiment of the invention and operating in accordance with the method described above is shown in Figure 2. It includes a filter 102 characterized by adaptive filter coefficients W(n), and means 104 for updating the coefficients, the means being set with a normalized step size α close to its maximal value, i.e. unity.
  • the filter 102 is a finite impulse response (FIR) filter which receives the reference input signal x(n) and an auxiliary signal f(n) used for updating the coefficients, and generates a provisional echo estimate signal PR(n).
  • the updating means 104 includes a correlator 106 for recursively determining an auto-correlation signal presented in the form of auto-correlation matrix coefficients R(n) based on the reference input signal x(n), and a calculator 108 for generating projection coefficients P(n), the projection coefficients being part of the coefficients of the inverse of the auto-correlation matrix.
  • the calculator 108 defines projection coefficients by using an iterative steepest descent method having an inherent stability of operation as illustrated in detail above.
  • the projection coefficients are used within the updating means 104 for generating the auxiliary filter adaptation signal f(n) and an echo estimate correction signal EC(n) (see Equation (34) below).
  • the latter is used together with the provisional echo estimate PR(n) to produce the echo estimate signal y(n) .
  • a convention in Fig. 2 is the use of a thick line to represent the propagation of a matrix or vector signal, i.e., with more than one component, and the use of a thin line to stand for a scalar signal propagation.
  • a correlator 106 determines the autocorrelation matrix R(n) in accordance with Equation (14), using the current and past x(n) samples.
  • an "η(n) calculator" 110 calculates η(n) based on Equation (22); as shown in Fig. 2, η(n) is not used by the updating means 104 until the next sampling interval.
  • the filter 102 produces the convolutional sum Wᵀ(n)X(n).
  • η_{N−1}(n−1) is obtained from η_{N−1}(n) by putting the latter through a unit delay element 111, providing a delay of one sampling interval, and is further multiplied by the step size α in a Multiplier 113; the result is used for updating the adaptive filter coefficients (Equation (18)).
  • the delayed η vector is dot-multiplied with part of R(n) by a Dot multiplier 112, and the result is further multiplied by the step size α in a multiplier 114 to form the correction term, which is added to Wᵀ(n)X(n) by the summer 116 to form the filter output y(n) (Equation (19)).
  • the summer 18 calculates the error, or the output, e(n), as in Equation (20).
  • the scalar-vector multiplier 118 derives ε(n) in accordance with Equation (21).
  • a steepest descent calculator 108 is shown in detail in Figure 3. Thick lines represent the propagation of a matrix or vector signal, i.e., with more than one component, and thin lines stand for scalar signal propagation.
  • the autocorrelation matrix R(n) and the vector P(n−1), which is a part of the estimated inverse of R(n−1), are multiplied in a Matrix-vector multiplier 130.
  • the constant vector [1 0 ... 0]ᵀ is then subtracted from the vector product in a Summer 132 to produce the gradient vector g(n), which contains the feedback error information about using P(n−1) as the estimated inverse of R(n); these two blocks implement Equation (15).
  • the squared norm of g(n) is then found by dot-multiplying g(n) with itself in a Dot multiplier 134; it is used as the numerator in calculating η(n) in Equation (16).
  • a Matrix-vector multiplier 136 finds the vector product between the autocorrelation matrix R(n) and the gradient vector g(n). This vector product is then dot-multiplied with g(n) in another Dot multiplier 138 to produce the denominator in calculating η(n) in Equation (16).
  • this denominator is reciprocated in a Reciprocator 140, and then scalar-multiplied with the aforementioned numerator in a scalar multiplier 142 to produce η(n); this is the only place where a division operation is performed. Finally, η(n) is multiplied with the gradient g(n) in a scalar-vector multiplier 144 to form the correction term to P(n−1). This correction term is then subtracted from P(n−1) in a Vector Summer 146 to derive P(n) in accordance with Equation (17). P(n−1) is obtained from P(n) by using a unit delay element 148, providing a delay of one sampling interval.
  • two implementations have been studied: the first one is a floating-point module, and the second one is a 16-bit fixed-point DSP implementation.
  • also used for comparison were a floating-point module simulating the NLMS acoustic echo canceller design in Venture, a successful full-duplex handsfree telephone terminal product by Nortel Networks Corporation, and a benchmark floating-point module that repeats the prior art FAP scheme by Q. G. Liu, B. Champagne, and K. C. Ho (Bell-Northern Research and INRS-Telecommunications, Universite du Quebec), "On the Use of a Modified Fast Affine Projection Algorithm in Subbands for Acoustic Echo Cancellation," pp. 354 - 357, Proceedings of 1996 IEEE Digital Signal Processing Workshop, Loen, Norway, September 1996.
  • the source files are speech files with Harvard sentences (Intermediate Reference System filtered or not) sampled at 8 kHz, and a white noise file. Out of the source files, certain echo files have been produced by filtering the source ones with certain measured, 1200-tap, room impulse responses. These two sets of files act as x(n) and d(n) respectively.
  • the major simulation results are as follows.
  • the output e(n) in the steepest descent embodiment converges at approximately the same speed as the benchmark prior art FAP and reaches the same steady-state echo cancellation depth as the prior art FAP and the NLMS.
  • the SFAP according to the first embodiment of the invention outperforms the NLMS filter; with speech training, it converges in about 1 second, while it takes the NLMS filter about 7 to 8 seconds to do so.
  • a method of adaptive filtering according to a second embodiment of the present invention uses an iterative "conjugate gradient" technique to solve Equation (10), the corresponding calculator being shown in Figure 4.
  • conjugate gradient is a technique that also seeks the minimum point of a certain quadratic function iteratively. Conjugate gradient is closely related to the steepest descent scheme discussed above; it differs from the steepest descent in that it is guaranteed to reach the minimum in no more than N steps, with N being the order of the system. That is, conjugate gradient usually converges faster than the steepest descent. At each iteration (the same as a sampling interval in our application), the conjugate gradient takes five steps consecutively: computing the gradient, computing the factor β(n) for updating the direction vector, updating the direction vector s(n), computing the optimum step size α(n), and adjusting the parameter vector (Equations (26), (27), (28), (31) and (32) below).
  • conjugate gradient modifies the negative gradient to determine an optimized direction.
  • the scheme reaches the unique minimum of the quadratic function, where the gradient is zero, in no more than N steps.
  • the conjugate gradient technique also continuously tracks the minimum if it moves, such as in the case of a non-stationary input signal x(n). Details about the conjugate gradient algorithm can be found, for example, in the book by David G. Luenberger (Stanford University), Linear and Nonlinear Programming, Addison-Wesley Publishing Company, 1984.
  • the implied quadratic function is again Equation (11), whose gradient with respect to P(n) is also Equation (12).
  • R(n) must be symmetric and positive definite in order for the conjugate gradient technique to apply; this happens to be our case. Seeking the minimum, where the gradient vanishes, is equivalent to solving Equation (10).
  • the conjugate gradient is also able to track the minimum point if it moves, as is the case with a non-stationary input signal x(n).
  • the SFAP method according to the second embodiment which uses the conjugate gradient technique, includes the following steps: Initialization:
  • Equation 25: where ξ(n) is defined in Equation (23) above; the projection coefficients are determined by solving the system of linear equations (10) using the conjugate gradient technique, the projection coefficients being the first-column coefficients of the inverse of the auto-correlation matrix:
  • Equation 37: where R(n) (a vector) is the first column of the matrix R(n), R(n) with a bar below it is an N−1 vector that consists of the N−1 lowermost elements of the N-vector R(n), and η(n) with a bar above it is an N−1 vector that consists of the N−1 uppermost elements of the N-vector η(n).
  • Equations (26), (27), (28), (31) and (32) respectively correspond to the five steps of the conjugate gradient technique discussed earlier in this section.
  • g(n) is the gradient of the implied quadratic function.
  • β(n) is the optimum factor for updating the direction vector s(n).
  • α(n) is the optimum step size for the parameter vector adjustment, which is made in Equation (32).
  • the total computational requirement of the Stable FAP method according to the second embodiment of the invention is 2L + 2N² + 9N + 1 MACs and 1 division. It should also be ensured that the conjugate gradient converges faster than the adaptive filter coefficients do. A sketch of one conjugate gradient iteration is given below.
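Following the dataflow of the calculator 208 (Equations (26)-(32) as we read them from the text), one conjugate gradient iteration per sampling interval can be sketched as follows; the state s, b_aux and r_srs is carried over between intervals, and all names are ours:

```python
import numpy as np

def conjugate_gradient_step(R, P_col, s, b_aux, r_srs):
    """One conjugate gradient iteration on R(n) P(n) = b (Equation (10)),
    where P_col estimates the first column of the inverse of R(n).
    s      -- direction vector s(n-1) from the previous interval
    b_aux  -- auxiliary vector b(n-1) = R(n-1) s(n-1)
    r_srs  -- reciprocal 1 / (s^T b) from the previous interval"""
    rhs = np.zeros(len(P_col)); rhs[0] = 1.0
    g = R @ P_col - rhs            # Equation (26): gradient / feedback error
    beta = r_srs * (g @ b_aux)     # Equation (27): direction-update factor
    s = -g + beta * s              # Equation (28): new direction vector
    b_aux = R @ s                  # Equation (29): auxiliary vector b(n)
    r_srs = 1.0 / (s @ b_aux)      # Equation (30): the only division
    alpha = -r_srs * (g @ s)       # Equation (31): optimum step size
    P_col = P_col + alpha * s      # Equation (32): adjust the parameter vector
    return P_col, s, b_aux, r_srs
```

On the very first interval, s, b_aux and r_srs may be initialized to zero, which reduces Equation (28) to s = −g, the usual conjugate gradient start.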
  • An adaptive filter according to the second embodiment of the invention is similar to that of the first embodiment shown in Figure 2, except that the calculator 108 now operates in accordance with the conjugate gradient technique and is designated by numeral 208 in Figure 4.
  • the conjugate gradient calculator 208 embedded in the adaptive filter of the second embodiment is shown in detail in Figure 4. Thick lines represent the propagation of a matrix or vector signal, i.e., with more than one component, and the use of a thin line stands for a scalar signal propagation.
  • the autocorrelation matrix R(n) and the vector P(n−1), part of the estimated inverse of R(n−1), are multiplied in a Matrix-vector Multiplier 210.
  • the constant vector [1 0 ... 0]ᵀ is subtracted from the resulting vector product in a Summer 212 to produce the gradient vector g(n), which contains the feedback error information about using P(n−1) as the estimated inverse of R(n).
  • the Matrix-vector Multiplier 210 and the Summer 212 implement Equation (26) above.
  • the gradient g(n) is further dot-multiplied with b(n−1), an auxiliary vector found in the last sampling interval, in a Dot Multiplier 214.
  • the resulting scalar product is multiplied by r_srs(n−1) in a Multiplier 216 to produce β(n), a factor to be used in adjusting s(n−1), the direction vector for adjusting P(n).
  • the part of the diagram described in this paragraph implements Equation (27) shown above. With β(n), g(n), and s(n−1) available, s(n−1) is then updated into s(n) by using yet another unit delay element 222, with a delay of one sampling interval, a scalar-vector Multiplier 224 and a Vector Summer 226, which implement the operations shown in Equation (28) above.
  • the auxiliary vector b(n), to be used in the next sampling interval, is calculated as the product between R(n) and s(n) in another Matrix-vector Multiplier 230; this implements Equation (29) above.
  • the vector b(n) is then dot-multiplied with s(n) in yet another Dot multiplier 232, and the scalar product is reciprocated in a Reciprocator 234 to produce r_srs(n) (Equation (30)); this is where the only division operation is.
  • g(n) and s(n) are dot-multiplied, and the result, being a scalar product, is multiplied with −r_srs(n) to derive α(n), thus implementing Equation (31) above.
  • once α(n) is available, it is multiplied with s(n) in another scalar-vector Multiplier 240 to form the correction term to P(n−1), which is then added to P(n−1) in a Vector Summer 242 in order to derive P(n) (Equation (32) above).
  • the output e(n) in the conjugate gradient embodiment converges at approximately the same speed as the benchmark prior art FAP and reaches the same steady-state echo cancellation depth as the benchmark prior art FAP and the NLMS.
  • the SFAP according to the second embodiment of the invention also outperforms the NLMS filter in terms of convergence speed.
  • a method of adaptive filtering according to a third embodiment of the present invention provides adaptive filtering when the normalized step size has any value from 0 to 1. It updates the adaptive filter coefficients by iteratively solving a number of systems of linear equations having decrementing orders to determine the inverse auto-correlation matrix, in a manner described below.
  • Equations (39) and (40) imply that P is also the inverse of R; since the inverse of a matrix is unique, the only possibility is P(n) = R⁻¹(n) = Pᵀ(n) (Equation 41), so that R(n)P(n) = I (Equation 42).
  • Equation (42) can be written in the scalar form Σ_k r_ik(n) p_kj(n) = δ_ij (Equation 43), where r_ik(n) is the element of R(n) on row i and column k, p_kj(n) is the element of P(n) on row k and column j, and δ_ij, the Kronecker delta, is defined as δ_ij = 1 for i = j and δ_ij = 0 otherwise (Equation 44).
  • Equation (45) coincides with Equation (10) derived earlier and applied to the first and second embodiments of the invention.
  • the right-hand side of Equation (45) or Equation (46) tells us that P(n) (a vector) is the left-most column of P(n) and, based on Equation (41), its transpose is also the upper-most row of P(n). According to the first and second embodiments of the invention discussed above, this part will cost 2N² + 3N MACs and 1 division with steepest descent, or 2N² + 5N + 2 MACs and 1 division with conjugate gradient.
  • Equation (47) can be re-arranged to become Equation (48), and these N−1 unknowns can be uniquely determined by only N−1 equations.
  • Equation (49) has the same format as Equation (45), except that the order is reduced by one. Equation (49) can also be solved by using either of the two approaches presented above, costing 2(N−1)² + 4(N−1) MACs and 1 division with steepest descent, or 2(N−1)² + 6(N−1) + 2 MACs and 1 division with conjugate gradient, where the added (N−1) in each of the two expressions accounts for the extra computations needed to calculate the right-hand side of Equation (49).
  • Equation (45) and Equation (49) are just special cases of Equation (50), and {p_kj(n), k = j, j+1, ..., N−1}, found in recursion step j, form a column vector P_j(n), which consists of the lower N−j elements of the j-th (0 ≤ j ≤ N−1) column of P(n).
  • the process of Equation (50) will take N divisions and a total number of MACs given by Equation (51) for the steepest descent method, and N divisions and [2N² + 5N + 2] + [2(N−1)² + 6(N−1) + 2] + [2(N−2)² + 7(N−2) + 2] + ... MACs for the conjugate gradient method.
  • the SFAP method according to the third embodiment of the invention includes the following steps: Initialization:
  • initialization is performed according to Equation (54); updating the adaptive filter coefficients in sampling interval n includes the steps shown in Equation (55) below.
  • the designations used in Equation (55) are as follows: ξ(n) is defined in Equation (23) above, R(n) (a vector) is the first column of the matrix R(n), R(n) with a bar below it is an N−1 vector that consists of the N−1 lowermost elements of the N-vector R(n), and η(n) with a bar above it is an N−1 vector that consists of the N−1 uppermost elements of the N-vector η(n).
  • any division operation in the second expression of Equation (55) is not performed if the denominator is not greater than zero; in that case, a zero is assigned to the quotient. A sketch of the column-by-column recursion follows.
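For step sizes not close to unity, all N columns of the inverse are needed. The sketch below shows our reading of the recursion of Equations (45)-(50): column 0 of P(n) is obtained with a descending solver, and each later column j reuses the elements already known from the symmetry of P(n) (Equation (41)), leaving only an order-(N−j) system on the trailing submatrix of R(n). The helper solve_descending stands in for the steepest descent or conjugate gradient recursions sketched above; all names are ours:

```python
import numpy as np

def solve_descending(A, rhs, x0, iters=1):
    """Stand-in descending solver: a few steepest-descent iterations
    on A x = rhs, started from the previous interval's solution x0."""
    x = x0.copy()
    for _ in range(iters):
        g = A @ x - rhs
        denom = g @ (A @ g)
        if denom <= 0.0:
            break
        x = x - ((g @ g) / denom) * g
    return x

def update_inverse_columns(R, P_prev):
    """Third embodiment, as we read Equations (45)-(50): update all N
    columns of P ~= R^{-1} by solving N systems of decrementing orders."""
    N = R.shape[0]
    I = np.eye(N)
    P = P_prev.copy()
    P[:, 0] = solve_descending(R, I[:, 0], P[:, 0])   # order-N system, Eq. (45)
    for j in range(1, N):
        P[:j, j] = P[j, :j]          # upper j elements known by symmetry of P
        rhs = I[j:, j] - R[j:, :j] @ P[:j, j]
        # order-(N-j) system on the trailing submatrix, cf. Equations (49)-(50)
        P[j:, j] = solve_descending(R[j:, j:], rhs, P[j:, j])
    return P
```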
  • the filter 300 also differs from the filter 100 by the following features: the normalized step size may have any value from 0 to 1.0; the calculator 308 now has a more extended structure for consecutively determining the columns of the inverse auto-correlation matrix in accordance with the steepest descent technique; and an e(n) calculator 320 is added.
  • the routine 402 sets an initial value to index j (block 404) which is submitted together with the auto-correlation matrix R(n) (block 406) to a projection coefficient column calculator (block 408) .
  • the calculator provides a steepest descent iteration in accordance with Equation (50) for the current value of index j, thus updating the corresponding column of projection coefficients from the previous sampling interval (block 408) .
  • the updated column of the projection coefficients is sent to a storage means (routine 410, block 412) to be stored until the other columns of P(n) are calculated.
  • thick lines represent the propagation of a matrix or vector signal, i.e., with more than one component, and the use of a thin line stands for a control propagation.
  • the steepest descent calculator 308 may be replaced with a conjugate gradient calculator.
  • the corresponding structure is illustrated by a flow-chart 500 shown in Figure 7, where blocks similar to those of Figure 6 are designated by the same reference numerals incremented by 100. It operates in a manner described above with regard to Figure 6.
  • a method of adaptive filtering according to a fourth embodiment of the present invention also provides adaptive filtering when the normalized step size has any value from 0 to 1. It updates the adaptive filter coefficients by iteratively solving a system of linear equations in a manner that avoids the explicit matrix inversion performed in the third embodiment of the invention. The details are described below.
  • the second equation from the set of Equations (6), which is reproduced for convenience as Equation (56) below, is equivalent to a system of linear equations in ε(n).
  • Equation 56: it is thus possible to obtain ε(n), required for updating the adaptive filter coefficients, directly from the set of linear equations (56), which is solved again by one of the descending iterative methods.
  • the SFAP method of the fourth embodiment of the invention includes the following steps:
  • α(k+1) = −r_srs(k+1) gᵀ(k+1) s(k+1)   [N×(N+1)]
  • ε(k+1) = ε(k) + α(k+1) s(k+1)   [N×N]
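Reading Equation (56) as the regularized system [Xᵀ(n)X(n) + δI] ε(n) = e(n) (the second line of Equation (6) without the explicit inverse), a hedged sketch of the fourth embodiment runs N conjugate gradient iterations within one sampling interval, after which the output switch 754 closes and ε(n) is released; all names are ours:

```python
import numpy as np

def solve_epsilon(R, e_vec, delta=1e-6):
    """Fourth embodiment, as we read it: obtain eps(n) directly from
    [R(n) + delta*I] eps(n) = e(n) by N conjugate gradient iterations
    inside one sampling interval; no inverse of R(n) is ever formed."""
    N = R.shape[0]
    A = R + delta * np.eye(N)
    x = np.zeros(N)                # eps estimate
    s = np.zeros(N)                # direction vector
    b_aux = np.zeros(N)
    r_srs = 0.0
    for k in range(N):             # switch 754 closes after N iterations
        g = A @ x - e_vec          # gradient / feedback error
        beta = r_srs * (g @ b_aux) # zero on the first iteration
        s = -g + beta * s
        b_aux = A @ s
        denom = s @ b_aux
        if denom <= 0.0:           # cf. the zero-quotient rule above
            break
        r_srs = 1.0 / denom
        x = x + (-r_srs * (g @ s)) * s
    return x

# The coefficient update then follows the first line of Equation (6):
#     W = W + alpha * (X @ eps)
```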
  • An adaptive filter 600 according to the fourth embodiment of the invention is shown in detail in Figure 8. It includes a filter 602 characterized by adaptive filter coefficients W(n), and means 604 for updating the coefficients, the means being set with a normalized step size α having any value in a range from 0 to 1.0.
  • the filter 602 is a finite impulse response (FIR) filter which receives a reference input signal x(n) and an auxiliary signal f(n) used for updating the coefficients, and generates a provisional echo estimate signal PR(n).
  • FIR finite impulse response
  • the updating means 604 includes a correlator 606 for recursively determining an auto-correlation signal presented in the form of auto-correlation matrix coefficients R(n) based on the reference input signal x(n), an ε(n) calculator 608 and an e(n) calculator 620 for the corresponding calculation of the vectors ε(n) and e(n).
  • the calculator 608 determines ε(n) by using an iterative conjugate gradient method having an inherent stability of operation, as illustrated in detail above.
  • these results are used within the updating means 604 for generating the auxiliary filter adaptation signal f(n) and an echo estimate correction signal EC(n). The latter is used together with the provisional echo estimate PR(n) to produce the echo estimate signal y(n).
  • in Fig. 8, thick lines represent the propagation of a matrix or vector signal, i.e., a signal with more than one component, and thin lines stand for scalar signal propagation.
  • a correlator 606 determines the autocorrelation matrix R(n) in accordance with the first formula of Equation (59), using the current and past x(n) samples.
  • an "η(n) calculator" 610 calculates η(n) based on the last formula of Equation (59); as shown in Fig. 8, η(n) is not used by the updating means 604 until the next sampling interval.
  • the filter 602 produces the convolutional sum Wᵀ(n)X(n).
  • η_{N−1}(n−1) is obtained from η_{N−1}(n) by putting the latter through a unit delay element 611, providing a delay of one sampling interval, and is further multiplied by the step size α in a Multiplier 613.
  • the ε(n) calculator 608 solves the sixth equation of Equation (59) for ε(n) by a conjugate gradient method, thus providing sufficient data for updating the adaptive filter coefficients (Equation (6), first formula).
  • in Fig. 9, thick lines represent the propagation of a matrix or vector signal, i.e., with more than one component, and thin lines stand for scalar signal propagation.
  • the calculator 608 additionally includes an output switch 754 which automatically opens at the beginning of the sampling interval and closes at the end of the N conjugate gradient iterations. Modifications described with regard to the first two embodiments are equally applicable to the third and fourth embodiments of the invention.
  • thus, an adaptive filter and a method providing stability of adaptive filtering based on feedback adjustment are provided.

Landscapes

  • Filters That Use Time-Delay Elements (AREA)
  • Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)

Abstract

Stable adaptive filter and method are disclosed. The invention solves a problem of instability associated with Fast Affine Projection adaptive filters caused by error accumulation in an inversion process of an auto-correlation matrix. The Stable FAP provides updating of the adaptive filter coefficients by solving at least one system of linear equations, whose coefficients are the auto-correlation matrix coefficients, by using one of the descending iterative methods having an inherent stability of operation due to intrinsic feedback adjustment. The results of the solution are used to update the filter coefficients. The above approach is applicable for any value of a normalized step size ranging from zero to unity. It allows either direct determining of an updating part of the filter coefficients without determining an inverse auto-correlation matrix, or determining the inverse auto-correlation matrix by descending iterative methods.

Description

STABLE ADAPTIVE FILTER AND METHOD
FIELD OF THE INVENTION
This application relates to U.S. Patent Application Serial No. 09/218,428 filed on December 22, 1998, and to
U.S. Patent Application Serial No. 09/356,041 filed on
July 16, 1999. The present invention relates to adaptive filters, and in particular, to fast affine projection
(FAP) adaptive filters providing stability of operation, and methods of stable FAP adaptive filtering.
BACKGROUND OF THE INVENTION
Adaptive filtering is a digital signal processing technique that has been widely used in technical areas such as, e.g., echo cancellation, noise cancellation, channel equalization, system identification and in products like, e.g., network echo cancellers, acoustic echo cancellers for full-duplex handsfree telephones and audio conference systems, active noise control, data communications systems.
The characteristics of an adaptive filter are determined by its adaptation algorithm. The choice of the adaptation algorithm in a specific adaptive filtering system directly affects the performance of the system. Being simple and easily stable, the normalized least mean square (NLMS) adaptation algorithm, being a practical implementation of the least mean square (LMS) algorithm, is now most widely used in the industry, with a certain degree of success. However, because of its intrinsic weakness, the NLMS algorithm converges slowly with colored training signals like speech, an important class of signals most frequently encountered in many applications such as telecommunications. The performance of systems incorporating NLMS adaptive filters therefore very often suffers from the slow convergence of the algorithm. Other known algorithms proposed so far are either too complicated to implement on a commercially available low-cost digital signal processor (DSP) or suffer from numerical problems. Recently, a fast affine projection (FAP) method was proposed, as described in a publication by Steven L. Gay and Sanjeev Tavathia (Acoustic Research Department, AT&T Bell Laboratories), "The Fast Affine Projection Algorithm," pp. 3023 - 3026, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, May 1995, Detroit, Michigan, U.S.A. The FAP is a simplified version of the more complicated, and therefore less practical, affine projection (AP) algorithm. With colored training signals such as speech, the FAP usually converges several times faster than the NLMS, with only a marginal increase in implementation complexity.
However, a stability issue has been preventing FAP from being used in the industry. A prior art FAP implementation oscillates within a short period of time even with floating-point calculations. This results from the accumulation of finite precision numerical errors in a matrix inversion process associated with the FAP. Researchers have been trying to solve this problem, but no satisfactory answer has been found so far. A remedy proposed in the publication listed above, and reinforced in the publication by Q. G. Liu, B. Champagne, and K. C. Ho (Bell-Northern Research and INRS-Telecommunications, Universite du Quebec), "On the Use of a Modified Fast Affine Projection Algorithm in Subbands for Acoustic Echo Cancellation," pp. 354 - 357, Proceedings of 1996 IEEE Digital Signal Processing Workshop, Loen, Norway, September 1996, is to periodically re-start a new inversion process in parallel with the old one, and to use it to replace the latter so as to get rid of the accumulated numerical errors therein. While this can be a feasible solution for high-precision DSPs such as a floating-point processor, it is still not suitable for fixed-point DSP implementations, because then the finite precision numerical errors would accumulate so fast that the re-starting period would have to be made impractically small, not to mention the extra complexity associated with this part of the algorithm.
Therefore there is a need in the industry for the development of alternative adaptive filtering methods which would ensure stability of operation while providing fast convergence and reliable results.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide an adaptive filter and a method of adaptive filtering which would avoid the afore-mentioned problems.
According to one aspect of the present invention there is provided a method of adaptive filtering, comprising the steps of:
(a) determining adaptive filter coefficients; (b) defining a normalized step size;
(c) updating the filter coefficients, comprising: determining auto-correlation matrix coefficients from a reference input signal, and solving at least one system of linear equations whose coefficients are the auto-correlation matrix coefficients, the system being solved by using a descending iterative method having an inherent stability of its operation, the results of the solution being used for updating the filter coefficients and the number of systems of linear equations to be solved being dependent on the normalized step size; (d) repeating the steps (b) and (c) a required number of times.
Advantageously, determining of the auto-correlation matrix is performed recursively. The normalized step size may be chosen to be equal to any value from 0 to 1, depending on the application. In the majority of applications, it is often set to be close to unity or equal to unity. Conveniently, the normalized step size is within a range from about 0.9 to 1.0. Another convenient possibility is to set the normalized step size within a range from about 0.7 to 1.0. For the normalized step size close to unity, the step of solving at least one system of linear equations comprises solving one system of linear equations only. Alternatively, in some applications, e.g., when one needs to keep misadjustment low after convergence, it is required to set the normalized step size substantially less than unity, e.g. less than about 0.7. In this situation the step of solving at least one system of linear equations comprises solving N systems of linear equations, with N being a projection order.

In the embodiments of the invention, a problem of finding the inverse of an auto-correlation matrix, which is inherent in other known methods, is reduced to a problem of solving a system of linear equations based on the auto-correlation matrix. The system is solved by one of the descending iterative methods, which provide inherent stability of operation due to an intrinsic feedback adjustment. As a result, inevitable numerical errors are not accumulated. In the first and second embodiments of the invention, steepest descent and conjugate gradient methods are used respectively to determine the first column of the inverse auto-correlation matrix, taking into account that the normalized step size is close to unity. In a third embodiment of the invention, a steepest descent or conjugate gradient method is used to determine the coefficients of the inverse auto-correlation matrix by recursively solving N systems of linear equations having decrementing orders. It corresponds to the case of the normalized step size being not close to unity. The fourth embodiment of the invention avoids determining the inverse of the auto-correlation matrix. Instead, a system of linear equations is solved by using a conjugate gradient method, resulting in a solution that can be used directly to determine an updating part of the filter coefficients. Alternatively, other known descending methods, e.g. steepest descent, Newton's method, PARTAN, quasi-Newton's method or other known iterative descending methods, may also be used. Conveniently, the steps of the method may be performed by operating with real-valued or complex-valued numbers.
The method described above is suitable for a variety of applications, e.g. echo cancellation, noise cancellation, channel equalization, system identification which are widely used in products such as network echo cancellers, acoustic echo cancellers for full-duplex handsfree telephones and audio conference systems, active noise control systems, data communication systems. According to another aspect of the invention there is provided an adaptive filter, comprising: a filter characterized by adaptive filter coefficients; means for updating the filter coefficients, including means for setting a normalized step size, the updating means comprising: a correlator for determining auto-correlation matrix coefficients from a reference input signal, and a calculator for solving at least one system of linear equations whose coefficients are the auto-correlation matrix coefficients, the system being solved by using a descending iterative method having an inherent stability of its operation, the results of the solution being used for updating the filter coefficients and the number of systems of linear equations to be solved being dependent on the normalized step size.
Advantageously, the calculator is an iterative calculator. Preferably, the calculator is a steepest descent or a conjugate gradient calculator. Alternatively, it may be a calculator performing a Newton's or quasi-Newton's method, a PARTAN calculator, or another known iterative descending calculator providing an inherent stability of operation.
Conveniently, the filter and the updating means are capable of operating with real numbers. Alternatively, they may be capable of operating with complex numbers.
The normalized step size may be chosen to be equal to any value from 0 to 1, depending on the application. In the majority of applications, the adaptive filter is often set with the normalized step size close to unity or equal to unity. Conveniently, the normalized step size is within a range from about 0.9 to 1.0. Another convenient possibility is to set the normalized step size within a range from about 0.7 to 1.0. For the normalized step size close to unity, the calculator provides an iterative solution of one system of linear equations only at each time interval. Alternatively, in some applications, e.g., when one needs to keep misadjustment low after convergence, it is required to set the normalized step size substantially less than unity, e.g. less than about 0.7. In this situation the calculator provides solutions of N systems of linear equations, with N being a projection order. Conveniently, due to the symmetry of the auto-correlation matrix, determining of the inverse auto-correlation matrix may be performed by solving N systems of linear equations having decrementing orders.
The adaptive filter as described above may be used for echo cancellation, noise cancellation, channel equalization, system identification or other applications where adaptive filtering is required.
The adaptive filter and method described above have an advantage over known FAP adaptive filters by providing stability of operation. The problem caused by error accumulation in the matrix inversion process existing in known FAP filters is solved in the present invention by using iterative descending methods. First, the matrix inversion operation is reduced to a solution of a corresponding system of linear equations based on the auto-correlation matrix. Second, the iterative descending methods used for the solution of the above system provide an inherent stability of operation due to an intrinsic feedback adjustment. As a result, inevitable numerical errors are not accumulated, thus providing stability of adaptive filtering.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be described in greater detail with reference to the attached drawings, in which:
Figure 1 is a block diagram of an adaptive echo cancellation system;
Figure 2 is a block diagram of an adaptive filter according to the first embodiment of the invention;
Figure 3 is a block diagram of a steepest descent calculator embedded in the filter of Fig. 2; Figure 4 is a block diagram of a conjugate gradient calculator embedded in an adaptive filter according to a second embodiment of the invention;
Figure 5 is a block diagram of an adaptive filter according to a third embodiment of the invention; Figure 6 is a flow-chart illustrating an operation of a steepest descent calculator embedded in the adaptive filter of Fig. 5;
Figure 7 is a flow-chart illustrating an operation of a conjugate gradient calculator embedded in the adaptive filter of Fig. 5;
Figure 8 is a block diagram of an adaptive filter according to a fourth embodiment of the invention; and
Figure 9 is a block diagram of a conjugate gradient calculator embedded in the adaptive filter of Fig. 8.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
A. CONVENTIONS IN LINEAR ALGEBRA REPRESENTATION
In this document, underscored letters, such as d(n) and X(n), stand for column vectors, and bold-faced ones, like X(n), are matrices. d(n) with a bar above it stands for an N−1 vector consisting of the N−1 uppermost elements of the N-vector d(n), and d(n) with a bar below it stands for an N−1 vector consisting of the N−1 lowermost elements of the N-vector d(n). A superscript "T" stands for the transposition of a matrix or vector.
B. INTRODUCTION Figure 1 presents a block diagram of an adaptive echo cancellation system 10 with an embedded adaptive filter 100, the echo cancellation being chosen as an exemplary representation of a wide class of adaptive filtering applications. A digitally sampled far-end reference input signal x(n) is supplied to the adaptive filter 100 and to an echo path 14 producing an unwanted signal u(n), the signal being an echo of x(n) through the echo path 14. The echo path 14 can be either a long electrical path, e.g. in a telecommunication network, or an acoustical path, e.g. in a room. An echo canceller may be used together with a telecommunication network switch or a speakerphone. The unwanted signal u(n) is mixed with the wanted near-end signal s(n) in a summer 16 to produce a response signal d(n). The response signal d(n) is sent to another summer 18 together with an echo estimate signal y(n) generated by the adaptive filter 100. The summer 18 subtracts y(n) from d(n), producing an output signal e(n) to be transmitted to the far-end.
Since the echo path is constantly changing, the adaptive filter must be able to continuously adapt to the new echo path. Therefore the goal is to produce the echo estimate signal y(n) as close to u(n) as possible, so that the latter is largely cancelled by the former, and e(n) best resembles s(n). The output signal e(n), called the error signal, is then transmitted to the far-end and also sent to the adaptive filter 100, which uses it to adjust its coefficients.
Note that, depending on a particular application, the terms "far-end" and "near-end" may need to be interchanged. For example, with a network echo canceller in a telephone terminal, x(n) in Figure 1 is actually the near-end signal to be transmitted to the far-end, and d(n) in Figure 1 is the signal received from the telephone loop connected to the far-end. Although the terminology used above assumes that x(n) is the far-end signal and d(n) is the signal perceived at the near-end, this is done solely for convenience and does not prevent the invention from being applied to other adaptive filtering applications with alternate terminology.
1. The normalized least mean square (NLMS) filter
The following L-dimensional column vectors are defined as the reference input vector and the adaptive filter coefficient vector respectively, where L is the length of the adaptive filter:
X(n) ≡ [x(n) x(n-1) ... x(n-L+1)]^T
W(n) ≡ [w_0(n) w_1(n) ... w_{L-1}(n)]^T
(Equation 1)

The part for convolution and subtraction, which derives the output of the adaptive echo cancellation system, can then be expressed as

e(n) = d(n) - y(n) = d(n) - Σ_{l=0}^{L-1} w_l(n) x(n-l) = d(n) - X^T(n)W(n)
(Equation 2)

where the superscript "T" stands for transpose of a vector or matrix. The adaptation part of the method, which updates the coefficient vector based on the knowledge of the system behavior, is

W(n+1) = W(n) + 2μ(n)e(n)X(n)
μ(n) = α / (X^T(n)X(n) + δ)
(Equation 3)

In Equation (3), μ(n) is called the adaptation step size, which controls the rate of change to the coefficients, α is a normalized step size, and δ, being a small positive number, prevents μ(n) from going too big when there is no or little reference signal x(n).
The computations required in the NLMS filter include 2L+2 multiply-and-accumulate (MAC) operations and 1 division per sampling interval. Details about the least mean square (LMS) method can be found, e.g., in the classical papers by B. Widrow et al., "Adaptive Noise Cancelling: Principles and Applications," Proceedings of the IEEE, Vol. 63, pp. 1692-1716, Dec. 1975, and B. Widrow et al., "Stationary and Nonstationary Learning Characteristics of the LMS Adaptive Filter," Proceedings of the IEEE, Vol. 64, pp. 1151-1162, Aug. 1976.
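For illustration only, the per-sample NLMS recursion of Equations (2) and (3) may be sketched in C as follows. This is a minimal sketch, not part of the claimed method; the function name nlms_update, the layout of x_buf, and the parameter names are assumptions made for the example:

    /* One NLMS sampling interval: convolution, error, coefficient update.
     * x_buf[0..L-1] holds x(n), x(n-1), ..., x(n-L+1); w holds W(n).
     * Returns the error e(n), Equation (2). */
    double nlms_update(double *w, const double *x_buf, double d,
                       int L, double alpha, double delta)
    {
        double y = 0.0, power = 0.0;
        for (int l = 0; l < L; l++) {
            y += w[l] * x_buf[l];            /* y(n) = W^T(n) X(n)    */
            power += x_buf[l] * x_buf[l];    /* X^T(n) X(n)           */
        }
        double e = d - y;                    /* Equation (2)          */
        double mu = alpha / (power + delta); /* step size, Eq. (3)    */
        for (int l = 0; l < L; l++)
            w[l] += 2.0 * mu * e * x_buf[l]; /* coefficient update    */
        return e;
    }

In practice the signal power X^T(n)X(n) would itself be maintained recursively, but the direct loop keeps the sketch self-contained.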
2. The Affine Projection (AP) filter
The affine projection method is a generalization of the NLMS method. With N being a so-called projection order, we define
d(n) ≡ [d(n) d(n-1) ... d(n-N+1)]^T
e(n) ≡ [e(n) e(n-1) ... e(n-N+1)]^T

X(n) ≡ [ x(n)      x(n-1)   ...  x(n-N+1)
         x(n-1)    x(n-2)   ...  x(n-N)
         ...
         x(n-L+1)  x(n-L)   ...  x(n-N-L+2) ]
(Equation 4) where d(n) and e(n) are N vectors and X(n) is an L×N matrix. Usually N is much less than L, so that X(n) has a "portrait" rather than a "landscape" shape. Note that e(n) in Equation (4) is the a priori error vector; all its elements, including e(n-1), ..., e(n-N+1), depend on W(n), as indicated in Equation (5) below.
The convolution and subtraction part of the method is
e(n) = d(n) - X^T(n)W(n)
(Equation 5) where W(n) is defined in Equation (1). The updating part of the method includes the following steps:
W(n+1) = W(n) + αX(n)ε(n)
R(n)ε(n) = e(n)   or   ε(n) = P(n)e(n)
P(n) = R^{-1}(n)
R(n) = X^T(n)X(n) + δI
(Equation 6) where I is the N×N identity matrix, and α and δ play similar roles as described with regard to Equation (3). α is the normalized step size which may have a value from 0 to 1, and very often is assigned a unity value. δ is a regularization factor that prevents R(n), the auto-correlation matrix, from becoming ill-conditioned or rank-deficient, in which case P(n) would have excessively large eigenvalues, causing instability of the method. It can be seen that an N×N matrix inversion operation is needed at each sampling interval in the AP method.
The AP method offers a good convergence property, but is computationally very expensive. It needs 2LN+O(N²) MACs at each sampling interval. For example, for N equal to 5, which is a reasonable choice for many practical applications, the AP is more than 5 times as complex as the NLMS.
3. The Fast Affine Projection (FAP) filter
Since the AP method is impractically expensive computationally, certain simplifications have been made to arrive at the so-called FAP method; see, e.g., US patent 5,428,562 to Gay. Note that here the "F", for "fast", means that the method saves computations, not that it converges faster. In fact, by adopting these simplifications, the performance indices, including the convergence speed, slightly degrade.
Briefly, the FAP method consists of two parts:
(a) An approximation, shown in Equation (7) below, and certain simplifications to reduce the computational load. The approximation in Equation (7) uses the scaled posteriori errors to replace the a priori ones in Equation (4):

e(n) ≈ [e(n) (1-α)ē^T(n-1)]^T
(Equation 7)

where ē(n-1) is an N-1 vector consisting of the N-1 uppermost elements of e(n-1).
(b) The matrix inversion operation.
The matrix inversion may be performed by using different approaches. One of them is the so-called "sliding windowed fast recursive least squares (FRLS)" approach, outlined in US patent 5,428,562 to Gay, to recursively calculate P(n) in Eq. (6). This results in a total computational requirement of 2L+14N MACs and 5 divisions. In another approach, the matrix inversion lemma is used twice to derive P(n) at sampling interval n; see, e.g., Q. G. Liu, B. Champagne, and K. C. Ho (Bell-Northern Research and INRS-Telecommunications, Universite du Quebec), "On the Use of a Modified Fast Affine Projection Algorithm in Subbands for Acoustic Echo Cancellation," pp. 354-357, Proceedings of 1996 IEEE Digital Signal Processing Workshop, Loen, Norway, September 1996. It assumes an accurate estimate P(n-1) to start with, then derives P(n) by modifying P(n-1) based on P(n-1) and knowledge of the new data X(n). The total computations needed for such a FAP system are 2L+3N²+12N MACs and 2 divisions. Compared with the "sliding windowed" approach, this method offers a more accurate estimation of P(n) because a conventional recursive least squares (RLS) algorithm is used, instead of a fast version of it, which has inevitable degradations.
Note that solving the matrix inversion problem directly by using classical methods always arrives at the most accurate and stable solution. However, these methods are too expensive computationally to implement on a real-time platform. Therefore, various alternative approaches with much less complexity, such as the ones described above, are used. The above matrix inversion methods have no feedback adjustment. An accurate estimate of P(n) relies heavily on an accurate starting point P(n-1). If P(n-1) deviates from the accurate solution, the algorithm has no way of knowing that, and will still keep updating it based on it and the new X(n). This means that errors in P(n-1), if any, will very likely accumulate and be passed on to P(n), P(n+1), P(n+2), and so on, and therefore stay in the system forever. When P(n) deviates from the accurate value, so will the calculated ε(n), as shown in Equation (6). As a result, the first expression in Equation (6) shows that the coefficient vector W(n) will no longer be updated properly. That is, W(n) can be updated in wrong directions, causing the adaptive filtering system to fail. A proposed remedy is to periodically re-start a new inversion process, either sliding windowed FRLS or conventional RLS based, in parallel with the old one, and to replace the old one so as to get rid of the accumulated numerical errors in the latter. While this can be a feasible solution for high-precision DSPs such as floating-point processors, it is still not suitable for fixed-point DSP implementations, because there the finite precision numerical errors would accumulate so fast that the re-starting period would have to be made impractically short.
4. Stable Fast Affine Projection Filter with a normalized step size close or equal to unity

Usually, for maximum convergence speed, the normalized step size α, as indicated in Equation (6), is set to a value of unity, or less than but quite close to it. This is the case described in the publications and the US patent 5,428,562 cited above. In this case e(n) will have only one significant element, e(n), as the very first one. Thus, the calculation for ε(n) (Eq. (6)) reduces from the product between a matrix and a vector to that between a vector and a scalar, i.e.

ε(n) = e(n)P(n)
(Equation 8) where P(n) here denotes the very first, i.e., leftmost, column of the matrix P(n). Typically, α is greater than 0.9 and less than or equal to 1.0. It is also indicated in the publication by Q. G. Liu cited above that, even with an α slightly less than that range, say about 0.7, the approximation is still acceptable. Thus, one only needs to calculate N, rather than all the N², elements of P(n).
In light of the above, the problem of finding P(n), the inverse of the auto-correlation matrix
R(n) ≡ X^T(n)X(n) + δI
(Equation 9)

reduces to solving a set of N linear equations

R(n)P(n) = b
(Equation 10)

where R(n) is symmetric and positive definite according to its definition in Equation (9), and b is an N vector with all its elements zero except the very first, which is unity.
Although Eq. (10) is much simpler to solve than the original matrix inversion problem, it is still quite expensive, and especially division-intensive, to do that with classical methods like Gaussian elimination. Therefore the obtained system of linear equations is solved by one of the iterative descending methods, which provide an inherent stability of operation and avoid accumulation of numerical errors, as will be described in detail below.

5. Stable Fast Affine Projection Filter with general step size
As mentioned above, the concept described in section 4 is only suitable for applications where a relatively large α (equal to unity or less than but very close to unity) is needed. Although a large α is needed in most applications, the method of adaptive filtering would not be regarded as complete without addressing cases with smaller normalized step sizes. For example, one way of reducing the misadjustment (steady state output error) after the FAP system has converged is to use a small α. According to Equation (6), the updating part of the filter coefficients may be determined either by directly solving for ε(n) (second line of Eq. (6), first formula), or by determining an inverse auto-correlation matrix (second line of Eq. (6), second formula) with further calculation of ε(n). Each of the above approaches requires solving N systems of linear equations based on the auto-correlation matrix. According to the present invention, the beneficial way to do that is to use descending iterative methods providing stability of operation, as will be described below.

C. PREFERRED EMBODIMENTS OF THE INVENTION
A method of adaptive filtering implemented in an adaptive filter 100 according to the first embodiment of the invention includes an iterative "steepest descent" technique to iteratively solve Equation (10).
In general, steepest descent is a technique that seeks the minimum point of a certain quadratic function iteratively. At each iteration (the same as a sampling interval in our application), it takes three steps consecutively:

1. to find the direction in which the parameter vector should go. This is just the negative gradient of the quadratic function at the current point;

2. to find the optimum step size for the parameter vector updating, so that it will land at the minimum point along the direction dictated by the above step; and

3. to update the parameter vector as determined above.
By iteratively doing the above, the steepest descent reaches the unique minimum of the quadratic function, where the gradient is zero, and continuously tracks the minimum if it moves. Details about the steepest descent method can be found, for example, in a book by David G. Luenberger (Stanford University), Linear and Nonlinear Programming, Addison-Wesley Publishing Company, 1984.
For an adaptive filtering application, the implied quadratic function is as follows:

f(P(n)) = (1/2)P^T(n)R(n)P(n) - P^T(n)b
(Equation 11)

whose gradient with respect to P(n) can be easily found as

g = R(n)P(n) - b
(Equation 12)

where b is defined in Equation (10). Note that R(n) must be symmetric and positive definite in order for the steepest descent technique to be applicable; this happens to be our case. Seeking the minimum, where the gradient vanishes, is equivalent to solving Equation (10). The steepest descent is also able to track the minimum point if it moves, such as the case with a non-stationary input signal X(n). Based on the above discussion, the stable FAP (SFAP) method which uses the steepest descent technique includes the following steps:

Initialization:
W(0) = 0 , X(0) = 0 , η(0) = 0 , R(0) = δI , α = 1 , P(0) = (1/δ)b
(Equation 13)

Updating the adaptive filter coefficients in sampling interval n includes: recursive determining of an auto-correlation matrix:

R(n) = R(n-1) + ξ(n)ξ^T(n) - ξ(n-L)ξ^T(n-L)
(Equation 14)

where ξ(n) is defined in Equation (23) below, and determining projection coefficients by solving the system of linear Equations (10) using the steepest descent technique, the projection coefficients being the coefficients of the inverse of the auto-correlation matrix:
(Equation 15) g (n)g(n) β(n) = gf(n)R(n)g(n)
(Equation 16)
P(n) = P(n-l)-β(n)g(n)
(Equation 17) and performing an adaptive filtering for updating the filter coefficients W(n) = (n-l) + αηN_1(n-l)X(n-N)
(Equation 18) y(n) = WT(n)X(n) + α T(n-l)R(n)
(Equation 19] e(n) = d(n)-y(n)
(Equation 20) ε(n) = e(n)P(n)
(Equation 21)
(Equation 22) where (n)is
(Equation 23) R(n) is the first column of R(n), R(n) is an N-1 vector that consists of the N-1 lower most elements of the N vector R(n), and η(n) is an N-1 vector that consists of the N-1 upper most elements of the N vector η(n) .
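Since Equation (14) only adds one outer product and subtracts another, the auto-correlation matrix can be maintained with about 2N² MACs per sampling interval. The following C fragment is an illustrative sketch under assumed names (update_autocorr, with xi_new holding ξ(n) and xi_old holding ξ(n-L)):

    /* Recursive auto-correlation update, Equation (14):
     * R(n) = R(n-1) + xi(n) xi^T(n) - xi(n-L) xi^T(n-L).
     * R is an N x N matrix stored row-major. */
    void update_autocorr(double *R, const double *xi_new,
                         const double *xi_old, int N)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                R[i * N + j] += xi_new[i] * xi_new[j]
                              - xi_old[i] * xi_old[j];
    }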
It is important to note that the feedback adjustment provided by Equations (15), (16) and (17) does not exist in known prior art approaches. The prior art FAP approaches determine P(n) based on P(n-1) and the new incoming data X(n) only, without examining how well P(n-1) actually approximates R^{-1}(n). Therefore inevitable numerical errors will accumulate and eventually make the system collapse. The feedback provided by a stable descending method, used in our invention, uses Equation (15) to examine how well P(n-1), or the needed part of it, approximates R^{-1}(n), or its corresponding part. Then the adjustments are performed in Equations (16) and (17) accordingly to derive P(n), or the needed part of it. As just mentioned, this examination is done by evaluating g(n) in Equation (15) as the feedback error.
The three expressions shown in Equations (15), (16) and (17) correspond to the three steps of the steepest descent technique discussed above: g(n) is the gradient of the implied quadratic function (Equation (15)), and β(n) is the optimum step size (Equation (16)) for the parameter vector adjustment, which is made in Equation (17). As follows from Table 1, the total computational requirement of the stable FAP method according to the first embodiment of the invention is 2L+2N²+7N-1 MACs and 1 division. Note that, for the steepest descent technique to work adequately for the purpose of adaptive filtering, the projection order N has to be chosen to ensure that the steepest descent converges faster than the adaptive filter coefficients do. The required pre-determined value of N will depend on a particular adaptive filtering application.
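As an illustrative sketch only, one steepest descent iteration of Equations (15)-(17) might be coded in C as below; the names sfap_sd_iterate and mat_vec, the fixed bound N <= 16, and the guard against a non-positive denominator are assumptions of the example, not features taken from the description:

    /* Multiply the N x N row-major matrix R by the N vector v. */
    static void mat_vec(const double *R, const double *v,
                        double *out, int N)
    {
        for (int i = 0; i < N; i++) {
            out[i] = 0.0;
            for (int k = 0; k < N; k++)
                out[i] += R[i * N + k] * v[k];
        }
    }

    /* One steepest descent iteration, Equations (15)-(17).
     * P holds P(n-1) on entry and P(n) on exit; b = [1 0 ... 0]^T. */
    void sfap_sd_iterate(const double *R, double *P, int N)
    {
        double g[16], Rg[16];          /* assumes N <= 16            */
        mat_vec(R, P, g, N);           /* R(n) P(n-1)                */
        g[0] -= 1.0;                   /* g(n) = R(n) P(n-1) - b     */
        mat_vec(R, g, Rg, N);
        double num = 0.0, den = 0.0;
        for (int i = 0; i < N; i++) {
            num += g[i] * g[i];        /* g^T(n) g(n)                */
            den += g[i] * Rg[i];       /* g^T(n) R(n) g(n)           */
        }
        if (den > 0.0) {               /* skip a degenerate step     */
            double beta = num / den;   /* Equation (16)              */
            for (int i = 0; i < N; i++)
                P[i] -= beta * g[i];   /* Equation (17)              */
        }
    }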
An adaptive filter 100 according to the first embodiment of the invention and operating in accordance with the method described above is shown in Figure 2. It includes a filter 102 characterized by adaptive filter coefficients W(n), and means 104 for updating the coefficients, the means being set with a normalized step size α close to its maximal value, i.e. unity. The filter 102 is a finite impulse response (FIR) filter which receives a reference input signal x(n) and an auxiliary signal f(n) (see Equation (33) below), used for updating the coefficients, and generates a provisional echo estimate signal PR(n) (see Equation (34) below). The updating means 104 includes a correlator 106 for recursively determining an auto-correlation signal presented in the form of auto-correlation matrix coefficients R(n) based on the reference input signal x(n), and a calculator 108 for generating projection coefficients P(n), the projection coefficients being part of the coefficients of the inverse of the auto-correlation matrix. The calculator 108 determines the projection coefficients by using an iterative steepest descent method having an inherent stability of operation, as illustrated in detail above. The projection coefficients are used within the updating means 104 for generating the auxiliary filter adaptation signal f(n) and an echo estimate correction signal EC(n) (see Equation (34) below). The latter is used together with the provisional echo estimate PR(n) to produce the echo estimate signal y(n). A convention in Fig. 2 is the use of a thick line to represent the propagation of a matrix or vector signal, i.e., with more than one component, and the use of a thin line to stand for a scalar signal propagation. In Fig. 2 the correlator 106 determines the autocorrelation matrix R(n) in accordance with Eq. (14) using the current and past x(n) samples. An "η(n) calculator" 110 calculates η(n) based on Eq. (22); as shown in Fig. 2, η(n) is not used by the updating means 104 until the next sampling interval. The filter 102 produces the convolutional sum W^T(n)X(n). η_{N-1}(n-1) is obtained from η_{N-1}(n) by putting the latter through a unit delay element 111, providing a delay of one sampling interval, and is further multiplied by the step size α in a Multiplier 113. The result is used for updating the adaptive filter coefficients (Eq. 18). η̄^T(n-1) is dot-multiplied with part of R̄(n) by a Dot multiplier 112, and the result is further multiplied in a multiplier 114 with the step size α to form the correction term to be added to W^T(n)X(n) by the summer 116 to form the filter output y(n) (Equation (19)). The summer 18 calculates the error, or the output, e(n), as in Equation (20). The scalar-vector multiplier 118 derives ε(n) in accordance with Equation (21).
A steepest descent calculator 108 is shown in detail in Figure 3. Thick lines represent the propagation of a matrix or vector signal, i.e., with more than one component, and thin lines stand for scalar signal propagation. In the calculator 108, the autocorrelation matrix R(n) and the vector P(n-1), which is a part of the estimated inverse of R(n-1), are multiplied in a Matrix-vector multiplier 130. The vector product is further subtracted by a constant vector [1 0 ... 0]^T in a Summer 132 to produce the gradient vector g(n), which contains the feedback error information about using P(n-1) as the estimated inverse of R(n). This part corresponds to Equation (15). The squared norm of g(n) is then found by dot-multiplying g(n) with itself in a Dot multiplier 134. It is used as the numerator in calculating β(n) in Equation (16). A Matrix-vector multiplier 136 finds the vector product between the autocorrelation matrix R(n) and the gradient vector g(n). This vector product is then dot-multiplied with g(n) in another Dot multiplier 138 to produce the denominator in calculating β(n) in Equation (16). This denominator is reciprocated in a Reciprocator 140, and then further scalar-multiplied with the aforementioned numerator in a scalar multiplier 142 to produce β(n). This is the only place where any division operation is performed. Finally, β(n) is multiplied with the gradient g(n) in a scalar-vector multiplier 144 to form the correction term to P(n-1). This correction term is then subtracted from P(n-1) in a Vector Summer 146 to derive P(n) in accordance with Equation (17). P(n-1) is obtained from P(n) by using a unit delay element 148, providing a delay of one sampling interval.
Two C language prototypes implementing the steepest descent technique according to the first embodiment of the invention have been built. The first one is a floating-point module, and the second one is a 16-bit fixed-point DSP implementation. A floating-point module simulating the NLMS acoustic echo canceller design in Venture, a successful full-duplex handsfree telephone terminal product by Nortel Networks Corporation, and a benchmark floating-point module that repeats a prior art FAP scheme by Q. G. Liu, B. Champagne, and K. C. Ho (Bell-Northern Research and INRS-Telecommunications, Universite du Quebec), "On the Use of a Modified Fast Affine Projection Algorithm in Subbands for Acoustic Echo Cancellation," pp. 354-357, Proceedings of 1996 IEEE Digital Signal Processing Workshop, Loen, Norway, September 1996, have also been implemented for comparison purposes. The following data files have been prepared for processing. The source ones are speech files with Harvard sentences (Intermediate Reference System filtered or not) sampled at 8 kHz and a white noise file. Out of the source files, certain echo files have been produced by filtering the source ones with certain measured, 1200-tap room impulse responses. These two sets of files act as x(n) and d(n) respectively. The major simulation results are as follows. The benchmark prior art floating-point FAP scheme with L=1024 and N=5 goes unstable at 2'57" (2 minutes and 57 seconds, real time, with 8 kHz sampling rate) with speech training, but with certain unhealthy signs showing up after only about 25 seconds. These signs are in the form of improper excursions of the elements of the vector P(n), the first column of P(n) (inverse of the matrix R(n)). The fact that it takes over 2 minutes from the first appearance of unhealthy signs to divergence, in which period the excursions of the P(n) elements become worse and worse, shows that the coefficient updating algorithm is quite tolerant of certain errors in P(n). Once simulated random quantization noises, which are uniformly distributed between -0.5 bit and +0.5 bit of a 16-bit implementation, are injected into the matrix inversion lemma calculation, the prior art FAP system diverges in 0.6 second.
For comparison, within the time period of our longest test case (7'40"), the portions that estimate P(n), i.e., Eqs. (15)-(17) of the steepest descent scheme of the invention with the same parameters (L=1024 and N=5), always remain stable. Furthermore, the elements in the vector P(n) progress as expected, without any visible unhealthy signs like improper excursions during the entire 7'40" period. The output e(n) in the steepest descent embodiment converges approximately at the same speed as the benchmark prior art FAP and reaches the same steady state echo cancellation depth as the prior art FAP and NLMS. The SFAP according to the first embodiment of the invention outperforms the NLMS filter; with speech training, it converges in about 1 second while it takes the NLMS filter about 7 to 8 seconds to do so.
Filters of another length, L=512, have also been built for the SFAP, the prior art FAP and the NLMS. As expected, they converge approximately twice as fast as they do for L=1024. Thus, an adaptive filter and method using a steepest descent calculator for determining the inverse matrix coefficients, and providing stability of adaptive filtering, are provided.
A method of adaptive filtering according to a second embodiment of the present invention uses an iterative "conjugate gradient" technique to solve Equation (10), the corresponding calculator being shown in Figure 4.
Conjugate gradient is a technique that also seeks the minimum point of a certain quadratic function iteratively. Conjugate gradient is closely related to the steepest descent scheme discussed above. It differs from the steepest descent in that it is guaranteed to reach the minimum in no more than N steps, with N being the order of the system. That is, conjugate gradient usually converges faster than the steepest descent. At each iteration (the same as a sampling interval in our application), the conjugate gradient takes five steps consecutively:

1. to find the gradient of the quadratic function at the current point;

2. to find the optimum factor for adjusting the direction vector, along which adjustment to the parameter vector will be made;

3. to update the direction vector as determined above;

4. to find the optimum step size for the parameter vector updating; and

5. to update the parameter vector as determined above.
Unlike the steepest descent algorithm, which simply takes the negative gradient of the quadratic function as the parameter vector updating direction, conjugate gradient modifies the negative gradient to determine an optimized direction. By iteratively doing the above, the scheme reaches the unique minimum of the quadratic function, where the gradient is zero, in no more than N steps. The conjugate gradient technique also continuously tracks the minimum if it moves, such as the case with a non-stationary input signal x(n). Details about the conjugate gradient algorithm can be found, for example, in a book by David G. Luenberger (Stanford University), Linear and Nonlinear Programming, Addison-Wesley Publishing Company, 1984.
For an adaptive filtering application, the implied quadratic function is still the one shown in Equation (11), whose gradient with respect to P(n) is also given by Equation (12). Note that R(n) must be symmetric and positive definite in order for the conjugate gradient technique to apply; this happens to be our case. Seeking the minimum, where the gradient vanishes, is equivalent to solving Equation (10). The conjugate gradient is also able to track the minimum point if it moves, such as the case with a non-stationary input signal X(n).
Based on the above discussion, the SFAP method according to the second embodiment, which uses the conjugate gradient technique, includes the following steps:

Initialization:

W(0) = 0 , X(0) = 0 , η(0) = 0 , R(0) = δI , α = 1 , P(0) = (1/δ)b , s(0) = 0 , rsrs(0) = 0 , b(0) = 0
(Equation 24)
Updating the adaptive filter coefficients in sampling interval n includes: recursive determining of an auto-correlation matrix:

R(n) = R(n-1) + ξ(n)ξ^T(n) - ξ(n-L)ξ^T(n-L)
(Equation 25)

where ξ(n) is defined in Equation (23) above, and determining projection coefficients by solving the system of linear Equations (10) using the conjugate gradient technique, the projection coefficients being the first column coefficients of the inverse of the auto-correlation matrix:

g(n) = R(n)P(n-1) - b
(Equation 26)

γ(n) = rsrs(n-1)g^T(n)b(n-1)
(Equation 27)

s(n) = γ(n)s(n-1) - g(n)
(Equation 28)

b(n) = R(n)s(n)
(Equation 29)

rsrs(n) = 1 / (s^T(n)b(n))
(Equation 30)

β(n) = -rsrs(n)g^T(n)s(n)
(Equation 31)

P(n) = P(n-1) + β(n)s(n)
(Equation 32)

and performing an adaptive filtering for updating the filter coefficients:

W(n) = W(n-1) + αη_{N-1}(n-1)X(n-N) = W(n-1) + f(n)X(n-N)
(Equation 33)

y(n) = W^T(n)X(n) + αη̄^T(n-1)R̲(n) = PR(n) + EC(n)
(Equation 34)

e(n) = d(n) - y(n)
(Equation 35)

ε(n) = e(n)P(n)
(Equation 36)

η(n) = [0 η̄^T(n-1)]^T + ε(n)
(Equation 37)

where R̄(n) is the first column of R(n), R̲(n) is an N-1 vector that consists of the N-1 lowermost elements of the N vector R̄(n), and η̄(n) is an N-1 vector that consists of the N-1 uppermost elements of the N vector η(n).
The five expressions shown in Equations (26), (27), (28), (31) and (32) respectively correspond to the five steps of the conjugate gradient technique discussed earlier in this section. g(n) is the gradient of the implied quadratic function, γ(n) is the optimum factor for updating the direction vector s(n), and β(n) is the optimum step size for the parameter vector adjustment, which is made in Equation (32). As shown in Table 2, the total computational requirement of the stable FAP method according to the second embodiment of the invention is 2L+2N²+9N+1 MACs and 1 division. It should also be ensured that the conjugate gradient converges fast enough so that the adaptive filter coefficients converge.
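Similarly, and again only as a sketch, one conjugate gradient iteration per sampling interval, Equations (26)-(32), may be written as follows, reusing the mat_vec helper assumed earlier; the CgState structure and the zero-denominator guard are assumptions of the example (the state must be initialized to zeros and P(0) per Equation (24) before the first call):

    /* State carried between sampling intervals, Equations (26)-(32). */
    typedef struct {
        double P[16], s[16], b[16];  /* P(n-1), s(n-1), b(n-1); N <= 16 */
        double rsrs;                 /* rsrs(n-1)                       */
    } CgState;

    /* One conjugate gradient iteration; updates st->P to P(n). */
    void sfap_cg_iterate(const double *R, CgState *st, int N)
    {
        double g[16];
        mat_vec(R, st->P, g, N);                /* R(n) P(n-1)        */
        g[0] -= 1.0;                            /* Equation (26)      */
        double gb = 0.0;
        for (int i = 0; i < N; i++)
            gb += g[i] * st->b[i];              /* g^T(n) b(n-1)      */
        double gamma = st->rsrs * gb;           /* Equation (27)      */
        for (int i = 0; i < N; i++)
            st->s[i] = gamma * st->s[i] - g[i]; /* Equation (28)      */
        mat_vec(R, st->s, st->b, N);            /* Equation (29)      */
        double sb = 0.0, gs = 0.0;
        for (int i = 0; i < N; i++) {
            sb += st->s[i] * st->b[i];
            gs += g[i] * st->s[i];
        }
        st->rsrs = (sb > 0.0) ? 1.0 / sb : 0.0; /* Equation (30)      */
        double beta = -st->rsrs * gs;           /* Equation (31)      */
        for (int i = 0; i < N; i++)
            st->P[i] += beta * st->s[i];        /* Equation (32)      */
    }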
An adaptive filter according to the second embodiment of the invention is similar to that of the first embodiment shown in Figure 2, except for the calculator 108 now operating in accordance with the conjugate gradient technique and being designated by numeral 208 in Figure 4.
The conjugate gradient calculator 208 embedded in the adaptive filter of the second embodiment is shown in detail in Figure 4. Thick lines represent the propagation of a matrix or vector signal, i.e., with more than one component, and thin lines stand for scalar signal propagation. In the calculator 208, the autocorrelation matrix R(n) and the vector P(n-1), part of the estimated inverse of R(n-1), are multiplied in a Matrix-vector Multiplier 210. The resulting vector product is subtracted by a constant vector [1 0 ... 0]^T in a Summer 212 to produce the gradient vector g(n), which contains the feedback error information about using P(n-1) as the estimated inverse of R(n). The Matrix-vector Multiplier 210 and the Summer 212 implement Equation (26) above. The gradient g(n) is further dot-multiplied with b(n-1), an auxiliary vector found in the last sampling interval, in a Dot Multiplier 214. The resulting scalar product is multiplied by rsrs(n-1) in a Multiplier 216 to produce γ(n), a factor to be used in adjusting s(n-1), the direction vector for adjusting P(n-1). rsrs(n-1) is obtained from rsrs(n) by putting the latter through a unit delay element 218, providing a delay of one sampling interval. Similarly, b(n-1) is obtained from b(n) by using another unit delay element 220. The part of the diagram described in this paragraph implements Equation (27) shown above. With γ(n), g(n), and s(n-1) available, s(n-1) is then updated into s(n) by using yet another unit delay element 222, with a delay of one sampling interval, a scalar-vector Multiplier 224 and a Vector Summer 226, which implement the operations shown in Equation (28) above. Next, the auxiliary vector b(n), to be used in the next sampling interval, is calculated as the product between R(n) and s(n) in another Matrix-vector Multiplier 230. This implements Equation (29) above. The vector b(n) is then dot-multiplied with s(n) in yet another Dot multiplier 232, and the scalar product is reciprocated in a Reciprocator 234 to produce rsrs(n) (Equation (30)). This is where the only division operation is. By using yet another Dot Multiplier 236 and a Multiplier 238, g(n) and s(n) are dot-multiplied, and the result, being a scalar product, is multiplied with -rsrs(n) to derive β(n), thus implementing Equation (31) above. Once β(n) is available, it is multiplied with s(n) in another scalar-vector Multiplier 240 to form the correction term to P(n-1), which is then added to P(n-1) in a Vector Summer 242 in order to derive P(n) (Equation (32) above).
The rest of the structure of the adaptive filter, employing the conjugate gradient calculator 208, is similar to that shown in Figure 2 and described above.
A C language prototype for a 16-bit fixed-point DSP implementation of the SFAP using the conjugate gradient scheme has been built and studied. It has the same parameters (L=1024 and N=5) and uses the same data files as the steepest descent prototype described above. It behaves very similarly to its floating-point steepest descent counterpart. There is no observable difference in the way the P(n) elements progress, and they also remain stable during the 7'40" longest test case period. The output e(n) in the conjugate gradient embodiment converges approximately at the same speed as the benchmark prior art FAP and reaches the same steady state echo cancellation depth as the benchmark prior art FAP and NLMS. The SFAP according to the second embodiment of the invention also outperforms the NLMS filter in terms of convergence speed. A conjugate gradient filter of another length, L=512, has also been built. As expected, it converges twice as fast as it does for L=1024.
A method of adaptive filtering according to a third embodiment of the present invention provides adaptive filtering when the normalized step size has any value from 0 to 1. It updates the adaptive filter coefficients by iteratively solving a number of systems of linear equations having decrementing orders to determine the inverse auto-correlation matrix, in a manner described below.
Let's prove first that, if P is the inverse of a symmetric matrix R, then it is also symmetric. By definition
RP = I , PR = I
(Equation 38)
Transposing Equation (38) we get

P^T R^T = I^T , R^T P^T = I^T
(Equation 39)

respectively. Since R and I are symmetric, Equation (39) can be written as

P^T R = I , R P^T = I
(Equation 40)

This means that P^T is also the inverse of R. Since the inverse of a matrix is unique, the only possibility is

P^T = P
(Equation 41)
That is, P is symmetric.
Based on the understanding that the inverse of a symmetric matrix is also symmetric, let's consider a sampling interval n where we need to find an N-th order square matrix P(n) so that
R(n)P(n) = I
(Equation 42) Equation (42) can be written in a scalar form
Σ_{k=0}^{N-1} r_{ik}(n) p_{kj}(n) = δ_{ij} , ∀i ∈ [0, N-1]
(Equation 43)

where r_{ik}(n) is the element of R(n) on row i and column k, p_{kj}(n) is the element of P(n) on row k and column j, and δ_{ij} is defined as

δ_{ij} = 1 if i = j, and 0 otherwise
(Equation 44)

We first solve the set of N linear equations defined by j=0 in Equation (43), for {p_{k0}(n), k=0, 1, ..., N-1}, i.e.

Σ_{k=0}^{N-1} r_{ik}(n) p_{k0}(n) = δ_{i0} , ∀i ∈ [0, N-1]
(Equation 45)

Equation (45) coincides with Equation (10) derived earlier and applied to the first and second embodiments of the invention. In vector form,

R(n)P(n) = b
(Equation 46)

The right hand side of Equation (45) or Equation (46) tells that this P(n) is the left-most column of the matrix P(n) and, based on Equation (41), P^T(n) is also the upper-most row of P(n). According to the first and second embodiments of the invention discussed above, this part will cost 2N²+3N MACs and 1 division with steepest descent, or 2N²+5N+2 MACs and 1 division with conjugate gradient.
Having dealt with the j=0 case, we now start solving the set of N linear equations defined by j=1 in Equation (43), for {p_{k1}(n), k=0, 1, ..., N-1}, i.e.

Σ_{k=0}^{N-1} r_{ik}(n) p_{k1}(n) = δ_{i1} , ∀i ∈ [0, N-1]
(Equation 47)

Because P(n) is symmetric, so that p_{01}(n) equals p_{10}(n), Equation (47) can be re-arranged to become

Σ_{k=1}^{N-1} r_{ik}(n) p_{k1}(n) = δ_{i1} - r_{i0}(n) p_{10}(n) , ∀i ∈ [0, N-1]
(Equation 48)

with still N equations but only N-1 instead of N unknowns, i.e., {p_{k1}(n), k=1, 2, ..., N-1}, to solve. In general, these N-1 unknowns can be uniquely determined by only N-1 equations. Thus, the equation in Equation (48) with i=0 can be omitted so that it becomes

Σ_{k=1}^{N-1} r_{ik}(n) p_{k1}(n) = δ_{i1} - r_{i0}(n) p_{10}(n) , ∀i ∈ [1, N-1]
(Equation 49)

Equation (49) has the same format as Equation (45) except that the order is reduced by one. Equation (49) can also be solved by using either of the two approaches presented above, costing 2(N-1)²+4(N-1) MACs and 1 division with steepest descent, or 2(N-1)²+6(N-1)+2 MACs and 1 division with conjugate gradient, where the added (N-1) in each of the two expressions accounts for the extra computations needed to calculate the right hand side of Equation (49). By repeating the above recursion steps, with the order of the problem decrementing by one each step, we can completely solve the lower triangle of P(n). Since P(n) is symmetric, this is equivalent to solving the entire P(n). A formula for this entire process can be derived from Equation (43) and the concept described above, as follows: for j = 0, 1, ..., N-1, solve
Σ_{k=j}^{N-1} r_{ik}(n) p_{kj}(n) = δ_{ij} ,  j = 0
Σ_{k=j}^{N-1} r_{ik}(n) p_{kj}(n) = δ_{ij} - Σ_{k=0}^{j-1} r_{ik}(n) p_{jk}(n) ,  1 ≤ j ≤ N-1
∀i ∈ [j, N-1]

for {p_{kj}(n), ∀k ∈ [j, N-1]}
(Equation 50)

Note that the right hand sides of Equation (50) for all i at each recursion step j do not contain any unknowns, i.e., the {p_{jk}(n)} there have already been found in previous stages. Equation (45) and Equation (49) are just special cases of Equation (50), and {p_{kj}(n), k=j, j+1, ..., N-1} found in recursion step j form a column vector P_j(n), which consists of the lower N-j elements of the j-th (0 ≤ j ≤ N-1) column of P(n). The process of Equation (50) will take N divisions and
[2N² + 3N] + [2(N-1)² + 4(N-1)] + [2(N-2)² + 5(N-2)] + ... + [2(1)² + (N+2)(1)]
= Σ_{k=1}^{N} [2k² + (N+3-k)k]
= Σ_{k=1}^{N} k² + (N+3) Σ_{k=1}^{N} k
= (5/6)N(N+1)(N+2) MACs
(Equation 51)

for the steepest descent method, and N divisions and

[2N² + 5N + 2] + [2(N-1)² + 6(N-1) + 2] + [2(N-2)² + 7(N-2) + 2] + ... + [2(1)² + (N+4)(1) + 2]
= Σ_{k=1}^{N} [2k² + (N+5-k)k + 2]
= Σ_{k=1}^{N} k² + (N+5) Σ_{k=1}^{N} k + 2N
= (N/6)(N+1)(2N+1) + (N/2)(N+1)(N+5) + 2N
= (1/6)N(5N² + 21N + 28) MACs
(Equation 52)

for the conjugate gradient method. Note that in deriving Equations (51) and (52) the following formulae are used:

Σ_{k=1}^{N} k² = (N/6)(N+1)(2N+1) , Σ_{k=1}^{N} k = (N/2)(N+1)
(Equation 53)

which can be easily proven by mathematical induction.
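A schematic C rendering of the recursion of Equation (50) is shown below. It assumes a generic routine solve_spd standing in for the steepest descent or conjugate gradient solvers sketched earlier, applied to an M-th order symmetric positive definite sub-system; all names and the bound N <= 16 are illustrative only:

    /* Assumed iterative solver for an M x M symmetric positive definite
     * system A sol = rhs (e.g. steepest descent or conjugate gradient). */
    void solve_spd(const double *A, const double *rhs, double *sol, int M);

    /* Solve R(n) P(n) = I column by column, Equation (50).
     * R and P are N x N, row-major. */
    void solve_inverse_by_columns(const double *R, double *P, int N)
    {
        double A[16 * 16], rhs[16], sol[16];    /* assumes N <= 16 */
        for (int j = 0; j < N; j++) {
            int M = N - j;                      /* decrementing order */
            /* Sub-system matrix: rows and columns j..N-1 of R(n). */
            for (int i = 0; i < M; i++)
                for (int k = 0; k < M; k++)
                    A[i * M + k] = R[(j + i) * N + (j + k)];
            /* Right hand side: delta_ij minus the already-known terms. */
            for (int i = 0; i < M; i++) {
                rhs[i] = (i == 0) ? 1.0 : 0.0;  /* delta_ij at i = j  */
                for (int k = 0; k < j; k++)
                    rhs[i] -= R[(j + i) * N + k] * P[j * N + k];
            }
            solve_spd(A, rhs, sol, M);
            for (int k = 0; k < M; k++) {
                P[(j + k) * N + j] = sol[k];    /* lower triangle     */
                P[j * N + (j + k)] = sol[k];    /* mirror by symmetry */
            }
        }
    }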
Based on the above derivations, the SFAP method according to the third embodiment of the invention includes the following steps: Initialization:
W(0) = 0 , X(0) = 0 , η(0) = 0 , R(0) = δI , e(0) = 0 , P(0) = (1/δ)I
(Equation 54)

Updating the adaptive filter coefficients in sampling interval n includes the steps shown in Equation (55) below. Please note that the designations used in Equation (55) are as follows: ξ(n) is defined in Equation (23) above, R̄(n) is the first column of R(n), R̲(n) is an N-1 vector that consists of the N-1 lowermost elements of the N vector R̄(n), and η̄(n) is an N-1 vector that consists of the N-1 uppermost elements of the N vector η(n). Please also note that any division operation in the 2nd expression of Equation (55) is not performed if the denominator is not greater than zero, in which case a zero is assigned to the quotient.
(Equation 55)

An adaptive filter 300 according to a third embodiment of the invention, shown in Figure 5, is similar to that of Fig. 2, with like elements being designated by the same reference numerals incremented by 200. The filter 300 also differs from the filter 100 by the following features: the normalized step size may have any value from 0 to 1.0, the calculator 308 now has a more extended structure for consecutively determining columns of the inverse auto-correlation matrix in accordance with the steepest descent technique, and an e(n) calculator 320 is added.
The P(n) calculator 308, now being a matrix calculator, operates in accordance with the flow-chart 400 shown in Figure 6. Upon start-up for the sampling interval n (block 401), the routine 402 sets an initial value to index j (block 404), which is submitted together with the auto-correlation matrix R(n) (block 406) to a projection coefficient column calculator (block 408). The calculator provides a steepest descent iteration in accordance with Equation (50) for the current value of index j, thus updating the corresponding column of projection coefficients from the previous sampling interval (block 408). The updated column of the projection coefficients is sent to a storage means (routine 410, block 412) to be stored until the other columns of P(n) are calculated. Until the index j is equal to N-1 (block 416), its value is incremented by 1, i.e. made equal to j+1 (block 418), and the steepest descent iteration is repeated (block 408) to determine the next column of P(n). By performing N corresponding steepest descent iterations for j = 0, 1, ..., N-1, all columns of the inverse auto-correlation matrix are thus determined and assembled into P(n) in an assembling means (block 414). A command/signal (block 420) then notifies about the end of the sampling interval n and the beginning of the next sampling interval n+1, where the steps of the routine 400 are repeated. In Figure 6, thick lines represent the propagation of a matrix or vector signal, i.e., with more than one component, and thin lines stand for control propagation. In a modification to this embodiment, the steepest descent calculator 308 may be replaced with the conjugate gradient calculator. The corresponding structure is illustrated by a flow-chart 500 shown in Figure 7, where the blocks similar to those of Figure 6 are designated by the same reference numerals incremented by 100. It operates in a manner described above with regard to Figure 6.
A method of adaptive filtering according to a fourth embodiment of the present invention also provides adaptive filtering when the normalized step size has any value from 0 to 1. It updates the adaptive filter coefficients by iteratively solving a number of systems of linear equations, which avoids the explicit matrix inversion performed in the third embodiment of the invention. The details are described below. The second equation from the set of Equations (6) is reproduced for convenience as Equation (56) below:

R(n)ε(n) = e(n)
(Equation 56)

It is possible to obtain ε(n), required for updating the adaptive filter coefficients, directly from the set of linear Equations (56), which is solved again by one of the descending iterative methods.
As a way of example, we will use the conjugate gradient method and perform N conjugate gradient iterations, so that an exact solution, not an iterated one, is reached. This is ensured by the fact that the conjugate gradient method is guaranteed to reach the solution in no more than N iterations, with N being the order of the problem (see Equation (55)). It is convenient to start with ε(n)=0 before the iterations begin at each sampling interval n, to save some computation time.
Accordingly, the SFAP method of the fourth embodiment of the invention includes the following steps:

Initialization:

ε_t(0) = 0 , s(0) = 0 , rsrs(0) = 0 , b(0) = 0

In sampling interval n, repeat the following equations N times, i.e., for k = 0, 1, ..., N-1:

g = R(n)ε_t(k) - e(n)
γ = rsrs(k) g^T b(k)
s(k+1) = γ s(k) - g
b(k+1) = R(n) s(k+1)
rsrs(k+1) = 1 / (s^T(k+1) b(k+1))
β = -rsrs(k+1) g^T s(k+1)
ε_t(k+1) = ε_t(k) + β s(k+1)

Output: ε(n) = ε_t(N)
(Equation 57)

The total computational requirement of Equation (57) is 2N³ + 4N² - 1 MACs and N divisions.
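As a final illustrative sketch, the N-iteration loop of Equation (57) might be coded as follows, again reusing the assumed mat_vec helper; sfap_solve_eps is an assumed name, and the guarded division follows the zero-denominator rule stated above for Equation (55):

    /* Solve R(n) eps(n) = e(n) with N conjugate gradient iterations,
     * Equation (57); eps and e are N vectors, R is N x N row-major. */
    void sfap_solve_eps(const double *R, const double *e,
                        double *eps, int N)
    {
        double g[16], s[16], b[16];             /* assumes N <= 16 */
        double rsrs = 0.0;
        for (int i = 0; i < N; i++) {
            eps[i] = 0.0;                       /* eps_t(0) = 0    */
            s[i] = 0.0;
            b[i] = 0.0;
        }
        for (int k = 0; k < N; k++) {
            mat_vec(R, eps, g, N);              /* R(n) eps_t(k)   */
            for (int i = 0; i < N; i++)
                g[i] -= e[i];                   /* g = R eps_t - e */
            double gb = 0.0;
            for (int i = 0; i < N; i++)
                gb += g[i] * b[i];
            double gamma = rsrs * gb;
            for (int i = 0; i < N; i++)
                s[i] = gamma * s[i] - g[i];     /* new direction   */
            mat_vec(R, s, b, N);                /* b = R s         */
            double sb = 0.0, gs = 0.0;
            for (int i = 0; i < N; i++) {
                sb += s[i] * b[i];
                gs += g[i] * s[i];
            }
            rsrs = (sb > 0.0) ? 1.0 / sb : 0.0; /* guarded         */
            double beta = -rsrs * gs;           /* step size       */
            for (int i = 0; i < N; i++)
                eps[i] += beta * s[i];          /* update eps_t    */
        }
    }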
The steps of the adaptive filtering method according to the fourth embodiment are presented in more detail below:

Initialization:
W(0) = 0 , X(0) = 0 , η(0) = 0 , R(0) = δI , e(0) = 0 , P(0) = (1/δ)b
(Equation 58)
Processing in sampling interval n:
R(n) = R(n-1) + ξ(n)ξ^T(n) - ξ(n-L)ξ^T(n-L)
W(n) = W(n-1) + αη_{N-1}(n-1)X(n-N)
y(n) = W^T(n)X(n) + αη̄^T(n-1)R̲(n)
e(n) = d(n) - y(n)
e(n) = [e(n) (1-α)ē^T(n-1)]^T
R(n)ε(n) = e(n) , solved by the N conjugate gradient iterations of Equation (57)
η(n) = [0 η̄^T(n-1)]^T + ε(n)
(Equation 59)

where the designations are similar to those presented with regard to the first, second and third embodiments described above. Note that any division operation in Equation (57) is not performed if the denominator is not greater than zero, in which case a zero is assigned to the quotient.

An adaptive filter 600 according to a fourth embodiment of the invention is shown in detail in Figure 8. It includes a filter 602 characterized by adaptive filter coefficients W(n), and means 604 for updating the coefficients, the means being set with a normalized step size α having any value in a range from 0 to 1.0. The filter 602 is a finite impulse response (FIR) filter which receives a reference input signal x(n) and an auxiliary signal f(n) used for updating the coefficients, and generates a provisional echo estimate signal PR(n). The updating means 604 includes a correlator 606 for recursively determining an auto-correlation signal presented in the form of auto-correlation matrix coefficients R(n) based on the reference input signal x(n), an ε(n) calculator 608 and an e(n) calculator 620 for corresponding calculation of the vectors ε(n) and e(n). The calculator 608 determines ε(n) by using an iterative conjugate gradient method having an inherent stability of operation, as illustrated in detail above. The calculated ε(n) coefficients are used within the updating means 604 for generating the auxiliary filter adaptation signal f(n) and an echo estimate correction signal EC(n). The latter is used together with the provisional echo estimate PR(n) to produce the echo estimate signal y(n). In Fig. 8, thick lines represent propagation of a matrix or vector signal, i.e., a signal with more than one component, and thin lines stand for scalar signal propagation. In Fig. 8 the correlator 606 determines the autocorrelation matrix R(n) in accordance with the first formula of Eq. (59) using the current and past x(n) samples. An "η(n) calculator" 610 calculates η(n) based on the last formula of Eq. (59); as shown in Fig. 8, η(n) is not used by the updating means 604 until the next sampling interval. The filter 602 produces the convolutional sum W^T(n)X(n). η_{N-1}(n-1) is obtained from η_{N-1}(n) by putting the latter through a unit delay element 611, providing a delay of one sampling interval, and is further multiplied by the step size α in a Multiplier 613. The result is used for updating the adaptive filter coefficients (Eq. (59), second formula). η̄^T(n-1) is dot-multiplied with part of R̄(n) by a Dot multiplier 612, and the result is further multiplied in a multiplier 614 with the step size α to form the correction term to be added to W^T(n)X(n) by the summer 616 to form the filter output y(n) (Eq. (59), third formula). Signals y(n) and d(n) are further sent to the e(n) calculator 620 to determine the error e(n) and the vector e(n) in accordance with the fourth and fifth formulae of Equation (59), and the results are sent to the ε(n) calculator 608 together with the auto-correlation matrix R(n) derived in the correlator 606. The ε(n) calculator 608 solves the sixth equation of Eq. (59) for ε(n) by the conjugate gradient method, thus providing sufficient data for updating the adaptive filter coefficients (Eq. (6), first formula). The ε(n) calculator 608, shown in detail in Figure 9, includes a one-step calculator 708a similar to the calculator 208 of Fig. 4, with like elements referred to by the same reference numerals incremented by 500 respectively (except for P(n-1) and P(n) being replaced with ε(n-1) and ε(n) respectively). Thick lines represent the propagation of a matrix or vector signal, i.e., with more than one component, and thin lines stand for scalar signal propagation. At each sampling interval n, the calculator 708a performs N steps corresponding to k = 0, 1, ..., N-1, each step being similar to the conjugate gradient iteration performed by the calculator 208 of the second embodiment of the invention. The calculator 608 additionally includes an output switch 754 which automatically opens at the beginning of the sampling interval and closes at the end of the N conjugate gradient iterations. Modifications described with regard to the first two embodiments are equally applicable to the third and fourth embodiments of the invention.
Two "C" prototypes according to the third and fourth embodiments of the invention have been implemented in a floating point PC platform. They have demonstrated results completely consistent with the results of the first and second embodiments of the invention.
Thus, an adaptive filter and a method providing stability of adaptive filtering based on feedback adjustment are provided.
Although the methods operate with real-valued numbers, this does not prevent the invention from being extended to cases where the introduction of complex numbers is necessary. Although the embodiments are illustrated within the context of echo cancellation, the results are also applicable to other adaptive filtering applications.
Thus, it will be appreciated that, while specific embodiments of the invention are described in detail above, numerous variations, modifications and combinations of these embodiments fall within the scope of the invention as defined in the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method of adaptive filtering, comprising the steps of:
(a) determining adaptive filter coefficients; (b) defining a normalized step size;
(c) updating the filter coefficients, comprising: determining auto-correlation matrix coefficients from a reference input signal, and solving at least one system of linear equations whose coefficients are the auto-correlation matrix coefficients, the system being solved by using a descending iterative method having an inherent stability of its operation, the results of the solution being used for updating the filter coefficients and the number of systems of linear equations to be solved being dependent on the normalized step size; (d) repeating the steps (b) and (c) a required number of times.
2. A method as defined in claim 1 wherein the step of determining auto-correlation matrix coefficients comprises calculating the auto-correlation matrix coefficients recursively.
3. A method as defined in claim 1, wherein the step of defining a normalized step size comprises setting the normalized step size not equal to unity.
4. A method as defined in claim 1, wherein the step of defining a normalized step size comprises setting the normalized step size substantially less than unity.
5. A method as defined in claim 1, wherein the step of defining a normalized step size comprises setting the normalized step size less than about 0.7.
6. A method as defined in claim 3 , wherein the step of solving at least one system of linear equations comprises solving N systems of linear equations, with N being a projection order.
7. A method as defined in claim 1, wherein the step of defining a normalized step size comprises setting the normalized step size close to unity.
8. A method as defined in claim 1, wherein the step of defining a normalized step size comprises setting the normalized step size equal to unity.
9. A method as defined in claim 1, wherein the step of defining a normalized step size comprises setting the normalized step size in a range from about 0.9 to 1.0.
10. A method as defined in claim 1, wherein the step of defining a normalized step size comprises setting the normalized step size in a range from about 0.7 to 1.0.
11. A method as defined in claim 7, wherein the step of solving at least one system of linear equations comprises solving one system of linear equations only.
12. A method as defined in claim 1, wherein the step of solving the system of linear equations by a descending iterative method comprises solving the system by using a steepest descent method.
13. A method as defined in claim 1, wherein the step of solving the system of linear equations by a descending iterative method comprises solving the system by using a conjugate gradient method.
14. A method as defined in claim 1, wherein the step of solving the system of linear equations by a descending iterative method comprises solving the system by using a Newton's method.
15. A method as defined in claim 1, wherein the step of solving the system of linear equations by a descending iterative method comprises solving the system by using a PARTAN method.
16. A method as defined in claim 1, wherein the step of solving the system of linear equations by a descending iterative method comprises solving the system by using a quasi-Newton's method.
17. A method as defined in claim 1, wherein the steps are performed by operating with real value numbers .
18. A method as defined in claim 1, wherein the steps are performed by operating with complex value numbers .
19. A method as defined in claim 1, the method being used in an application selected from the group consisting of echo cancellation, noise cancellation, channel equalization and system identification.
20. A method as defined in claim 1, wherein the step of solving at least one system of linear equations comprises determining projection coefficients, the projection coefficients being the coefficients of an inverse auto-correlation matrix.
21. A method as defined in claim 20, wherein the step of determining auto-correlation matrix coefficients comprises calculating the auto-correlation matrix coefficients recursively.
22. A method as defined in claim 20, wherein the step of defining a normalized step size comprises setting the normalized step size not equal to unity.
23. A method as defined in claim 20, wherein the step of defining a normalized step size comprises setting the normalized step size substantially less than unity.
24. A method as defined in claim 20, wherein the step of defining a normalized step size comprises setting the normalized step size less than about 0.7.
25. A method as defined in claim 22, wherein the step of solving at least one system of linear equations comprises solving N systems of linear equations, with N being a projection order.
26. A method as defined in claim 25, wherein the step of solving N systems of linear equations comprises solving N systems of linear equations having decrementing orders .
27. A method as defined in claim 20, wherein the step of defining a normalized step size comprises setting the normalized step size close to unity.
28. A method as defined in claim 20, wherein the step of defining a normalized step size comprises setting the normalized step size equal to unity.
29. A method as defined in claim 20, wherein the step of defining a normalized step size comprises setting the normalized step size in a range from about 0.9 to 1.0.
30. A method as defined in claim 20, wherein the step of defining a normalized step size comprises setting the normalized step size in a range from about 0.7 to 1.0.
31. A method as defined in claim 27, wherein the step of solving at least one system of linear equations comprises solving one system of linear equations only.
32. A method as defined in claim 31, wherein determining the projection coefficients comprises calculating coefficients of a first column of the inverse auto-correlation matrix coefficients only.

33. A method as defined in claim 20, wherein the step of solving the system of linear equations by a descending iterative method comprises solving the system by using a steepest descent method.
34. A method as defined in claim 20, wherein the step of solving the system of linear equations by a descending iterative method comprises solving the system by using a conjugate gradient method.
35. A method as defined in claim 20, wherein the step of solving the system of linear equations by a descending iterative method comprises solving the system by using a Newton's method.
36. A method as defined in claim 20, wherein the step of solving the system of linear equations by a descending iterative method comprises solving the system by using a PARTAN method.
37. A method as defined in claim 20, wherein the step of solving the system of linear equations by a descending iterative method comprises solving the system by using a quasi-Newton's method.
38. A method as defined in claim 20, wherein the steps are performed by operating with real value numbers.
39. A method as defined in claim 20, wherein the steps are performed by operating with complex value numbers.
40. A method as defined in claim 20, the method being used in an application selected from the group consisting of echo cancellation, noise cancellation, channel equalization and system identification.
41. An adaptive filter, comprising: a filter characterized by adaptive filter coefficients; means for updating the filter coefficients, including means for setting a normalized step size, the updating means comprising: a correlator for determining auto-correlation matrix coefficients from a reference input signal, and a calculator for solving at least one system of linear equations whose coefficients are the auto-correlation matrix coefficients, the system being solved by using a descending iterative method having an inherent stability of its operation, the results of the solution being used for updating the filter coefficients and the number of systems of linear equations to be solved being dependent on the normalized step size.
42. The adaptive filter as defined in claim 41, wherein the correlator is a recursive correlator.
43. The adaptive filter as defined in claim 41, wherein the normalized step size is not equal to unity.
44. The adaptive filter as defined in claim 41, wherein the normalized step size is substantially less than unity.
45. The adaptive filter as defined in claim 41, wherein the normalized step size is less than about 0.7.
46. The adaptive filter as defined in claim 43, wherein the calculator includes means for solving N systems of linear equations, with N being a projection order.
47. The adaptive filter as defined in claim 41, wherein the normalized step size is close to unity.
48. The adaptive filter as defined in claim 41, wherein the normalized step size is equal to unity.
49. The adaptive filter as defined in claim 41, wherein the normalized step size is within a range from about 0.9 to 1.0.
50. The adaptive filter as defined in claim 41, wherein the normalized step size is within a range from about 0.7 to 1.0.
51. The adaptive filter as defined in claim 47, wherein the calculator provides solution of one system of linear equations only.
52. The adaptive filter as defined in claim 41, wherein the calculator is a calculator providing solution of the system of linear equations according to a steepest descent method.
53. The adaptive filter as defined in claim 41, wherein the calculator is a calculator providing solution of the system of linear equations according to a conjugate gradient method.
54. The adaptive filter as defined in claim 41, wherein the calculator is a calculator providing solution of the system of linear equations according to a Newton's method.
55. The adaptive filter as defined in claim 41, wherein the calculator is a calculator providing solution of the system of linear equations according to a PARTAN method.
56. The adaptive filter as defined in claim 41, wherein the calculator is a calculator providing solution of the system of linear equations according to a quasi-Newton's method.
57. The adaptive filter as defined in claim 41 capable of operating with real-valued numbers.
58. The adaptive filter as defined in claim 41 capable of operating with complex-valued numbers.
59. The adaptive filter as defined in claim 41 for use in an application selected from the group consisting of echo cancellation, noise cancellation, channel equalization and system identification.
60. The adaptive filter as defined in claim 41, wherein the calculator further comprises means for determining projection coefficients, the projection coefficients being the coefficients of an inverse auto-correlation matrix.
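Since the projection coefficients of claim 60 are the coefficients of the inverse auto-correlation matrix, they can be obtained column by column, one linear system per column of the identity; the sketch below uses a direct solver as a stand-in for the claimed iterative method:

    import numpy as np

    def projection_coefficients(R):
        # Column k of inv(R) solves R p_k = e_k; N such solves give all
        # projection coefficients without an explicit matrix inversion.
        N = R.shape[0]
        I = np.eye(N)
        return np.column_stack([np.linalg.solve(R, I[:, k]) for k in range(N)])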
61. The adaptive filter as defined in claim 60, wherein the correlator is a recursive correlator.
62. The adaptive filter as defined in claim 60, wherein the normalized step size is not equal to unity.
63. The adaptive filter as defined in claim 60, wherein the normalized step size is substantially less than unity.
64. The adaptive filter as defined in claim 60, wherein the normalized step size is less than about 0.7.
65. The adaptive filter as defined in claim 62, wherein the calculator is capable of solving N systems of linear equations, with N being a projection order.
66. The adaptive filter as defined in claim 65, wherein the calculator is capable of solving N systems of linear equations having decrementing orders.
67. The adaptive filter as defined in claim 60, wherein the normalized step size is close to unity.
68. The adaptive filter as defined in claim 60, wherein the normalized step size is equal to unity.
69. The adaptive filter as defined in claim 60, wherein the normalized step size is within a range from about 0.9 to 1.0.
70. The adaptive filter as defined in claim 60, wherein the normalized step size is within a range from about 0.7 to 1.0.
71. The adaptive filter as defined in claim 67, wherein the calculator is suitable for solving one system of linear equations only.
72. The adaptive filter as defined in claim 71, wherein the means for determining projection coefficients provides calculation of coefficients of a first column of the inverse auto-correlation matrix only.
73. The adaptive filter as defined in claim 60, wherein the calculator is a calculator providing solution of the system of linear equations according to a steepest descent method.
74. The adaptive filter as defined in claim 60, wherein the calculator is a calculator providing solution of the system of linear equations according to a conjugate gradient method.
75. The adaptive filter as defined in claim 60, wherein the calculator is a calculator providing solution of the system of linear equations according to a Newton's method.
76. The adaptive filter as defined in claim 60, wherein the calculator is a calculator providing solution of the system of linear equations according to a PARTAN method.
77. The adaptive filter as defined in claim 60, wherein the calculator is a calculator providing solution of the system according to a quasi-Newton's method.
78. The adaptive filter as defined in claim 60 capable of operating with real-valued numbers.
79. The adaptive filter as defined in claim 60 capable of operating with complex-valued numbers.
80. The adaptive filter as defined in claim 60 for use in an application selected from the group consisting of echo cancellation, noise cancellation, channel equalization and system identification.
EP99973514A 1998-12-22 1999-11-10 Stable adaptive filter and method Withdrawn EP1057259A1 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US218428 1998-12-22
US09/218,428 US6754340B1 (en) 1998-12-22 1998-12-22 Stable adaptive filter and method
US09/356,041 US6788785B1 (en) 1998-12-22 1999-07-16 Stable adaptive filter and method
US356041 1999-07-16
PCT/CA1999/001068 WO2000038319A1 (en) 1998-12-22 1999-11-10 Stable adaptive filter and method

Publications (1)

Publication Number Publication Date
EP1057259A1 (en) 2000-12-06

Family

ID=26912898

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99973514A Withdrawn EP1057259A1 (en) 1998-12-22 1999-11-10 Stable adaptive filter and method

Country Status (4)

Country Link
EP (1) EP1057259A1 (en)
JP (1) JP2002533970A (en)
CA (1) CA2318929A1 (en)
WO (1) WO2000038319A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2371191B (en) * 2001-01-11 2005-06-15 Mitel Corp Double-talk and path change detection using a matrix of correlation coefficients
DE10329055B4 (en) * 2003-06-27 2005-10-13 Infineon Technologies Ag Method and apparatus for echo cancellation
US7436880B2 (en) 2004-08-17 2008-10-14 National Research Council Of Canada Adaptive filtering using fast affine projection adaptation
US9679260B2 (en) * 2014-03-20 2017-06-13 Huawei Technologies Co., Ltd. System and method for adaptive filter
CN106209023B (en) * 2016-07-28 2018-06-26 苏州大学 Non-negative adaptive filter method based on data reusing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995012926A1 (en) * 1993-11-05 1995-05-11 Ntt Mobile Communications Network Inc. Replica producing adaptive demodulating method and demodulator using the same
US5568411A (en) * 1994-07-25 1996-10-22 National Semiconductor Corporation Method and apparatus for using polarity-coincidence correlators in LMS adaptive filters
JP2685031B2 (en) * 1995-06-30 1997-12-03 日本電気株式会社 Noise cancellation method and noise cancellation device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0038319A1 *

Also Published As

Publication number Publication date
WO2000038319A1 (en) 2000-06-29
JP2002533970A (en) 2002-10-08
CA2318929A1 (en) 2000-06-29

Similar Documents

Publication Publication Date Title
US6788785B1 (en) Stable adaptive filter and method
Enzner et al. Frequency-domain adaptive Kalman filter for acoustic echo control in hands-free telephones
Comminiello et al. Nonlinear acoustic echo cancellation based on sparse functional link representations
US7171436B2 (en) Partitioned block frequency domain adaptive filter
Gil-Cacho et al. Nonlinear acoustic echo cancellation based on a sliding-window leaky kernel affine projection algorithm
Nascimento et al. Adaptive filters
EP0809893A1 (en) Echo canceller having kalman filter for optimal adaptation
Huang et al. Practically efficient nonlinear acoustic echo cancellers using cascaded block RLS and FLMS adaptive filters
EP1782533A1 (en) Adaptive filtering using fast affine projection adaptation
Schrammen et al. Efficient nonlinear acoustic echo cancellation by dual-stage multi-channel Kalman filtering
Van Vaerenbergh et al. A split kernel adaptive filtering architecture for nonlinear acoustic echo cancellation
EP1314247B1 (en) Partitioned block frequency domain adaptive filter
WO2000038319A1 (en) Stable adaptive filter and method
Mayyas et al. A variable step-size partial-update normalized least mean square algorithm for second-order adaptive volterra filters
CN113873090B (en) Robust estimation affine projection spline self-adaptive echo cancellation method
Ramdane et al. Partial update simplified fast transversal filter algorithms for acoustic echo cancellation
Panda et al. A VSS sparseness controlled algorithm for feedback suppression in hearing aids
JP4663630B2 (en) Multi-channel system identification device
Okhassov et al. Cost-Effective Proportionate Affine Projection Algorithm with Variable Parameters for Acoustic Feedback Cancellation
Rodríguez et al. Convex Combination of FXECAP–FXECLMS Algorithms for Active Noise Control
Gay Affine projection algorithms
Burra et al. Comparison of convergence performance for LMS and NLMS adaptive algorithms in stereophonic channels
Wahbi et al. Enhancing the quality of voice communications by acoustic noise cancellation (ANC) using a low cost adaptive algorithm based Fast Fourier Transform (FFT) and circular convolution
Schuldt et al. Low-complexity adaptive filtering implementation for acoustic echo cancellation
JP3303898B2 (en) Adaptive transfer function estimation method and estimation device using the same

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

17P Request for examination filed

Effective date: 20001229

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: NORTEL NETWORKS LIMITED

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20060601