WO2009050434A2 - Equation solving - Google Patents

Equation solving

Info

Publication number
WO2009050434A2
WO2009050434A2 PCT/GB2008/003413
Authority
WO
WIPO (PCT)
Prior art keywords
equations
equation
estimate
linear equations
vector
Prior art date
Application number
PCT/GB2008/003413
Other languages
French (fr)
Other versions
WO2009050434A3 (en)
Inventor
Yuriy Zakharov
Timothy Conrad Tozer
George Page White
Jie Liu
Original Assignee
The University Of York
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The University Of York filed Critical The University Of York
Publication of WO2009050434A2 publication Critical patent/WO2009050434A2/en
Publication of WO2009050434A3 publication Critical patent/WO2009050434A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F17/12Simultaneous equations, e.g. systems of linear equations

Definitions

  • the present invention relates to systems and methods for solving systems of linear equations. More particularly, but not exclusively, the invention relates to methods for solving systems of linear equations so as to determine a vector providing a solution to a recursive least squares problem.
  • the invention has particular, but not exclusive applicability in determining filter coefficients of an adaptive digital filter.
  • iterative methods produce an approximate solution of the desired accuracy by yielding a sequence of solutions which converge to the exact solution as the number of iterations tends to infinity.
  • Adaptive algorithms are of considerable importance in many areas of engineering. They are used in communications for channel estimation, interference suppression, antenna array beamforming, MIMO detection and many other applications. In radar, adaptive algorithms are used for antenna beamforming. In acoustics, adaptive algorithms are used for acoustic echo cancellation, microphone array beamforming, active noise control, and sonar signal processing.
  • Known adaptive algorithms include the least mean squares (LMS) and recursive least squares (RLS) algorithms.
  • LMS least mean squares
  • RLS recursive least squares
  • the LMS algorithm is known to have a slow convergence speed. However, despite this drawback, the LMS algorithm is commonly used in practice. This is due to its implementation simplicity and stability, in particular for fixed-point implementation on FPGA platforms.
  • the complexity of the LMS algorithm is O(N) operations per sample (or time instant or iteration), i.e., the complexity is linear in the number of parameters N , where N is the filter length.
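For context, a single LMS update costs O(N) multiply-accumulate operations for a length-N filter. The sketch below is illustrative and not part of the application; the step-size name `mu` and the sign convention for the error are conventional choices, not taken from the document.

```python
import numpy as np

def lms_step(h, x, z, mu):
    # One LMS update: O(N) operations for a length-N filter h.
    # mu is the step size; it must be small relative to the input
    # power for the recursion to remain stable.
    e = z - x @ h          # a priori error against the desired sample z
    return h + mu * e * x, e
```

Each sample therefore touches every coefficient once, which is the linear-in-N cost referred to above.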
  • a method for obtaining an estimate of a solution to a first system of linear equations comprising: obtaining a second system of linear equations, obtaining an estimate of a solution to said second system of linear equations, determining differences between said first and second systems of linear equations, and determining an estimate of a solution to said first system of linear equations based upon said differences and said estimate of said solution to said second system of linear equations.
  • Determining the estimate of the solution to the first system of linear equations may comprise combining the estimate of the solution to the second system of linear equations and the estimate of the solution to the system of equations based upon the differences. Such combining may comprise addition.
  • the first system of linear equations may relate to a first time point and the second system of linear equations may relate to a second earlier time point.
  • Each of the first and second systems of linear equations may have the form:
  • R is a coefficient matrix of the respective system of equations
  • h is a vector of the unknown variables of the respective system of equations
  • is a vector containing the values of the right hand side of each equation of the respective system of equations
  • the second system of linear equations may relate to a time point i-1 and have a form
  • the system of equations based upon the differences may have a form: where: h ⁇ (i -1) is an estimate of the solution of the system of equations relating to time point i-1;
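Assuming both systems take the form R h = ⁇ set out above, the overall procedure can be sketched as follows. This is an illustrative Python rendering with our own variable names; `solve` stands in for whichever method is applied to the difference system.

```python
import numpy as np

def update_estimate(R_new, beta_new, R_old, beta_old, h_old, r_old, solve):
    # r_old is the residual of the old estimate: beta_old - R_old @ h_old.
    dR = R_new - R_old              # differences between the two systems
    dbeta = beta_new - beta_old
    # Right-hand side of the difference system; algebraically this equals
    # beta_new - R_new @ h_old, the residual of h_old in the new system.
    beta0 = r_old + dbeta - dR @ h_old
    dh = solve(R_new, beta0)        # estimate of the correction
    return h_old + dh               # combining "may comprise addition"
```

With an exact inner solver the combined estimate is exactly the solution of the new system; with an approximate solver the previous solution provides a warm start.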
  • the invention further provides a method of estimating filter coefficients for an adaptive filter.
  • the method comprises generating first and second systems of linear equations, and obtaining an estimate of a solution to the first system of linear equations using a method as described above.
  • the estimate of the solution to the first system of linear equations may be an estimate of the filter coefficients.
  • Such adaptive digital filters are used in a wide range of applications including acoustic echo cancellation and various telecommunications applications.
  • the vector h(i) may represent a time-varying estimate of the acoustic impulse response, while R(i) may be an autocorrelation matrix of a signal received from a transmitter.
  • ⁇ (i) may be a cross-correlation vector between the signal received from the transmitter and a signal provided by a local microphone.
  • the method for estimating filter coefficients may be used for solving such problems as equalisation, channel estimation, multiuser detection, and others.
  • the vector h(i) may represent the equaliser impulse response
  • R(i) may be the autocorrelation matrix of the received signal at time instant i, i.e., a signal at the output of a communication channel to be equalised.
  • ⁇ (i) may be a cross-correlation vector between the received signal and a desired (pilot or test) signal.
  • a method for generating an estimate of a solution of N linear equations in N unknown variables comprising: initialising a vector indicating a first estimate of said solution, recursively modifying an update value until a predetermined criterion is satisfied, and updating said vector based upon said update value.
  • Processing in this way means that an appropriate value of the update value is selected before an estimate of a solution to the first system of linear equations is updated. Such a method improves convergence.
  • the system of linear equations may have a form:
  • R is a coefficient matrix of the respective system of equations
  • h is a vector of the unknown variables of the system of equations
  • is a vector containing the value of the right hand side of each equation of the system of equations.
  • the method may comprise determining the element of the residual vector having the greatest magnitude.
  • the predetermined criterion may be based upon said element having the greatest magnitude.
  • the predetermined criterion may be based upon a relationship between said element having the greatest magnitude and a value of an element of the matrix R.
  • the method may comprise determining an index p of the element ⁇ r,p having the greatest magnitude and the element of the matrix R may be determined based upon the index.
  • the index of the element having the greatest magnitude may be the pth element of the residual vector and the element of the matrix R may be the (p , p)th element R p,p of the matrix R.
  • the predetermined criterion may be for an update value d.
  • the method may further comprise updating the residual vector based upon the update value.
  • the method may further comprise carrying out a plurality of iterations of processing, each iteration of processing being based upon a value of the residual vector having the greatest magnitude following the preceding iteration.
  • the method may further comprise initialising the update value to a power of two.
  • Modifying the update value may comprise dividing the update value by a predetermined value.
  • the predetermined value may be two.
  • the division may be implemented as a bit shift operation.
  • modifying the update value ensures that the update value remains within predetermined limits.
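In a fixed-point implementation the bullet points above reduce to shift operations. An illustrative fragment (the 16-bit width is an assumption, not taken from the document):

```python
# Step size d kept as a power of two so that halving is a right shift.
M_b = 16                 # assumed number of bits per solution element
d = 1 << (M_b - 1)       # initialise the update value to a power of two
d >>= 1                  # divide by the predetermined value (two) as a bit shift
```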
  • FIGS. 1 and 2 are schematic illustrations of operation of an embodiment of the invention
  • Figure 3 is a flow chart illustrating operation of a method for solving systems of linear equations in accordance with one embodiment of the invention
  • Figures 4 and 5 are flow charts illustrating the solution of an exponential recursive least squares problem using the method of Figure 3;
  • Figure 6 is a flow chart illustrating the solution of a sliding window recursive least squares problem using the method of Figure 3;
  • FIG. 7 is a schematic illustration of a Field Programmable Gate Array (FPGA) in communication with a general purpose computer;
  • FPGA Field Programmable Gate Array
  • Figure 8 is a schematic illustration of components of the field programmable gate array of Figure 7;
  • Figure 9 is a schematic illustration showing a correlation module of the field programmable gate array of Figure 8 in further detail
  • Figure 10 is a schematic illustration showing the equation solver of the field programmable gate array of figure 8 in further detail;
  • Figures 11 to 18 are graphs showing performance of an embodiment of the invention.
  • Figure 19 is a schematic illustration of an echo cancellation apparatus.
  • a method for solving a system of linear equations is now described.
  • a system of linear equations can be expressed in the form:
  • R is a coefficient matrix of the system of equations; h is a vector of the unknown variables; and ⁇ is a vector containing the value of the right hand side of each equation.
  • equation (1) can be rewritten as:
  • i is an index of a particular system of equations, occurring at time i.
  • a further system of equations is defined by equation (5), where the system of equations of equation (5) exists one time step before the system of equations of equation (4).
  • Using an appropriate method (and such a method is described in further detail below), the system of equations of equation (5) can be solved to determine an approximate solution h ⁇ (i-1). Accordingly, a residual vector ⁇ r (i-1) indicating the error in the approximate solution h ⁇ (i-1) can be determined according to equation (6):
  • ⁇ r (i-1) is a vector representing an error in the approximate solution h ⁇ (i-1).
  • Equation (15) can be rewritten as:
  • Equation (16) can be rewritten as:
  • ⁇ o (i) can be computed from data which is already known. Specifically, the residual vector ⁇ r from time i-1, the vector ⁇ (i), the matrix ⁇ R(i), and the vector h ⁇ (i-1) are all known in the manner described above. Accordingly, the system of equations (18) can be processed to obtain an estimate for ⁇ h ⁇ (i) as described in further detail below. An approximation for the system of equations of equation (4) can then be computed according to equation (19).
  • embodiments of the invention allow a system of equations of the form of equation (18) to be formed so as to allow more efficient solution of the system of equations of equation (4). More specifically, solutions obtained to a previous system of equations are used to obtain a solution to a further system of equations. This can be seen in Figure 1. Specifically, the system of equations of equation (5) 1, is processed together with the system of equations of equation (4) 2 to provide an approximate solution 3 to the system of equations of equation (4) 2.
  • the system of equations of equation (4) 2 is processed together with the system of equations of equation (5) 1 so as to determine a plurality of differences 4 as represented by equations (7), (8) and (9).
  • An approximate solution 5 to the system of equations of equation (5) 1 is determined together with a residual vector.
  • the differences 4, the approximate solution 5 and the residual vector are processed in the manner described above so as to generate the approximate solution 3 to the system of equations of equation (4) 2.
  • the approximate solution to the system of equations i-1 1 is generated based upon differences between the system of equations i-1 and a further system of equations existing at time i-2 6 together with an approximate solution 7 to the system of equations existing at time i-2.
  • the approximate solution 7 to the system of equations existing at time i-2 is itself created based upon differences between equations existing at a time before time i-2 and the system of equations at time i-2 together with an approximate solution to that previous system of equations.
  • the index (i) is omitted.
  • the method determines an approximation ⁇ h ⁇ for the vector ⁇ h.
  • the described method uses the coefficient matrix R as well as a residual vector ⁇ r .
  • Each element of the approximate solution vector ⁇ h ⁇ is represented by M b bits. That is, all elements of the approximate solution vector ⁇ h ⁇ lie in an interval [-H, +H].
  • the algorithm carries out a predetermined number of iterations denoted by N u . The following notation should be noted in the following description:
  • Rp is used to denote the p th column of the coefficient matrix R;
  • R P,P denotes the (p,p) th element of the coefficient matrix R;
  • ⁇ r , p denotes the p th element of the vector ⁇ r ;
  • sign( ⁇ r;P ) denotes the sign of the p th element of the vector ⁇ r ;
  • ⁇ h ⁇ ,p denotes the p th element of the approximate solution vector ⁇ h ⁇ .
  • at step S1 the variables utilised by the algorithm are appropriately initialised as follows.
  • the residual vector ⁇ r is initialised so as to be equal to the vector ⁇ 0 :
  • the parameter d indicating a step size is set to be equal to H, which is a maximum value taken by elements of the approximate solution vector ⁇ h ⁇ .
  • the counter variable q determines a number of iterations through which the algorithm has progressed.
  • a counter variable m is a counter variable counting through bits of values of the approximate solution vector ⁇ h ⁇ .
  • an index p representing an element of the vector ⁇ r having the greatest magnitude is determined.
  • the value of the parameter d is divided by 2
  • the value of the counter variable m is incremented.
  • a check is carried out to determine whether the current value of the counter variable m is greater than the predefined number of bits M b used to represent each element of the vector ⁇ h ⁇ . If this is the case, processing stops at step S6. Otherwise, processing continues at step S7.
  • a check is carried out based upon equation (27):
  • the p th element of the vector ⁇ r is the element of the vector ⁇ r having the greatest magnitude given the processing step S2.
  • at step S7, if it is determined that the check of equation (27) is satisfied, processing returns to step S3. If however the check is not satisfied, processing passes from step S7 to step S8.
  • at step S8 the p th element of the approximate solution vector ⁇ h ⁇ is updated according to equation (28):
  • the value of pth element of the approximate solution vector ⁇ h ⁇ is updated by the addition of the value of d to which the sign of the p th element of the vector ⁇ r is applied. Having updated the p th element of the approximate solution vector ⁇ h ⁇ , at step S9 the residual vector ⁇ r is updated in accordance with equation (29):
  • the residual vector is updated by subtracting the p th column vector of the coefficient matrix R, multiplied by d and by the sign of the p th element of the residual vector ⁇ r .
  • steps S3 to S9 is arranged to appropriately update a currently processed value of the approximate solution vector ⁇ h ⁇ .
  • This processing involves first determining an appropriate value d which is to be used to update a currently processed value of the approximate solution vector ⁇ h ⁇ (loop of steps S3 to S7).
  • the loop of steps S3 to S7 is configured such that the value of d is repeatedly divided by two until it is sufficiently small that, when multiplied by the corresponding element of the matrix R, the corresponding element of the residual vector ⁇ r has greater magnitude. It will therefore be appreciated that when the determined value of d is used to update the currently processed element of the residual vector ⁇ r (the update corresponding to an update made to the approximate solution vector ⁇ h ⁇ ), the currently processed element of the residual vector ⁇ r will decrease in magnitude, indicating that the value for the currently processed element of the approximate solution vector ⁇ h ⁇ has improved.
  • Processing passes from step S9 to step S10 where the counter variable q indicating the number of iterations is incremented.
  • at step S11 a check is carried out to determine whether the number of iterations N u has been carried out. If this is the case processing ends at step S12. Otherwise, processing continues at step S13 where the variable p is updated so as to indicate the element of the residual vector ⁇ r having the greatest magnitude. Processing then returns to step S7 and continues in the manner described above.
  • H represents a value greater than or equal to the magnitude of the maximum value of the solution vector elements.
  • H is set according to equation (30).
  • t is any integer (i.e. positive, negative or zero)
  • the value of d can be updated without multiplication or division, simply by appropriate bit shift operations. This is particularly advantageous, because microprocessors can typically carry out bit shift operations far more efficiently than multiplication or division.
  • entries of the approximate solution vector ⁇ h ⁇ are processed for each bit of the approximate solution vector entries, starting with the most significant bit.
  • bit-wise processing is in fact achieved because immediately preceding each increment of m (step S4) a new value of d is created at step S3.
  • each increment of m will be accompanied by the value of d being divided by two, and given that d is used to update both ⁇ h ⁇ and ⁇ r , after an update of d the next most significant bit is then updated.
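Taken together, steps S1 to S13 can be sketched in software as follows. This is an illustrative floating-point rendering (a fixed-point implementation would realise the halving of d as a bit shift), and reading the check of equation (27) as |⁇ r,p | ≤ (d/2)·R p,p is our assumption from the surrounding description.

```python
import numpy as np

def solve_by_coordinate_updates(R, beta0, H=1.0, Mb=16, Nu=1000):
    # Sketch of the solver of Figure 3 for R @ dh = beta0.
    # H, Mb and Nu are the amplitude bound, bit width and iteration
    # count named in the text; the defaults here are illustrative.
    N = len(beta0)
    dh = np.zeros(N)
    r = np.asarray(beta0, dtype=float).copy()   # S1: residual starts at beta0
    d = H                                        # S1: step size starts at H
    m = 0
    for _ in range(Nu):                          # S10/S11: iteration count
        p = int(np.argmax(np.abs(r)))            # S2/S13: element of largest |r|
        # S3-S7: halve d until |r[p]| exceeds (d/2) * R[p, p]
        while abs(r[p]) <= (d / 2) * R[p, p]:
            d /= 2                               # a bit shift in fixed point
            m += 1
            if m > Mb:                           # S5/S6: all Mb bits processed
                return dh, r
        s = np.sign(r[p])
        dh[p] += s * d                           # S8: update solution, eq. (28)
        r -= s * d * R[:, p]                     # S9: update residual, eq. (29)
    return dh, r
```

With R symmetric positive definite, each accepted update reduces the magnitude of the largest residual element, which is the convergence behaviour the text describes.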
  • a regression vector in the form of a vector x(i), and a signal value, in the form of a desired scalar value z(i), are received.
  • a vector of values h(i) represents the coefficients of an adaptive digital filter. These coefficients should be such that the application of the filter to the values of the regression vector produces a good approximation of the scalar value. That is:
  • z ⁇ is a vector of all signal values received at time i and times before time i.
  • X(i) is an N x (i + 1) matrix comprised of a concatenation of vectors x(k)
  • equation (33) can be represented by equation (4):
  • a value of h for equation (32) can be found by solving a related system of linear equations.
  • the received signal vector z and regression matrix X become large, it becomes more difficult to find a vector h representing filter coefficients which allow the left and right hand sides of equation (32) to have substantially equal values.
  • the first method involves the use of a forgetting factor, such that data from relatively recent time points is given exponentially more weight.
  • a forgetting factor such that data from relatively recent time points is given exponentially more weight.
  • E exp (i) is the sum of squared errors between the signal values, z(k), and the response of an adaptive filter to the regression vectors x(k); h is a vector of filter coefficients; ⁇ is an NxN regularization matrix, which can take a number of forms.
  • the regularization matrix takes the form of a diagonal matrix containing small positive values;
  • x(k) is an N-length regression vector of signal parameters received at a time step indicated by k;
  • z(k) is a scalar representing a received scalar value at time k;
  • is a scalar value that lies between 0 and 1, and is preferably given by equation (38): where q is a positive scalar value.
  • the value given by equation (38) is used in equation (37) as the forgetting factor, such that the error between the filter response and the desired received signal value for relatively recent time steps contributes more to the total error E exp (i) as compared with less recent time steps.
  • rewriting equation (37) in the form of equation (4), given equations (39) and (40), is known from, for example, S. Haykin: "Adaptive Filter Theory", 2nd Edition, Prentice Hall, 1991, page 479.
  • equation (7) provides:
  • equation (8) provides:
  • a scalar value y(i) is defined representing the filter output at time i, according to equation (46):
  • Equation (42) provides:
  • An error e(i) is defined as a difference between an output of the filter y(i) and the received signal value z(i):
  • at step S14 appropriate initialisation is carried out.
  • Vectors h ⁇ (-1) and ⁇ r (-1) are initialised to zero and R(-1) is initialised to the regularization matrix ⁇ .
  • at step S15 a new signal value, z(i), and a new regression vector, x(i), are received.
  • the signal value and regression vector can be received in any appropriate form, for example transmitted over wired, or wireless, communication means, from direct user input or read from a readable medium.
  • ⁇ R(i) is determined by evaluating equation (41) using the values initialised at step S14 and received at step S15.
  • the matrix R(i) is obtained by adding ⁇ R(i) to R(i-1) according to the relationship set out above.
  • y(i) is calculated by multiplying the most recently received regression vector x(i) with the approximate solution from the previous iteration, h ⁇ (i -1), according to equation (46).
  • the vector ⁇ o (i) is calculated from e(i) and ⁇ r (i-1) according to equation (53):
  • There are other ways in which ⁇ o (i) could be calculated.
  • ⁇ o (i) could be calculated directly according to equation (17), without the need to calculate e(i).
  • However, calculating ⁇ o (i) in accordance with equation (53) offers a performance enhancement.
  • step S21 an approximate solution to a system of linear equations defined from the determined values of R(i) and ⁇ o (i) is obtained.
  • the system of equations is as set out in equation (18):
  • the system of linear equations is solved by using the method described above with reference to Figure 3, although alternative methods are used in some embodiments of the invention.
  • the solution obtained is denoted Ah ⁇ (i) .
  • step S22 the value of ⁇ r (i) is calculated according to equation (6). It should be noted that in the preferred embodiment, the value of ⁇ r (i) is calculated as part of the method described above with reference to Figure 3 at step S9. Therefore, in the preferred embodiment, step S22 need not be performed separately.
  • h ⁇ (i) represents an approximate solution to the exponentially weighted recursive least squares adaptive filtering problem at time i.
  • the problem has been solved by solving an alternate system of linear equations derived from the difference between the system of linear equations at time i and the system of linear equations at time i - 1.
  • at step S24 it is determined whether any further values, z(i+1) and x(i+1), are received. If there are more values, processing returns to step S15 in order to solve the system of linear equations in the sequence at time i+1. If there are no more values then the method terminates at step S25.
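One pass through steps S15 to S23 of Figure 4 can be sketched as below. This is illustrative: the sign convention e(i) = z(i) - y(i) and the reading of equation (53) as ⁇ o (i) = λ ⁇ r (i-1) + e(i)x(i) are assumptions consistent with the surrounding equations, not quotations.

```python
import numpy as np

def exp_rls_step(R_prev, h_prev, r_prev, x, z, lam, solver):
    # One iteration of the exponentially weighted scheme of Figure 4.
    # solver(R, beta0) must return (dh, r_new); the solver of Figure 3
    # fits here, but any linear solver that reports its residual works.
    R = lam * R_prev + np.outer(x, x)     # S16/S17: R(i) = lam*R(i-1) + x x^T
    y = x @ h_prev                         # S18: filter output, eq. (46)
    e = z - y                              # S19: error, eq. (52) (assumed sign)
    beta0 = lam * r_prev + e * x           # S20: eq. (53) (assumed form)
    dh, r = solver(R, beta0)               # S21: solve R(i) @ dh = beta0
    h = h_prev + dh                        # S23: eq. (19)
    return R, h, r
```

With an exact solver this recursion tracks the exponentially weighted least-squares solution; an approximate solver trades accuracy per step for complexity.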
  • the difference between a regression matrix X(i) and X(i-1) in this case would be small. This can be exploited in order to achieve an increase in efficiency in the method described above.
  • the matrix R(i) can be obtained at step S17 by copying the values of the upper left (N - 1) x (N - 1) portion of R(i-1) to the lower right portion of R(i) and generating values for the first row and column of R(i) according to:
  • This method requires far fewer calculations than the general method for solving the exponential weighted adaptive filtering problem and this presents a substantial increase in efficiency.
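When the regression vector has the shift structure x(i) = [x(i), x(i-1), ..., x(i-N+1)] T , the copy-and-border update can be sketched as below. The first-row recursion shown is a plausible reconstruction from the weighted correlations; the document's own expression is not reproduced here.

```python
import numpy as np

def shift_update_R(R_prev, x_hist, lam):
    # x_hist holds the scalar input samples newest-first:
    # x_hist[k] = x(i - k), with at least N entries.
    N = R_prev.shape[0]
    R = np.empty_like(R_prev)
    # Copy the upper-left (N-1)x(N-1) block to the lower-right corner:
    # under the shift structure R[j, k](i) = R[j-1, k-1](i-1) for j, k >= 1.
    R[1:, 1:] = R_prev[:-1, :-1]
    # Border: R[0, k](i) = lam * R[0, k](i-1) + x(i) * x(i-k).
    R[0, :] = lam * R_prev[0, :] + x_hist[0] * x_hist[:N]
    R[:, 0] = R[0, :]               # R is symmetric
    return R
```

Only the first row and column require fresh multiplications, so the cost per sample drops from O(N^2) to O(N), matching the efficiency gain described above.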
  • at step S30 appropriate initialisation is carried out. Such initialisation is equivalent to that carried out at step S14 of Figure 4. Similarly, at step S31 a vector x(i) and a scalar value z(i) are received.
  • Step S32 the matrix R is appropriately updated.
  • Steps S33 to S40 correspond respectively to steps S18 to S25 of Figure 4 and are therefore not described in further detail here.
  • the error generated by the vector h is given by: where is the error in the system;
  • M w is a scalar representing the length of a sliding window.
  • Equation (60) gives the difference between the vector ⁇ at time i and at time i-1:
  • a scalar value y(i) provides the filter output at time step i, and is given by equation (62):
  • a scalar value y M (i) provides the filter output, at time step i, when the regression vector from time step i-M w is input to the filter:
  • An error signal at time (i-M w ) can also be defined:
  • M w is initialised to an appropriate positive integer value indicating the size of the sliding window. The variable values are initialised to zero, and R is initialised to the regularization matrix ⁇ ; for time steps -M w to -1, the vector x(i) is initialised to zero.
  • a new signal value, z(i) , and a new regression vector, x(i) are received.
  • ⁇ R(i) is determined by evaluating equation (59), using the values initialised at step S41 and received at step S42.
  • R(i) is calculated from ⁇ R(i) according to methods described above.
  • y(i) is calculated by multiplying the most recently received regression vector x(i) with the approximate solution from the previous iteration, h ⁇ (i -1) , according to equation (62).
  • y M (i) is calculated by multiplying a regression vector received at time i-M w by the filter approximation for time i-1, according to equation (63).
  • y(i) is used to calculate e(i), according to equation (52), while at step S48 e M (i) is calculated according to equation (68).
  • ⁇ o (i) is determined according to equation (69):
  • system of linear equations is solved by using the method described above with reference to Figure 3.
  • at step S51 the value of ⁇ r (i) is calculated according to equation (20g) given equation (20h).
  • step S52 h ⁇ (i) is calculated according to equation (19).
  • h ⁇ (i) represents an approximate value for h in equation (56).
  • the approximation of h ⁇ (i) is such as to minimize the error of equation (56).
  • h ⁇ (i) has been determined by solving an alternate system of linear equations derived from the difference between the system of linear equations at time i and the system of linear equations at time i-1.
  • at step S53 it is determined whether any further values, z(i+1) and x(i+1), are received. If there are more values, processing returns to step S42. A further set of linear equations is then solved. If there are no more values at step S53 then the method terminates at step S54.
  • R(i) can be generated at step S44 by copying the top left (N - 1) x (N - 1) portion of R(i-1) to the lower right portion of R(i) and generating values for the first row and column of R(i) according to:
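A single pass of the sliding-window loop of Figure 6 (steps S42 to S52) can be sketched as follows. The forms given for equations (59) and (69) in the comments are reconstructions consistent with the surrounding description, and the sign conventions for the errors are assumptions.

```python
import numpy as np

def sliding_window_step(R_prev, h_prev, r_prev, x_new, x_old, z_new, z_old, solver):
    # x_old, z_old are the regression vector and sample leaving the
    # window (time i - M_w); solver(R, beta0) must return (dh, r_new).
    # Reconstructed eq. (59): the entering sample's outer product is
    # added and the leaving sample's outer product removed.
    R = R_prev + np.outer(x_new, x_new) - np.outer(x_old, x_old)
    e_new = z_new - x_new @ h_prev          # eq. (52), assumed sign
    e_old = z_old - x_old @ h_prev          # eq. (68), assumed sign
    # Reconstructed eq. (69): old residual plus entering/leaving errors.
    beta0 = r_prev + e_new * x_new - e_old * x_old
    dh, r = solver(R, beta0)                # solve R(i) @ dh = beta0
    return R, h_prev + dh, r
```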
  • the described implemented embodiment determines a value for a vector h in the least-squares problem described above.
  • the invention is implemented by appropriate programming of a field programmable gate array (FPGA) 8.
  • the FPGA 8 is in communication with a host computer 9.
  • the host computer 9 can take any convenient form and can suitably be a conventional desktop computer.
  • the FPGA 8 was based upon a Xilinx Virtex-II Pro Development System, which is now described in general terms.
  • the development system provides an evaluation board comprising an XC2VP30 FPGA (FF896 package, speed grade 7), which in turn features 13696 logic slices, each containing two D-type flip-flops and two four-input look-up tables (LUTs).
  • the FPGA also has 136 block RAM components, each providing 18K bits of storage.
  • VHDL was used to specify an algorithm in accordance with the method outlined above for the solution of a system of linear equations.
  • Appropriate VHDL code was downloaded to the FPGA using the Xilinx ISE 8.1i software package.
  • the implementation operated from a single 100MHz clock with a FPGA DCM (Digital Clock Manager) and Global Clock Distribution Network being used to ensure a uniform delay between the system clock source and each logic slice.
  • FPGA DCM Digital Clock Manager
  • the FPGA 8 is now described in further detail with reference to Figures 8, 9 and 10.
  • Figure 8 shows components of the FPGA 8.
  • the clock distribution modules are not shown.
  • the FPGA comprises a host interface controller 10, which controls communication with the host computer 9 using the RS232 protocol.
  • a Master State Machine 12 co-ordinates processing.
  • a correlation module 13 is responsible for updating the matrix R(i) and for the calculation of the vector ⁇ o (i) in the manner described above.
  • An equation solver 14 is configured to solve a system of linear equations in the manner described above.
  • Three dual-port RAM modules 15, 16, 17 are provided.
  • R RAM 15 is used to store the matrix R
  • ⁇ RAM 16 is used to store the vector ⁇
  • h ⁇ RAM 17 is used to store the approximation h ⁇ of the solution vector h.
  • Both the correlation module 13 and the equation solver 14 access each of the RAM modules 15, 16, 17. Accordingly, each RAM module 15, 16, 17 is provided with an associated multiplexer 18, 19, 20.
  • the host interface controller 10 receives and stores values of vector x(i) and signal z(i) provided by the host computer 9. Values of x(i) and z(i) are stored respectively in an x RAM 21 and a z register 22, both of which are internal to the host interface controller 10.
  • the correlation module 13 reads values of x(i) from the x RAM 21, values of signal z(i) from the z register 22, values of R(i-1) from R RAM 15 and values of ⁇ r (i -1) from ⁇ RAM 16.
  • the correlation module 13 calculates values of R(i) and ⁇ o (i) in the manner described above, and respectively stores the calculated values in R RAM 15 and ⁇ RAM 16.
  • the equation solver 14 reads values of R(i) and ⁇ o (i) from the R RAM 15 and ⁇ RAM 16, and determines the values of the adaptive filter represented by ⁇ h ⁇ (i) according to equation (18) using the method described above.
  • the equation solver 14 provides a vector h ⁇ (i) to the h ⁇ RAM 17, being an approximate solution to the exponential recursive least squares problem of equation (37).
  • the ⁇ RAM 16 contains values of residual vector ⁇ r (i) .
  • the host interface controller 10 reads the values of h ⁇ (i) from the h ⁇ RAM 17 and provides the values of h ⁇ (i) to the host computer 9.
  • a plurality of iterations of processing by the FPGA 8 receive values of x and z and output values of h ⁇ , which represent an approximate solution to the exponential recursive least squares problem of equation (37).
  • the correlation module 13 is described in greater detail with reference to Figure 9.
  • the correlation module comprises an x RAM reader 23, an h ⁇ RAM reader 24, an R RAM reader 25 and a ⁇ RAM reader 26, which respectively control reading of data from the x RAM 21, the h ⁇ RAM 17, the R RAM 15 and the ⁇ RAM 16.
  • Each multiplier 29, 30 has two inputs and one output, such that a value is produced at the output by multiplying together two values written to the inputs.
  • Two writers, R RAM writer 31 and ⁇ RAM writer 32 respectively write values of R(i) to R RAM 15 and values of ⁇ o (i) to ⁇ RAM 16.
  • a correlation module state machine 33 co-ordinates operation of the components of the correlation module 13.
  • the x RAM reader 23 provides addresses to the x RAM 21 which indicate the location of the values of the vector x(i) within the x RAM 21. Addressed values are provided from the x RAM 21 to the MUY1 writer 27.
  • the hα RAM reader 24 provides addresses to the hα RAM 17, which indicate the location of the approximate solution vector hα(i-1) within the hα RAM 17. Addressed values are provided from the hα RAM 17 to the MUY2 writer 28.
  • the R RAM reader 25 provides addresses to the R RAM 15 indicating the locations of the values of R(i-1) within the R RAM 15. Addressed values are provided from the R RAM 15 to the R RAM writer 31.
  • the MUY1 writer 27 writes the received values of x(i) to both inputs of the multiplier MUY1 29 in such a way that the values of the vector product x(i)xT(i) are produced at the output of the multiplier MUY1 29.
  • the R RAM writer 31 reads the values of x(i)xT(i) from the output of the multiplier MUY1 29.
  • the R RAM writer 31 combines R(i-1) with the values of x(i)xT(i) in accordance with equation (41), such that values for R(i) are determined, then writes the values of R(i) to the R RAM 15.
  • in order to compute the error signal e(i), the MUY2 writer 28 provides the values of hα(i-1) received from the hα RAM 17 and values of x(i) received from the x RAM 21 to the inputs of the multiplier MUY2 30.
  • the β RAM writer 32 reads the values from the output of the multiplier MUY2 30 and accumulates them, such that a value y(i) is obtained which represents a filter output in accordance with equation (46): y(i) = xT(i)hα(i-1).
  • the β RAM writer 32 reads the value of z(i) from the host interface controller 10 and calculates the value of e(i) according to equation (52): e(i) = z(i) - y(i).
  • the x RAM reader 23 provides addresses of the values of x(i) to the x RAM 21 and addressed values are provided to the MUY2 writer 28.
  • the β RAM reader 26 provides addresses of values of βr(i-1) to the β RAM 16, and addressed values are provided to the MUY2 writer 28.
  • the value of e(i) is read from the β RAM writer 32 by the MUY2 writer 28.
  • the MUY2 writer 28 writes the values of x(i) and the value of e(i) to the multiplier MUY2 30 such that a vector e(i)x(i) is produced at the output of the multiplier MUY2 30.
  • the β RAM writer 32 reads the values of e(i)x(i) from the output of MUY2 30 and reads the values of βr(i-1) from the β RAM 16.
  • the β RAM writer 32 calculates βo(i) using the values of e(i)x(i) and βr(i-1), according to equation (53), and writes the values of βo(i) to the β RAM 16.
  • the equation solver 14 is comprised of a core state machine 35, which co-ordinates the operation of the components of the equation solver 14 and contains values representing d, m, p, q, Nu and Mb as described above.
  • a RAM reader 36 controls reading of data from the hα RAM 17, the R RAM 15 and the β RAM 16.
  • a comparator 37 compares provided values.
  • a β updater 38 and an h updater 39 respectively control updating of values stored in the β RAM 16 and the hα RAM 17.
  • the RAM reader 36 provides addresses of the values of βr,p(i) to the β RAM 16 and addresses of the values of Rp,p(i) to the R RAM 15.
  • the comparator 37 uses the addresses provided by the RAM reader 36 to read the values of βr,p(i) from the β RAM 16 and the values of Rp,p(i) from the R RAM 15.
  • the core state machine 35 operates in accordance with Figure 3 described above and updates the values of d and m according to steps S3 and S4.
  • the core state machine then performs a check in accordance with step S5 of Figure 3 to determine whether m is greater than a value Mb. If m is greater than Mb then operation of the equation solver 14 terminates for the currently processed system of equations.
  • at step S7, the comparator 37 compares the values of Rp,p(i) and βr,p(i) according to the inequality of equation (27).
  • the comparison result is output to the core state machine 35.
  • the core state machine 35 uses the comparison result to determine whether to proceed to the next stage of processing. If the result of the inequality is true then the core state machine 35 repeats the operations according to steps S3 to S7; otherwise the core state machine 35 allows processing to proceed.
  • the RAM reader 36 provides addresses of the values of vector R(p)(i) to the R RAM 15 and addresses of the values of vector βr(i) to the β RAM 16.
  • the β updater 38 uses the addresses written by the RAM reader 36 to read the values of βr(i) from the β RAM 16.
  • the h updater 39 reads the value of hα,p(i) from the hα RAM 17.
  • the β updater 38 updates the values of βr(i) in accordance with equation (29).
  • Updated values of βr(i) are written to the β RAM 16 at step S9.
  • the β updater 38 determines the location of the new maximum value of βr(i) and outputs the appropriate value of p to the core state machine 35.
  • the core state machine 35 increments the number of updates q, and compares q to the maximum number of updates Nu. If q is equal to or greater than Nu then operation of the equation solver 14 terminates for the current processing operation. Otherwise, the core state machine 35 updates the value of p in accordance with step S13, and the equation solver 14 performs a further iteration.
  • the hα RAM 17 contains values of the vector hα(i-1).
  • the values of vector hα(i-1) are not removed from the hα RAM 17 before processing by the equation solver 14 begins. This is equivalent to adding the values of hα(i-1) to the values of Δhα(i) determined by the equation solver.
  • the values in the hα RAM 17 therefore represent hα(i).
  • h(i) is a vector representing the N-length true impulse response to be approximated.
  • hα(i) is a vector representing the approximate solution of an RLS adaptive filtering algorithm provided by an embodiment of the invention.
  • All N values of the impulse response filter h(i) are modelled as independent Gaussian random numbers with zero mean and unit variance.
  • the received signal is modelled as z(i) = xT(i)h(i) + n(i)
  • n(i) is additive zero-mean Gaussian random noise with variance σ²
  • the vector x(i) contains autoregressive correlated random numbers given by
  • Figure 11 shows that the algorithms have similar performance.
  • Figure 12 shows that, as the maximum number of updates Nu increases, the algorithm described above approaches the performance of the known RLS algorithm.
  • Figure 16 shows the tracking performance of the algorithm described above in a time- varying environment.
  • The pth element hp(i) of the true impulse response h(i) varies in time according to where φp are scalar values representing independent random numbers uniformly distributed on ; hp(0) are scalar values representing independent Gaussian random amplitudes of unit variance; and
  • F is a scalar value representing the variation rate.
  • Figure 17 shows the mean squared error performance of the algorithm described above in an environment typical for an acoustic echo cancellation scenario.
  • Figure 18 shows the mean squared error performance of a fixed-point implementation of the algorithm described above against a floating-point implementation of the same algorithm and an implementation of the known recursive least squares algorithm, also implemented in floating point. Floating-point is considered as having infinite precision.
  • for elements of the matrix R(i), the vector βr, and other values, Nb bits are used.
  • a single iteration of the described algorithm is approximately equivalent to 2N additions.
  • Q = 16 bits
  • the chip area of one multiplier is approximately equivalent to that of 16 adders.
  • the solution of the normal equations according to the method of Figure 3, described above has a complexity significantly lower when compared to the whole complexity of the adaptive algorithm.
  • steady-state mean squared error performance approaches that of the known recursive least squares algorithm.
  • Mb = 16
  • An echo cancellation method may be used, for example, in a handsfree telephone where a first speech signal is received and output through a loudspeaker. A second speech signal is input to a microphone together with part of the first speech signal output through the loudspeaker.
  • the response of the system is defined by the manner in which an echo of the first signal is generated and included in the signal input to the microphone.
  • the response is an acoustic response determined by, for example, the geometry of the environment in which the microphone and loudspeaker are situated.
  • a signal 51 travels along an input line and is output through a digital to analog converter (not shown), to a loudspeaker 52.
  • a further signal such as a human voice (not shown) is passed to an input of a microphone 53.
  • the microphone signal is converted to a digital signal by means of an analog to digital converter (not shown).
  • ideally, an output signal 54 of the microphone 53 contains only the human voice signal, and none of the signal output by the loudspeaker 52.
  • some of the signal output by the loudspeaker 52 will be received by the microphone 53 such that the output signal 54 comprises a combination of the human voice signal and part of the loudspeaker output signal (referred to as "the echo signal"). It is desirable to remove the echo signal present in the output signal 54 of the microphone 53.
  • the echo cancellation apparatus comprises a filter 55, which takes as input the signal 51 and which is configured to provide an estimate 56 of the echo signal. This estimate 56 is subtracted from the microphone output signal 54 by a subtractor 57. Therefore, if the echo is accurately estimated, an output 58 of the subtractor will be equal to the human voice signal input to the microphone 53.
  • in order to accurately model the echo signal, the filter 55 must be provided with a set of filter coefficients which can be applied to the signal 51. The set of filter coefficients should model the acoustic response of the environment in which the loudspeaker 52 and microphone 53 are situated.
  • the echo cancellation apparatus comprises a filter coefficient setting circuit 59, which produces suitable filter coefficients.
  • the filter coefficient setting circuit 59 takes as inputs the signal 51 which is input to the loudspeaker and the signal 54 which is output from the microphone 53.
  • the echo cancellation apparatus of Figure 19 processes discrete samples in turn.
  • the apparatus operates at a predetermined sampling frequency (Fs), which determines how often samples are taken.
  • Fs: a predetermined sampling frequency
  • values of the signal 51 input to the loudspeaker 52 (sf), the signal 54 output from the microphone 53 (sm), the output 56 of the filter 55 (y), and the output 58 of the subtractor 57 (Δ) are considered.
  • the purpose of the apparatus of Figure 19 is to remove the echo signal from the signal 54 output from the microphone 53. There will be a time delay between a signal output by the loudspeaker 52 being input to the microphone 53 to form the echo signal. Furthermore, a signal output from the loudspeaker 52 at a single instant will not be input to the microphone 53 at a single instant; instead, the signal output by the loudspeaker 52 will be input to the microphone 53 at a range of different times. The delay between a signal being output by the loudspeaker 52 and substantially all of that signal which is to form the echo signal being received by the microphone 53 is estimated to correspond to the time taken to obtain N samples. N will typically equate to a relatively large number of samples.
  • y(t) is the output 56 of the filter 55 at sample time t
  • ht is a set of N filter coefficients determined by the filter coefficient setting circuit 59 in the manner described below;
  • sf(t - τ) is the signal 51 input to the loudspeaker 52 at sample time (t - τ);
  • Equation (76) can conveniently be carried out by a Finite Impulse Response (FIR) filter, the implementation of which will be readily understood by one of ordinary skill in the art.
  • FIR: Finite Impulse Response
  • s m (t) is the signal 54 output from the microphone 53 at sample time t
  • y(t) is the signal 56 output from the filter 55 at sample time t
  • Δ(t) is the signal 58 output from the subtractor 57 at sample time t.
  • Δ(t) should be close to the human voice signal input to the microphone 53 at time t.
  • a value Ms represents the number of samples to be processed in a particular step, and coefficients of the filter 55 are updated after every Ms samples as described below.
  • a counter variable is incremented with each sample to determine when Ms samples have been processed.
  • a filter coefficient setting circuit can be provided to generate coefficients for filters in devices other than the echo cancellation apparatus as described with reference to Figure 19.
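The per-sample signal path described above can be sketched in software. In the sketch below, the echo path g, the near-end speech samples and the loudspeaker signal are illustrative assumptions rather than values from the description, and the filter coefficients are assumed to have already converged to the true acoustic response, so the echo is cancelled exactly:

```python
# A minimal sketch of the Figure 19 signal path; all numeric values are assumed.
N = 4
g = [0.5, 0.25, 0.1, 0.05]      # "true" acoustic response (assumed)
h = list(g)                     # pretend the coefficient setting circuit 59
                                # has already converged to the true response
s_f = [1.0, -0.5, 0.25, 0.0, 0.75, -1.0]   # signal 51 fed to the loudspeaker
history = [0.0] * N             # the last N loudspeaker samples

for t, sample in enumerate(s_f):
    history = [sample] + history[:-1]
    echo = sum(g[k] * history[k] for k in range(N))   # echo picked up by the mic
    voice = 0.1 * t                                   # near-end speech (assumed)
    s_m = voice + echo                                # microphone output 54
    y = sum(h[k] * history[k] for k in range(N))      # filter output 56, eq. (76)
    delta = s_m - y                                   # subtractor output 58
    assert abs(delta - voice) < 1e-12                 # echo fully cancelled
```

In practice the coefficients h would not equal the true response g; the filter coefficient setting circuit 59 exists precisely to drive h towards g from the observed signals.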

Abstract

A method for obtaining an estimate of a solution to a first system of linear equations. The method comprises obtaining a second system of linear equations, obtaining an estimate of a solution to said second system of linear equations, determining differences between said first and second systems of linear equations, and determining an estimate of a solution to said first system of linear equations based upon said differences and said estimate of said solution to said second system of linear equations.

Description

EQUATION SOLVING
The present invention relates to systems and methods for solving systems of linear equations. More particularly, but not exclusively, the invention relates to methods for solving systems of linear equations so as to determine a vector providing a solution to a recursive least squares problem. The invention has particular, but not exclusive applicability in determining filter coefficients of an adaptive digital filter.
Systems of linear equations occur frequently in many branches of science and engineering. Effective methods are needed for solving such equations. It is desirable that systems of linear equations are solved as quickly as possible.
A system of linear equations typically comprises N equations in N unknown variables. For example, where N=3 an example system of equations is set out below:
15x + 5y - 2z = 15
5x + 11y + 4z = 47
-2x + 4y + 9z = 51
In this case, it is necessary to find values of x, y and z which satisfy all three equations. Many methods exist for finding such values of x, y and z and in this case it can be seen that x=1, y=2 and z=5 is the unique solution to the system of equations.
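The stated solution can be verified by substituting it back into the three equations:

```python
# Substitute the solution x=1, y=2, z=5 into each equation of the example system
x, y, z = 1, 2, 5
assert 15*x + 5*y - 2*z == 15
assert 5*x + 11*y + 4*z == 47
assert -2*x + 4*y + 9*z == 51
```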
In general terms, existing methods for solving systems of linear equations can be categorised as either direct methods, or iterative methods. Direct methods attempt to produce an exact solution by using a finite number of operations. Direct methods, however, suffer from a problem in that the number of operations required is often large, which makes the method slow. Furthermore, some implementations of such methods are sensitive to truncation errors. Iterative methods solve a system of linear equations by repeated refinements of an initial approximation until a result is obtained which is acceptably close to the accurate solution. Each iteration is based on the result of the previous iteration, and the result of the final iteration represents a closer approximation to the accurate solution than the result of any previous iteration.
Generally, iterative methods produce an approximate solution of the desired accuracy by yielding a sequence of solutions which converge to the exact solution as the number of iterations tends to infinity.
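As a concrete illustration of the iterative approach, a standard Jacobi iteration (used here only as an example of an iterative method; it is not the method of the invention) refines an initial approximation to the example system above, converging because the coefficient matrix is diagonally dominant:

```python
# Jacobi iteration on the example system; each sweep refines the previous
# approximation using only values from the sweep before it.
A = [[15, 5, -2], [5, 11, 4], [-2, 4, 9]]
b = [15, 47, 51]
x = [0.0, 0.0, 0.0]                       # initial approximation
for _ in range(200):
    x = [(b[i] - sum(A[i][j] * x[j] for j in range(3) if j != i)) / A[i][i]
         for i in range(3)]
# x is now very close to the exact solution [1, 2, 5]
assert max(abs(x[i] - e) for i, e in enumerate([1.0, 2.0, 5.0])) < 1e-9
```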
It will be appreciated that systems of linear equations are often solved using a computer apparatus. Often, equations will be solved by executing appropriate program code on a microprocessor.
When considering the implementation of an algorithm in hardware, its efficiency is a prime concern. Many algorithms for the solution of linear equations involve computationally expensive division and/or multiplication operations. These operations should, where possible be avoided, although this is often not possible with known methods for solving linear equations.
Many systems of linear equations have sparse solutions. In such cases the number of iterations required to solve the system of equations should be relatively low. However, this does not occur with some known methods.
Adaptive algorithms are of considerable importance in many areas of engineering. They are used in communications for channel estimation, interference suppression, antenna array beamforming, MIMO detection and many other applications. In radar, adaptive algorithms are used for antenna beamforming. In acoustics, adaptive algorithms are used for acoustic echo cancellation, microphone array beamforming, active noise control, and sonar signal processing. Known adaptive algorithms include the least mean squares (LMS) and recursive least squares (RLS) algorithms. The LMS algorithm is known to have a slow convergence speed. However, despite this drawback, in practice, the LMS algorithm is commonly used. This is due to its implementation simplicity and stability, in particular for fixed-point implementation on FPGA platforms. The complexity of the LMS algorithm is O(N) operations per sample (or time instant or iteration), i.e., the complexity is linear in the number of parameters N, where N is the filter length.
There are many applications where fast-convergence adaptive algorithms would be very useful. However, in practice such algorithms are not used because of high implementation complexity and/or instability. The RLS algorithm is known to possess the fastest convergence speed amongst adaptive algorithms. However, the classical RLS algorithm has a high complexity that is prohibitive for many applications; the complexity of the RLS algorithm is O(N²) per time instant. Various attempts have been made to improve the performance of the RLS algorithm, although none of these attempts has provided a widely applicable workable solution.
In summary there is a need for a more efficient algorithm, which can be used to solve systems of linear equations. There is also a need for efficient adaptive algorithms, and in particular an efficient implementation of the RLS algorithm.
It is an object of embodiments of the present invention to obviate or mitigate at least one of the problems identified above.
According to a first aspect of the present invention, there is provided, a method for obtaining an estimate of a solution to a first system of linear equations, the method comprising: obtaining a second system of linear equations, obtaining an estimate of a solution to said second system of linear equations, determining differences between said first and second systems of linear equations, and determining an estimate of a solution to said first system of linear equations based upon said differences and said estimate of said solution to said second system of linear equations.
The method may further comprise determining a system of equations based upon the differences, and determining an estimate of the solution to the system of equations based upon the differences. Determining the estimate of the solution to the first system of linear equations may be based upon the estimate of the solution of the system of equations based upon the differences.
Determining the estimate of the solution to the first system of linear equations may comprise combining the estimate of the solution to the second system of linear equations and the estimate of the solution to the system of equations based upon the differences. Such combining may comprise addition.
The first system of linear equations may relate to a first time point and the second system of linear equations may relate to a second earlier time point.
Each of the first and second systems of linear equations may have the form:
Rh = β
where:
R is a coefficient matrix of the respective system of equations; h is a vector of the unknown variables of the respective system of equations; and β is a vector containing the values of the right hand side of each equation of the respective system of equations. The first system of linear equations may relate to a time point i and have a form R(i)h(i) = β(i). The second system of linear equations may relate to a time point i-1 and have a form:
R(i-1)h(i-1) = β(i-1)
The system of equations based upon the differences may have a form:
R(i)Δh(i) = βo(i)
where:
βo(i) = βr(i-1) + Δβ(i) - ΔR(i)hα(i-1);
hα(i-1) is an estimate of the solution of the system of equations relating to time point i-1; and
βr(i-1) = β(i-1) - R(i-1)hα(i-1) is a residual vector indicating the error in the estimate hα(i-1).
An estimate of the solution of the first system of linear equations relating to time point i may be given by:
hα(i) = hα(i-1) + Δhα(i)
The invention further provides a method of estimating filter coefficients for an adaptive filter. The method comprises generating first and second systems of linear equations, and obtaining an estimate of a solution to the first system of linear equations using a method as described above. The estimate of the solution to the first system of linear equations may be an estimate of the filter coefficients. Such adaptive digital filters are used in a wide range of applications including acoustic echo cancellation and various telecommunications applications. For example, in an acoustic echo cancellation application, the vector h(i) may represent a time-varying estimate of the acoustic impulse response, while R(i) may be an autocorrelation matrix of a signal received from a transmitter. β(i) may be a cross-correlation vector between the signal received from the transmitter and a signal provided by a local microphone.
In telecommunications, the method for estimating filter coefficients may be used for solving such problems as equalisation, channel estimation, multiuser detection, and others. In equalisation applications, the vector h(i) may represent the equaliser impulse response, while R(i) may be the autocorrelation matrix of the received signal at time instant i, i.e., a signal at the output of a communication channel to be equalised. β(i) may be a cross-correlation vector between the received signal and a desired (pilot or test) signal.
According to a second aspect of the present invention, there is provided a method for generating an estimate of a solution of N linear equations in N unknown variables, the method comprising: initialising a vector indicating a first estimate of said solution, recursively modifying an update value until a predetermined criterion is satisfied, and updating said vector based upon said update value.
Processing in this way means that an appropriate value of the update value is selected before an estimate of a solution to the first system of linear equations is updated. Such a method improves convergence.
The system of linear equations may have a form:
Rh = β
where:
R is a coefficient matrix of the respective system of equations; h is a vector of the unknown variables of the system of equations; and β is a vector containing the value of the right hand side of each equation of the system of equations.
The method may comprise initialising a residual vector βr = β. The method may comprise determining the element of the residual vector having the greatest magnitude. The predetermined criterion may be based upon said element having the greatest magnitude. The predetermined criterion may be based upon a relationship between said element having the greatest magnitude and a value of an element of the matrix R.
The method may comprise determining an index p of the element βr,p having the greatest magnitude, and the element of the matrix R may be determined based upon the index. The element having the greatest magnitude may be the pth element of the residual vector and the element of the matrix R may be the (p,p)th element Rp,p of the matrix R.
The predetermined criterion may be, for an update value d:
|βr,p| > (d/2)Rp,p
The method may further comprise updating the residual vector based upon the update value. The method may further comprise carrying out a plurality of iterations of processing, each iteration of processing being based upon a value of the residual vector having the greatest magnitude following the preceding iteration. The method may further comprise initialising the update value to a power of two. Modifying the update value may comprise dividing the update value by a predetermined value. The predetermined value may be two. The division may be implemented as a bit shift operation. In general terms, modifying the update value makes sure that the update value remains within predetermined limits. It will be appreciated that the features described in the context of the first aspect of the present invention may be applied in the context of the second aspect of the present invention.
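A software sketch of such a scheme is given below. It is an illustrative coordinate-descent interpretation of the second aspect, not the exact hardware procedure of the embodiment: the update value d starts as a power of two and is repeatedly halved (realisable as a bit shift in fixed-point hardware) until an update to the element of the residual vector having the greatest magnitude is justified; the parameters H, Mb and Nu and the stopping test are assumptions made for the sketch:

```python
def solve_power_of_two(R, beta, H=8.0, Mb=16, Nu=1000):
    """Coordinate-descent sketch: update one unknown at a time by +/- d,
    where d is always a power of two, so applying d in hardware needs no
    multiplier.  H, Mb and Nu are illustrative parameters (assumed)."""
    N = len(beta)
    h = [0.0] * N
    r = list(beta)          # residual vector, initialised to beta
    d, m = H, 0
    for _ in range(Nu):
        p = max(range(N), key=lambda k: abs(r[k]))   # leading residual element
        # halve d until an update of size d is justified
        while abs(r[p]) <= (d / 2) * R[p][p]:
            m += 1
            d /= 2          # realisable as a bit shift in fixed point
            if m > Mb:
                return h, r
        s = 1.0 if r[p] > 0 else -1.0
        h[p] += s * d
        for k in range(N):  # keep the residual consistent with h
            r[k] -= s * d * R[k][p]
    return h, r

h, r = solve_power_of_two([[15, 5, -2], [5, 11, 4], [-2, 4, 9]], [15, 47, 51])
assert all(abs(h[k] - e) < 0.01 for k, e in enumerate([1.0, 2.0, 5.0]))
```

Because every update adds or subtracts a power of two, each iteration costs only additions, which is the property that makes this style of solver attractive for FPGA implementation.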
It will be appreciated that all aspects of the present invention can be implemented by ways of methods, apparatus and computer programs. Such computer programs may be carried on appropriate carrier media.
Embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings in which:
Figures 1 and 2 are schematic illustrations of operation of an embodiment of the invention;
Figure 3 is a flow chart illustrating operation of a method for solving systems of linear equations in accordance with one embodiment of the invention;
Figures 4 and 5 are flow charts illustrating the solution of an exponential recursive least squares problem using the method of Figure 3;
Figure 6 is a flow chart illustrating the solution of a sliding window recursive least squares problem using the method of Figure 3;
Figure 7 is a schematic illustration of a Field Programmable Gate Array (FPGA) in communication with a general purpose computer;
Figure 8 is a schematic illustration of components of the field programmable gate array of Figure 7;
Figure 9 is a schematic illustration showing a correlation module of the field programmable gate array of Figure 8 in further detail;
Figure 10 is a schematic illustration showing the equation solver of the field programmable gate array of Figure 8 in further detail;
Figures 11 to 18 are graphs showing performance of an embodiment of the invention; and
Figure 19 is a schematic illustration of an echo cancellation apparatus.
A method for solving a system of linear equations is now described. A system of linear equations can be expressed in the form:
Rh = β (1)
Where:
R is a coefficient matrix of the system of equations; h is a vector of the unknown variables; and β is a vector containing the value of the right hand side of each equation.
For example, the system of equations (2):
15x + 5y - 2z = 15
5x + 11y + 4z = 47
-2x + 4y + 9z = 51 (2)
can be expressed in the form of equation (1) where:
R = [15 5 -2; 5 11 4; -2 4 9], h = [x y z]T, β = [15 47 51]T (3)
To solve the system of equations, it is necessary to find values for x, y, and z of h which satisfy each of the three equations.
In some circumstances a plurality of systems of linear equations need to be solved. In such a case, the equation (1) can be rewritten as:
R(i)h(i) = β(i) (4)
Where: i is an index of a particular system of equations, occurring at time i.
It will be appreciated that a further system of equations is defined by equation (5), where the system of equations of equation (5) exists one time step before the system of equations of equation (4).
R(i-1)h(i-1) = β(i-1) (5)
Using an appropriate method (and such a method is described in further detail below), the system of equations of equation (5) can be solved to determine an approximate solution hα(i-1). Accordingly, a residual vector βr(i-1) indicating the error in the approximate solution hα(i-1) can be determined according to equation (6):
βr(i-1) = β(i-1) - R(i-1)hα(i-1) (6)
Where: βr(i-1) is a vector representing an error in the approximate solution hα(i-1).
Differences between the system of equations of equation (5) and the system of equations of equation (4) can be defined by equations (7), (8), and (9).
ΔR(i) = R(i) - R(i-1) (7)
Δβ(i) = β(i) - β(i-1) (8)
Δh(i) = h(i) - hα(i-1) (9)
Substituting equation (9) into equation (4) gives:
R(i)(hα(i-1) + Δh(i)) = β(i) (10)
Expanding equation (10) gives:
R(i)hα(i-1) + R(i)Δh(i) = β(i) (11)
Rearranging equation (11) gives:
R(i)Δh(i) = β(i) - R(i)hα(i-1) (12)
Substituting equation (8) into equation (12) gives:
R(i)Δh(i) = β(i-1) + Δβ(i) - R(i)hα(i-1) (13)
Substituting equation (7) into equation (13) gives:
R(i)Δh(i) = β(i-1) + Δβ(i) - (R(i-1) + ΔR(i))hα(i-1) (14)
Expanding equation (14) gives:
R(i)Δh(i) = β(i-1) - R(i-1)hα(i-1) + Δβ(i) - ΔR(i)hα(i-1) (15)
Recalling equation (6):
βr(i-1) = β(i-1) - R(i-1)hα(i-1) (6)
Equation (15) can be rewritten as:
R(i)Δh(i) = βr(i-1) + Δβ(i) - ΔR(i)hα(i-1) (16)
Defining βo(i) as in equation (17):
βo(i) = βr(i-1) + Δβ(i) - ΔR(i)hα(i-1) (17)
Equation (16) can be rewritten as:
R(i)Δh(i) = βo(i) (18)
From equation (17) it can be seen that βo(i) can be computed from data which is already known. Specifically, the residual vector βr from time i-1, the vector Δβ(i), the matrix ΔR(i), and the vector hα(i-1) are all known in the manner described above. Accordingly, the system of equations (18) can be processed to obtain an estimate for Δhα(i) as described in further detail below. An approximation for the system of equations of equation (4) can then be computed according to equation (19).
hα(i) = hα(i-1) + Δhα(i) (19)
Given that solving the system of equations of equation (18) is more efficient than solving the system of equations of equation (4) it will be appreciated that determining an approximation for the solution of equations (18) and using this to determine an approximation for the solution of the system of equations of equation (4) does offer efficiency benefits. It should be noted that obtaining an approximation
Δhα(i)
by processing equation (18) provides a residual vector βΔ as follows:
βΔ(i) = βo(i) - R(i)Δhα(i) (20a)
Substituting equation (17) into equation (20a) gives:
βΔ(i) = βr(i-1) + Δβ(i) - ΔR(i)hα(i-1) - R(i)Δhα(i) (20b)
Substituting equation (6) into equation (20b) gives:
βΔ(i) = β(i-1) - R(i-1)hα(i-1) + Δβ(i) - ΔR(i)hα(i-1) - R(i)Δhα(i) (20c)
Factorising hα(i-1) in equation (20c) gives:
βΔ(i) = β(i-1) + Δβ(i) - (R(i-1) + ΔR(i))hα(i-1) - R(i)Δhα(i) (20d)
Substituting equations (7) and (8) into equation (20d) gives:
βΔ(i) = β(i) - R(i)hα(i-1) - R(i)Δhα(i) (20e)
Factorising R(i) in equation (20e) gives:
βΔ(i) = β(i) - R(i)(hα(i-1) + Δhα(i)) (20f)
Substituting equation (19) into equation (20f) gives:
βΔ(i) = β(i) - R(i)hα(i) (20g)
From equation (6) it can therefore be seen that equation (20g) gives:
βΔ(i) = βr(i)
That is, it can be seen that the residual vector for the system of equations (18) which are solved to provide Δhα(i) is the same as the residual vector for the approximate solution hα(i) of the system of equations of equation (4).
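The chain of substitutions above can be checked numerically. In the sketch below the 2x2 matrices and vectors are arbitrary illustrative values (any data would do); the check confirms that the residual βΔ(i) of the difference system equals β(i) - R(i)hα(i), i.e. the residual βr(i) of the original system:

```python
# Illustrative 2x2 data; all numeric values are arbitrary assumptions.
R_prev = [[4.0, 1.0], [1.0, 3.0]]     # R(i-1)
R_now  = [[4.2, 0.9], [0.9, 3.1]]     # R(i)
b_prev = [5.0, 4.0]                   # beta(i-1)
b_now  = [5.5, 3.8]                   # beta(i)
h_prev = [1.0, 1.0]                   # approximate solution h_a(i-1)
dh     = [0.3, -0.2]                  # some approximate correction dh_a(i)

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

# equation (6): residual of the previous approximate solution
r_prev = [b_prev[k] - matvec(R_prev, h_prev)[k] for k in range(2)]
# equations (7) and (8): differences between the two systems
dR = [[R_now[r][c] - R_prev[r][c] for c in range(2)] for r in range(2)]
db = [b_now[k] - b_prev[k] for k in range(2)]
# equation (17): right-hand side of the difference system
b0 = [r_prev[k] + db[k] - matvec(dR, h_prev)[k] for k in range(2)]
# equation (20a): residual of the difference system
lhs = [b0[k] - matvec(R_now, dh)[k] for k in range(2)]
# equations (19) and (6) at time i: residual of the combined solution
h_now = [h_prev[k] + dh[k] for k in range(2)]
rhs = [b_now[k] - matvec(R_now, h_now)[k] for k in range(2)]
assert all(abs(lhs[k] - rhs[k]) < 1e-12 for k in range(2))
```

The identity holds for any choice of correction dh, which is why an approximate solution of the difference system immediately yields the correct residual for the updated solution.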
Thus, from the preceding description it will be appreciated that embodiments of the invention allow a system of equations of the form of equation (18) to be formed so as to allow more efficient solution of the system of equations of equation (4). More specifically, solutions obtained to a previous system of equations are used to obtain a solution to a further system of equations. This can be seen in Figure 1. Specifically, the system of equations of equation (5) 1, is processed together with the system of equations of equation (4) 2 to provide an approximate solution 3 to the system of equations of equation (4) 2. Instead of directly finding an approximate solution of the system of equations of equation (4) 2, the system of equations of equation (4) 2 is processed together with the system of equations of equation (5) 1 so as to determine a plurality of differences 4 as represented by equations (7), (8) and (9). An approximate solution 5 to the system of equations of equation (5) 1 is determined together with a residual vector. The differences 4, the approximate solution 5 and the residual vector are processed in the manner described above so as to generate the approximate solution 3 to the system of equations of equation (4) 2.
The preceding description has been concerned with the solution of a system of equations existing at time i with reference to a previous system of equations existing at time i-1. It will be appreciated that such a process can be repeated for a plurality of times. This is shown in Figure 2. Here, it can be seen that an approximation 3 of a system of equations i 2 is generated based upon differences between a system of equations i-1 1 and the system of equations i 2 together with an approximate solution 5 to the system of equations i-1 1. Similarly, the approximate solution to the system of equations i-1 1 is generated based upon differences between the system of equations i-1 1 and a further system of equations existing at time i-2 6 together with an approximate solution 7 to the system of equations existing at time i-2. The approximate solution 7 to the system of equations existing at time i-2 is itself created based upon differences between equations existing at a time before time i-2 and the system of equations at time i-2 together with an approximate solution to that previous system of equations.
The preceding description has explained how a system of equations existing at time i can be solved with reference to data generated from determination of an approximate solution to a system of equations existing at time i-1. In the preceding description it was explained that a method was required to solve a system of equations such as that shown in equation (18) so as to provide an approximation for Δh(i) in the form Δhα(i). A method for determining the approximate solution to the system of equations of equation (18) is now described with reference to Figure 3.
For ease of notation, the system of equations of equation (18) is expressed in the following description as:
R Δh = β0
That is, the index (i) is omitted.
The method determines an approximation Δhα for the vector Δh. The described method uses the coefficient matrix R as well as a residual vector βr. Each element of the approximate solution vector Δhα is represented by Mb bits, and all elements of the approximate solution vector Δhα lie in an interval [-H, +H]. The algorithm carries out a predetermined number of iterations denoted by Nu. The following notation is used in the description below:
Rp is used to denote the pth column of the coefficient matrix R; Rp,p denotes the (p,p)th element of the coefficient matrix R; βr,p denotes the pth element of the vector βr; sign(βr,p) denotes the sign of the pth element of the vector βr; and Δhα,p denotes the pth element of the approximate solution vector Δhα.
Referring now to Figure 3, at step S1 the variables utilised by the algorithm are appropriately initialised as follows.
The residual vector βr is initialised so as to be equal to the vector β0:
βr = β0 (22)
All elements of the approximate solution vector Δhα are set to be zero:
Δhα = 0 (23)
The parameter d indicating a step size is set to be equal to H, which is a maximum value taken by elements of the approximate solution vector Δhα.
d = H (24)
Two counter variables are also initialised:
q = 0 (25)
m = 0 (26)
As described in further detail below, the counter variable q counts the number of iterations through which the algorithm has progressed, while the counter variable m counts through the bits of the values of the approximate solution vector Δhα. Referring back to Figure 3, having appropriately initialised the necessary variables at step S1, at step S2 an index p representing the element of the vector βr having the greatest magnitude is determined. At step S3 the value of the parameter d is divided by 2, while at step S4 the value of the counter variable m is incremented. At step S5 a check is carried out to determine whether the current value of the counter variable m is greater than the predefined number of bits Mb used to represent each element of the vector Δhα. If this is the case, processing stops at step S6. Otherwise, processing continues at step S7. Here, a check is carried out based upon equation (27):
|βr,p| ≤ (d/2) Rp,p (27)
It can be seen from equation (27) that a check is carried out to determine whether the magnitude of the pth element of the vector βr is less than or equal to the (p,p)th element of the coefficient matrix R multiplied by d/2. It should be recalled that the pth element of the vector βr is the element of the vector βr having the greatest magnitude, given the processing of step S2.
Referring again to Figure 3, if it is determined that the check of step S7 and equation (27) is satisfied, processing returns to step S3. If however the check is not satisfied, processing passes from step S7 to step S8.
At step S8 the pth element of the approximate solution vector Δhα is updated according to equation (28):

Δhα,p = Δhα,p + sign(βr,p) d (28)
That is, the value of the pth element of the approximate solution vector Δhα is updated by the addition of the value of d, to which the sign of the pth element of the vector βr is applied. Having updated the pth element of the approximate solution vector Δhα, at step S9 the residual vector βr is updated in accordance with equation (29):
βr = βr - sign(βr,p) d Rp (29)
That is, the residual vector is updated by subtracting the pth column vector of the coefficient matrix R, multiplied by d and by the sign taken from the pth element of the residual vector βr.
In general terms the processing of steps S3 to S9 is arranged to appropriately update a currently processed value of the approximate solution vector Δhα. This processing involves first determining an appropriate value d which is to be used to update a currently processed value of the approximate solution vector Δhα (loop of steps S3 to S7). The currently processed value of the approximate solution vector Δhα is then updated with the determined appropriate value (step S8), and the corresponding value of the residual vector βr is also updated (step S9). The processing of steps S3 to S7 is configured such that the value of d is repeatedly divided by two until it is sufficiently small that, when multiplied by the corresponding diagonal element of the matrix R, the corresponding element of the residual vector βr has greater magnitude. It will therefore be appreciated that when the determined value of d is used to update the currently processed element of the residual vector βr (the update corresponding to the update made to the approximate solution vector Δhα), the currently processed element of the residual vector βr will decrease in magnitude, indicating that the value of the currently processed element of the approximate solution vector Δhα has improved.
Processing passes from step S9 to step S10 where the counter variable q indicating the number of iterations is incremented. At step S11 a check is carried out to determine whether the number of iterations Nu has been carried out. If this is the case processing ends at step S12. Otherwise, processing continues at step S13 where the variable p is updated so as to indicate the element of the residual vector βr having the greatest magnitude. Processing then returns to step S7 and continues in the manner described above.
It will be appreciated that in this way a number of iterations Nu are carried out so as to determine values for the approximate solution vector Δhα.
It has been described that the value of H represents a value greater than or equal to the maximum magnitude of the solution vector elements. In setting H it is desirable to ensure that it is a power of two. That is, H is set according to equation (30).
H = 2^t (30)

where t is any integer (i.e. positive, negative or zero).
When H is chosen in accordance with equation (30), the value of d can be updated without multiplication or division, simply by appropriate bit shift operations. This is particularly advantageous, because microprocessors can typically carry out bit shift operations far more efficiently than multiplication or division.
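The loop of Figure 3 (steps S1 to S13) can be sketched in software as follows. This is an illustrative reconstruction, not the patented implementation: the name dcd_solve and the default parameter values are invented for the example, floating-point halving stands in for the bit-shift update of d, and the comparison of equation (27) is taken to halve d when the two sides are exactly equal.

```python
def dcd_solve(R, beta0, H=1.0, Mb=16, Nu=32):
    """Sketch of the solver of Figure 3 for R * dh = beta0 (equation (18)).

    The names H, Mb and Nu follow the text: solution elements lie in
    [-H, +H], each is resolved to Mb bits, and at most Nu updates are
    made.  Floating-point halving stands in for the bit shifts used
    when H is a power of two (equation (30)).
    """
    n = len(beta0)
    beta_r = list(beta0)                   # (22): residual starts as beta0
    dh = [0.0] * n                         # (23): solution starts at zero
    d, m = float(H), 0                     # (24): step size d = H
    p = max(range(n), key=lambda k: abs(beta_r[k]))      # step S2
    d, m = d / 2.0, m + 1                  # steps S3, S4 (first pass)
    for _ in range(Nu):                    # counter q, steps S10/S11
        # Steps S3-S7: halve d while check (27) holds.
        while abs(beta_r[p]) <= (d / 2.0) * R[p][p]:
            d, m = d / 2.0, m + 1
            if m > Mb:                     # step S5: all Mb bits resolved
                return dh, beta_r
        s = 1.0 if beta_r[p] >= 0.0 else -1.0            # sign(beta_r[p])
        dh[p] += s * d                     # step S8, equation (28)
        for k in range(n):                 # step S9, equation (29)
            beta_r[k] -= s * d * R[k][p]
        p = max(range(n), key=lambda k: abs(beta_r[k]))  # step S13
    return dh, beta_r
```

Because d is always H divided by a power of two, the multiplications by d in steps S8 and S9 reduce to bit shifts in a fixed-point implementation, which is the point of the structure sketched here.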
In the preceding discussion, entries of the approximate solution vector Δhα are processed for each bit of the approximate solution vector entries, starting with the most significant bit. It can be seen from the preceding discussion that at all steps the entire value of an element of Δhα is used for the update. Bit-wise processing is nevertheless achieved because immediately preceding each increment of m (step S4) a new value of d is created at step S3. Given that each increment of m is accompanied by the value of d being divided by two, and given that d is used to update both Δhα and βr, after an update of d the next most significant bit is then updated.

An application of the method described above is now presented. The method is used to determine filter coefficients which provide a filter having a minimum mean squared error between its output in response to a particular input and a desired output.
At each time i, a regression vector, in the form of a vector x(i), and a signal value, in the form of a desired scalar value z(i), are received. A vector of values h(i) represents the coefficients of an adaptive digital filter. These coefficients should be such that the application of the filter to the values of the regression vector produces a good approximation of the scalar value. That is:
XT(i) h(i) ≈ z(i) (32)
where i is an index of time; z(i) is a vector of all signal values received at time i and at times before time i.
X(i) is an N x (i + 1) matrix comprised of a concatenation of vectors x(k) where
0 ≤ k ≤ i; that is, X(i) = [x(0), x(1), ..., x(i)].
Multiplying both sides of equation (32) by X(i) gives:

X(i)XT(i) h(i) = X(i) z(i) (33)
It can then be seen that equation (33) can be represented by equation (4):
R(i) h(i) = β(i) (4)
where

R(i) = X(i)XT(i) and β(i) = X(i)z(i).
A value of h for equation (32) can be found by solving a related system of linear equations. However, as the received signal vector z and regression matrix X become large, it becomes more difficult to find a vector h representing filter coefficients which allow the left and right hand sides of equation (32) to have substantially equal values.
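As a concrete illustration of the preceding paragraphs, the sketch below forms R = XXT and β = Xz and shows that solving Rh = β recovers the filter coefficients when the system of equation (32) is consistent. The function name and the example data are invented for illustration only.

```python
def normal_equations(X, z):
    """Form R = X X^T and beta = X z, so that equation (33) reads
    R h = beta, i.e. the form of equation (4).  X is stored as a list
    of N rows, each of length i+1.
    """
    N, T = len(X), len(X[0])
    R = [[sum(X[a][t] * X[b][t] for t in range(T)) for b in range(N)]
         for a in range(N)]
    beta = [sum(X[a][t] * z[t] for t in range(T)) for a in range(N)]
    return R, beta
```

Solving the resulting N x N system (for example directly, for small N) then yields the least-squares filter coefficients h.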
Several methods can be applied to determine the vector h, such that regression vectors x(i) and scalars z(i) for relatively distant time points in the past are given less weight. Two such methods are now described.
The first method involves the use of a forgetting factor, such that data from relatively recent time points is given exponentially more weight. When such a forgetting factor is used, an indication of error provided by a filter defined by the vector h is given by equation (37):
Eexp(i) = λ^(i+1) hT Π h + Σ_{k=0}^{i} λ^(i-k) (z(k) - hT x(k))^2 (37)
where
Eexp(i) is the sum of squared errors between the signal values, z(k), and the response of an adaptive filter to the regression vectors x(k); h is a vector of filter coefficients; Π is an NxN regularization matrix, which can take a number of forms; in the preferred embodiment, the regularization matrix takes the form of a diagonal matrix containing small positive values; x(k) is an N-length regression vector of signal parameters received at a time step indicated by k; z(k) is a scalar representing a received scalar value at time k; and λ is a scalar value that lies between 0 and 1, and is preferably given by equation (38):
λ = 1 - 2^(-q) (38)
where q is a positive scalar value.
λ is used in equation (37) as the forgetting factor, such that the error between the filter response and the desired received signal value for relatively recent time steps contributes more to the total error Eexp(i) than for less recent time steps.
A vector h(i) that provides the minimum to the error Eexp(i) of equation (37) can again be expressed in the form of equation (4):
R(i) h(i) = β(i) (4)
where
R(i) = Σ_{k=0}^{i} λ^(i-k) x(k)xT(k) + λ^(i+1) Π (39)

β(i) = Σ_{k=0}^{i} λ^(i-k) z(k)x(k) (40)
The expression of equation (37) in the form of equation (4), given equations (39) and (40), is known from, for example, S. Haykin: "Adaptive Filter Theory", 2nd Edition, Prentice Hall, 1991, page 479.
Finding an approximate solution to the system of equations of equation (4) provides an estimate for the vector h(i) so as to minimise the error given by equation (37).
A difference between matrices R(i) and R(i-1) is given by equation (7):
ΔR(i) = R(i) - R(i-1) (7)
Where R(i) is defined by equation (39), equation (7) provides:

ΔR(i) = x(i)xT(i) - (1 - λ)R(i-1) (41)
A difference between vectors β(i) and β(i-1) is given by equation (8):

Δβ(i) = β(i) - β(i-1) (8)
Where β(i) is given by equation (40), equation (8) provides:

Δβ(i) = z(i)x(i) - (1 - λ)β(i-1) (42)
Multiplying both sides of equation (41) by hα(i-1) gives:

ΔR(i)hα(i-1) = x(i)xT(i)hα(i-1) - (1 - λ)R(i-1)hα(i-1) (43)
Recalling equation (6):

βr(i) = β(i) - R(i)hα(i) (6)
From equation (6) it can be seen that:

R(i-1)hα(i-1) = β(i-1) - βr(i-1) (44)
Substituting equation (44) into equation (43) gives:

ΔR(i)hα(i-1) = x(i)xT(i)hα(i-1) - (1 - λ)(β(i-1) - βr(i-1)) (45)
y(i) is defined to be a scalar value representing the filter output at time i, according to equation (46):

y(i) = xT(i)hα(i-1) (46)
Substituting equation (46) into equation (45) gives:

ΔR(i)hα(i-1) = y(i)x(i) - (1 - λ)(β(i-1) - βr(i-1)) (47)
The system of equations to be solved is that of equation (18):

R(i)Δh(i) = β0(i) (18)
R(i) is known; β0(i) is given by equation (17):

β0(i) = βr(i-1) + Δβ(i) - ΔR(i)hα(i-1) (17)
Substituting equation (47) into equation (17) gives:

β0(i) = βr(i-1) + Δβ(i) - y(i)x(i) + (1 - λ)(β(i-1) - βr(i-1))
Expanding:

β0(i) = βr(i-1) + Δβ(i) - y(i)x(i) + (1 - λ)β(i-1) - (1 - λ)βr(i-1)
Simplifying:

β0(i) = λβr(i-1) + Δβ(i) - y(i)x(i) + (1 - λ)β(i-1) (48)
Equation (42) provides:

Δβ(i) = z(i)x(i) - (1 - λ)β(i-1) (42)
Rearranging equation (42) gives:

(1 - λ)β(i-1) = z(i)x(i) - Δβ(i) (49)
Substituting equation (49) into equation (48) gives:

β0(i) = λβr(i-1) + Δβ(i) - y(i)x(i) + z(i)x(i) - Δβ(i) (50)
Rearranging:

β0(i) = λβr(i-1) + (z(i) - y(i))x(i) (51)
An error e(i) is defined as the difference between the received signal value z(i) and the output of the filter y(i):

e(i) = z(i) - y(i) (52)
Substituting equation (52) into equation (51):

β0(i) = λβr(i-1) + e(i)x(i) (53)
An estimate for h in equation (37) is determined by obtaining an approximation for Δh(i) in the system of equations of equation (18):

R(i)Δh(i) = β0(i) (18)
Processing required to form and solve a system of equations of the form shown in equation (18) is now described with reference to Figure 4.
At step S14, appropriate initialisation is carried out. Vectors hα(-1) and βr(-1) are initialised to zero and R(-1) is initialised to Π. At step S15, a new signal value, z(i), and a new regression vector, x(i), are received. The signal value and regression vector can be received in any appropriate form, for example transmitted over wired or wireless communication means, from direct user input, or read from a readable medium.
At step S16, ΔR(i) is determined by evaluating equation (41) using the values initialised at step S14 and received at step S15. At step S17, the matrix R(i) is obtained by adding ΔR(i) to R(i-1) according to the relationship set out in equation (7).
At step S18, y(i) is calculated by multiplying the most recently received regression vector x(i) with the approximate solution from the previous iteration, hα(i-1), according to equation (46).
At step S19, y(i) is used to calculate e(i), as shown in equation (52):

e(i) = z(i) - y(i) (52)
At step S20, the vector β0(i) is calculated from e(i) and βr(i-1) according to equation (53):

β0(i) = λβr(i-1) + e(i)x(i) (53)
It is clear that there are many possible methods by which β0(i) could be calculated. In one alternate embodiment, for example, β0(i) could be calculated directly according to equation (17), without the need to calculate e(i). However, calculating β0(i) in accordance with equation (53) offers a performance enhancement.
At step S21, an approximate solution to a system of linear equations defined from the determined values of R(i) and β0(i) is obtained. The system of equations is as set out in equation (18):

R(i)Δh(i) = β0(i) (18)
In a preferred embodiment, the system of linear equations is solved by using the method described above with reference to Figure 3, although alternative methods are used in some embodiments of the invention. The solution obtained is denoted Δhα(i).
At step S22, the value of βr(i) is calculated according to equation (6). It should be noted that in the preferred embodiment, the value of βr(i) is calculated as part of the method described above with reference to Figure 3 at step S9. Therefore, in the preferred embodiment, step S22 need not be performed separately.
At step S23, the values of hα(i) are calculated according to the identity shown in equation (19):

hα(i) = hα(i-1) + Δhα(i) (19)
At this point in the algorithm, hα(i) represents an approximate solution to the exponentially weighted recursive least squares adaptive filtering problem at time i. The problem has been solved by solving an alternate system of linear equations derived from the difference between the system of linear equations at time i and the system of linear equations at time i-1.
At step S24 it is determined whether any further values, z(i+1) and x(i+1), are received. If there are more values, processing returns to step S15 in order to solve the system of linear equations in the sequence at time i+1. If there are no more values then the method terminates at step S25.
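Collecting the recursions of equations (41), (46), (52), (53) and (19), one iteration of the exponentially weighted method (steps S15 to S23) can be sketched as follows. The function names are invented for the example, and a direct 2x2 solver stands in for the Figure 3 method so that the sketch is self-contained; in the described embodiment the solver of Figure 3 would be used instead.

```python
def rls_dcd_step(R, h, beta_r, x, z, lam, solve):
    """One pass of steps S15-S23 for the exponentially weighted case.

    R, h and beta_r are the state carried between time steps; `solve`
    stands in for the Figure 3 equation solver and must return
    (dh, new_beta_r) for the system R * dh = beta0 of equation (18).
    Illustrative sketch only.
    """
    n = len(h)
    # Steps S16/S17: R(i) = R(i-1) + dR(i); with dR(i) from equation (41)
    # this reduces to R(i) = lam*R(i-1) + x(i)xT(i).
    R = [[lam * R[a][b] + x[a] * x[b] for b in range(n)] for a in range(n)]
    y = sum(x[k] * h[k] for k in range(n))       # step S18, equation (46)
    e = z - y                                    # step S19, equation (52)
    beta0 = [lam * beta_r[k] + e * x[k] for k in range(n)]  # step S20, (53)
    dh, beta_r = solve(R, beta0)                 # steps S21/S22, equation (18)
    h = [h[k] + dh[k] for k in range(n)]         # step S23, equation (19)
    return R, h, beta_r


def exact_solve(R, b):
    """2x2 direct stand-in for the Figure 3 solver (zero residual)."""
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    dh = [(R[1][1] * b[0] - R[0][1] * b[1]) / det,
          (R[0][0] * b[1] - R[1][0] * b[0]) / det]
    return dh, [0.0, 0.0]
```

Driving this loop with data generated by a fixed filter causes h to converge towards that filter's coefficients, with a small bias attributable to the regularization matrix Π.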
In many applications of adaptive filtering, including echo cancellation and equalisation, the regression vector x(i) is defined by equation (54):

x(i) = [x(i), x(i-1), ..., x(i-N+1)]T (54)
In such a case the difference between x(i) and x(i-1) is such that each element of x(i-1) is shifted down the column vector, a new value x(i) is added at the top of x(i), and the least recent value x(i-N) is no longer part of the vector x(i). An example of such a case is when the regression vector is comprised of the most recently received signal value in a signal, together with the last N-1 signal values received.
The difference between a regression matrix X(i) and X(i-1) in this case would be small. This can be exploited in order to achieve an increase in efficiency in the method described above. Specifically, in the case that x(i) is added as the first row of matrix X(i), the matrix R(i) can be obtained at step S17 by copying the values of the upper left (N - 1) x (N - 1) portion of R(i-1) to the lower right portion of R(i) and generating values for the first row and column of R(i) according to:
[R(i)]0,n = λ[R(i-1)]0,n + x(i)x(i-n) (55)
where

[R(i)]0,n is the nth element of the first row of R(i).
This method requires far fewer calculations than the general method for solving the exponentially weighted adaptive filtering problem, and presents a substantial increase in efficiency.
It will however be appreciated that copying an (N - 1) x (N - 1) block is relatively computationally expensive. To avoid such computational expense, a memory address modification can be carried out. Such modification means that the block does not change position within the memory and only pointer modifications are required.
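A sketch of the shift-structure update is given below. It rests on an assumption: since the image of equation (55) is not reproduced here, the first-row recursion is reconstructed as [R(i)]0,n = λ[R(i-1)]0,n + x(i)x(i-n), which follows from equation (39) when x(i) is the tapped delay line of equation (54). The function name is invented for the example.

```python
def update_R_shifted(R_prev, x_hist, lam):
    """Shift-structure update of R(i) for the exponentially weighted case.

    x_hist[n] holds x(i-n), the tapped delay line of equation (54).
    The lower right (N-1)x(N-1) block of R(i) is copied from the upper
    left block of R(i-1); only the first row and column are computed
    fresh.  The first-row recursion is an assumed reconstruction of
    equation (55).
    """
    N = len(R_prev)
    R = [[0.0] * N for _ in range(N)]
    for m in range(1, N):                 # block copy (or a pointer move)
        for n in range(1, N):
            R[m][n] = R_prev[m - 1][n - 1]
    for n in range(N):
        R[0][n] = lam * R_prev[0][n] + x_hist[0] * x_hist[n]  # assumed (55)
        R[n][0] = R[0][n]                 # symmetry of R
    return R
```

In an in-place implementation the block copy would be replaced by the pointer (memory address) modification described above, so that the block itself never moves.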
The solution of a system of equations of the form of equation (18) so as to determine a value for the vector h is now described with reference to Figure 5. At step S30 appropriate initialisation is carried out. Such initialisation is equivalent to that carried out at step S14 of Figure 4. Similarly, at step S31 a vector x(i) and a scalar value z(i) are received.
At step S32 the matrix R is appropriately updated. Steps S33 to S40 correspond respectively to steps S18 to S25 of Figure 4 and are therefore not described in further detail here.
An alternative solution to the problem of finding a vector h representing coefficients of an adaptive filter when an increasingly large matrix X and vector z are used, is a sliding window method.
When a sliding window is used, the error generated by the vector h is given by:
Esw(i) = Σ_{k=i-Mw+1}^{i} (z(k) - hT x(k))^2 (56)

where

Esw(i) is the error in the system; and

Mw is a scalar representing the length of a sliding window.
It can be seen from equation (56) that a sliding window of size Mw restricts the calculation of error in the system to the Mw most recent time steps.
Again, an approximation for the vector h can be found by solving a system of equations of the form of equation (4):
R(i) h(i) = β(i) (4)
Here R(i) is given at each time i by equation (57a):

R(i) = Π + Σ_{k=i-Mw+1}^{i} x(k)xT(k) (57a)
R(i-1) is given at each time step i by equation (57b):

R(i-1) = Π + Σ_{k=i-Mw}^{i-1} x(k)xT(k) (57b)
Therefore, substituting equation (57b) into equation (57a) gives:

R(i) = R(i-1) + x(i)xT(i) - x(i-Mw)xT(i-Mw) (57c)
β(i) is given by equation (58a):

β(i) = Σ_{k=i-Mw+1}^{i} z(k)x(k) (58a)
β(i-1) is given by equation (58b):

β(i-1) = Σ_{k=i-Mw}^{i-1} z(k)x(k) (58b)
Therefore, substituting equation (58b) into equation (58a) gives:

β(i) = β(i-1) + z(i)x(i) - z(i-Mw)x(i-Mw) (58c)
A difference between the matrix R at time i, as defined in equation (57c), and at time i-1, as defined in equation (57b), is given by equation (59):

ΔR(i) = x(i)xT(i) - x(i-Mw)xT(i-Mw) (59)
Equation (60) gives the difference between the vector β at time i and at time i-1:

Δβ(i) = z(i)x(i) - z(i-Mw)x(i-Mw) (60)
Multiplying both sides of equation (59) by hα(i-1) gives:

ΔR(i)hα(i-1) = x(i)xT(i)hα(i-1) - x(i-Mw)xT(i-Mw)hα(i-1) (61)
A scalar value y(i) provides the filter output at time step i, and is given by equation (62):

y(i) = xT(i)hα(i-1) (62)
A scalar value yM(i) provides the filter output, at time step i, when the regression vector from time step i-Mw is input to the filter:

yM(i) = xT(i-Mw)hα(i-1) (63)
Substituting equations (62) and (63) into equation (61) gives:

ΔR(i)hα(i-1) = y(i)x(i) - yM(i)x(i-Mw) (64)
The system of equations to be solved to find Δh(i) is again that of equation (18):

R(i)Δh(i) = β0(i) (18)
Having solved the system of equations of equation (18) to find Δhα(i), hα(i) can be determined in the manner described above.
R(i) is known from equation (57c). β0(i) is given by equation (17):

β0(i) = βr(i-1) + Δβ(i) - ΔR(i)hα(i-1) (17)
Substituting equation (60) into equation (17) gives:

β0(i) = βr(i-1) + z(i)x(i) - z(i-Mw)x(i-Mw) - ΔR(i)hα(i-1) (65)
Substituting equation (64) into equation (65) gives:

β0(i) = βr(i-1) + z(i)x(i) - z(i-Mw)x(i-Mw) - y(i)x(i) + yM(i)x(i-Mw) (66)
Simplifying equation (66) gives:

β0(i) = βr(i-1) + (z(i) - y(i))x(i) - (z(i-Mw) - yM(i))x(i-Mw) (67)
From equation (52) an error signal is defined:

e(i) = z(i) - y(i) (52)
An error signal at time (i-Mw) can also be defined:

eM(i) = z(i-Mw) - yM(i) (68)
Substituting equations (52) and (68) into equation (67) gives:

β0(i) = βr(i-1) + e(i)x(i) - eM(i)x(i-Mw) (69)
Knowing R(i) and computing β0(i), the system of equations of equation (18) can be solved, as is now described with reference to Figure 6.
At step S41, Mw is initialised to an appropriate positive integer value indicating the size of the sliding window. The vectors hα(-1) and βr(-1) are initialised to zero, and R is initialised to Π; for time steps -Mw to -1, the vector x(i) is initialised to zero. At step S42, a new signal value, z(i), and a new regression vector, x(i), are received.
At step S43, ΔR(i) is determined by evaluating equation (59), using the values initialised at step S41 and received at step S42. At step S44, R(i) is calculated from ΔR(i) according to the methods described above. At step S45, y(i) is calculated by multiplying the most recently received regression vector x(i) with the approximate solution from the previous iteration, hα(i-1), according to equation (62).
At step S46, yM(i) is calculated by multiplying the regression vector received at time i-Mw by the filter approximation for time i-1, according to equation (63).
At step S47, y(i) is used to calculate e(i), according to equation (52), while at step S48 eM(i) is calculated according to equation (68).
At step S49, β0(i) is determined according to equation (69):

β0(i) = βr(i-1) + e(i)x(i) - eM(i)x(i-Mw) (69)
At step S50, a system of linear equations defined from the determined values of R(i) and β0(i), as set out in equation (18), is solved:

R(i)Δh(i) = β0(i) (18)
In a preferred embodiment, the system of linear equations is solved by using the method described above with reference to Figure 3.
At step S51, the value of βr(i) is calculated according to equation (20g) given equation (20h). At step S52, hα(i) is calculated according to equation (19).
At this point in the algorithm, hα(i) represents an approximate value for h in equation (56). The approximation hα(i) is such as to minimize the error of equation (56). hα(i) has been determined by solving an alternate system of linear equations derived from the difference between the system of linear equations at time i and the system of linear equations at time i-1.
At step S53 it is determined whether any further values, z(i+1) and x(i+1), are received. If there are more values, processing returns to step S42. A further set of linear equations is then solved. If there are no more values at step S53 then the method terminates at step S54.
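The computations of steps S45 to S49 can be collected into a single short function that produces β0(i) from equation (69). The function name and argument names are invented for the example; x_old and z_old are the regression vector and signal value from time i-Mw.

```python
def sliding_beta0(beta_r_prev, x_now, x_old, z_now, z_old, h_prev):
    """beta0(i) for the sliding-window case, per equation (69).

    x_now = x(i), x_old = x(i-Mw), h_prev = h_a(i-1).  Sketch only.
    """
    y = sum(xk * hk for xk, hk in zip(x_now, h_prev))    # step S45, (62)
    yM = sum(xk * hk for xk, hk in zip(x_old, h_prev))   # step S46, (63)
    e = z_now - y                                        # step S47, (52)
    eM = z_old - yM                                      # step S48, (68)
    return [beta_r_prev[k] + e * x_now[k] - eM * x_old[k]
            for k in range(len(h_prev))]                 # step S49, (69)
```

The result agrees with computing β0(i) = β(i) - R(i)hα(i-1) directly from the windowed sums, while requiring only two inner products per time step.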
As described above, many applications of adaptive filtering have regression vectors that update according to a shift of values and the addition of one or more new values. Recalling equation (54):

x(i) = [x(i), x(i-1), ..., x(i-N+1)]T (54)
It is therefore true that large portions of R(i) are equal to portions of R(i-1). When the vector x(i) is added as the first row in matrix X(i) and x(i) is composed according to equation (54), R(i) can be generated at step S44 by copying the top left (N - 1) x (N - 1) portion of R(i-1) to the lower right portion of R(i) and generating values for the first row and column of R(i) according to:
[R(i)]0,n = [R(i-1)]0,n + x(i)x(i-n) - x(i-Mw)x(i-Mw-n) (70)

where [R(i)]0,n is the nth element of the first row of R(i).
The autocorrelation matrix R(i) is symmetric, hence the values of the first row of the matrix are equal to the values of the first column of the matrix. These values do not, therefore, need to be calculated independently but can be inferred from the result of equation (70), such that:
[R(i)]n,0 = [R(i)]0,n (71)
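Combining the block copy with equations (70) and (71) gives the following sketch of the sliding-window update of R(i). It assumes the tapped-delay regression vector of equation (54) and, for simplicity, a zero initial matrix (Π = 0); the function name is invented for the example.

```python
def update_R_sliding(R_prev, samples, i, Mw):
    """Shift-structure update of R(i) for the sliding-window case.

    samples[k] is the scalar input x(k), taken as zero for k < 0; the
    regression vector is the tapped delay line of equation (54).  The
    lower right block of R(i) is copied from R(i-1); the first row
    follows equation (70) and the first column equation (71).
    """
    N = len(R_prev)
    s = lambda k: samples[k] if k >= 0 else 0.0
    R = [[0.0] * N for _ in range(N)]
    for m in range(1, N):                 # block copy (or a pointer move)
        for n in range(1, N):
            R[m][n] = R_prev[m - 1][n - 1]
    for n in range(N):                    # equation (70)
        R[0][n] = R_prev[0][n] + s(i) * s(i - n) - s(i - Mw) * s(i - Mw - n)
        R[n][0] = R[0][n]                 # equation (71): R is symmetric
    return R
```

As with the exponentially weighted case, an in-place implementation would realise the block copy through memory address (pointer) modification rather than by moving data.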
An implementation of an embodiment of the invention is now described with reference to Figures 7 to 10. The described implemented embodiment determines a value for a vector h in the least-squares problem described above. The invention is implemented by appropriate programming of a field programmable gate array (FPGA) 8. The FPGA 8 is in communication with a host computer 9. The host computer 9 can take any convenient form and can suitably be a conventional desktop computer.
In one implementation of the invention, the FPGA 8 was based upon a Xilinx Virtex-II Pro Development System, which is now described in general terms. The development system provides an evaluation board comprising an XC2VP30 FPGA (FF896 package, speed grade 7), which in turn features 13696 logic slices, each containing two D-type flip-flops and two four-input look-up tables (LUTs). The FPGA also has 136 block RAM components, each providing 18K bits of storage. VHDL was used to specify an algorithm in accordance with the method outlined above for the solution of a system of linear equations. Appropriate VHDL code was downloaded to the FPGA using the Xilinx ISE 8.1i software package. The implementation operated from a single 100MHz clock, with an FPGA DCM (Digital Clock Manager) and Global Clock Distribution Network being used to ensure a uniform delay between the system clock source and each logic slice.
The FPGA 8 is now described in further detail with reference to Figures 8, 9 and 10. Figure 8 shows components of the FPGA 8. The clock distribution modules are not shown. The FPGA comprises a host interface controller 10, which controls communication with the host computer 9 using the RS232 protocol. A Master State Machine 12 co-ordinates processing. A correlation module 13 is responsible for updating the matrix R(i) and for the calculation of the vector β0(i) in the manner described above. An equation solver 14 is configured to solve a system of linear equations in the manner described above. Three dual-port RAM modules 15, 16, 17 are provided. R RAM 15 is used to store the matrix R, β RAM 16 is used to store the vector β and hα RAM 17 is used to store the approximation hα of the solution vector h. Both the correlation module 13 and the equation solver 14 access each of the RAM modules 15, 16, 17. Accordingly, each RAM module 15, 16, 17 is provided with an associated multiplexer 18, 19, 20.
An iteration of processing by the FPGA 8 at time i is now described. The host interface controller 10 receives and stores values of the vector x(i) and signal z(i) provided by the host computer 9. Values of x(i) and z(i) are stored respectively in an x RAM 21 and a z register 22, both of which are internal to the host interface controller 10.
The correlation module 13 reads values of x(i) from the x RAM 21, values of signal z(i) from the z register 22, values of R(i-1) from R RAM 15 and values of βr(i-1) from β RAM 16. The correlation module 13 calculates values of R(i) and β0(i) in the manner described above, and respectively stores the calculated values in R RAM 15 and β RAM 16.
The equation solver 14 reads values of R(i) and β0(i) from the R RAM 15 and β RAM 16, and determines the values of the adaptive filter represented by Δhα(i) according to equation (18) using the method described above. The equation solver 14 provides a vector hα(i) to the hα RAM 17, being an approximate solution to the exponential recursive least squares problem of equation (37). The β RAM 16 contains values of the residual vector βr(i).
The host interface controller 10 reads the values of hα(i) from the hα RAM 17 and provides the values of hα(i) to the host computer 9. A plurality of iterations of processing by the FPGA 8 receive values of x and z and output values of hα, which represent an approximate solution to the exponential recursive least squares problem of equation (37).
The correlation module 13 is described in greater detail with reference to Figure 9. The correlation module is comprised of an x RAM reader 23, a hα RAM reader 24, an R RAM reader 25 and a β RAM reader 26, which respectively control reading of data from the x RAM 21, the hα RAM 17, the R RAM 15 and the β RAM 16. Two writers, MUY1 writer 27 and MUY2 writer 28, respectively write values to inputs of two multipliers MUY1 29 and MUY2 30. Each multiplier 29, 30 has two inputs and one output, such that a value is produced at the output by multiplying together two values written to the inputs. Two writers, R RAM writer 31 and β RAM writer 32, respectively write values of R(i) to R RAM 15 and values of β0(i) to β RAM 16. A correlation module state machine 33 co-ordinates operation of the components of the correlation module 13.
A single iteration of processing by the correlation module 13 is now described in detail. It will be understood that a single iteration of processing by the correlation module 13 occurs for each time i.
The x RAM reader 23 provides addresses to the x RAM 21 which indicate the location of the values of the vector x(i) within the x RAM 21. Addressed values are provided from the x RAM 21 to the MUY1 writer 27. The hα RAM reader 24 provides addresses to the hα RAM 17, which indicate the location of the approximate solution vector hα(i-1) within the hα RAM 17. Addressed values are provided from the hα RAM 17 to the MUY2 writer 28. The R RAM reader 25 provides addresses to R RAM 15 indicating the locations of the values of R(i-1) within R RAM 15. Addressed values are provided from the R RAM 15 to the R RAM writer 31. MUY1 writer 27 writes the received values of x(i) to both inputs of multiplier MUY1 29 in such a way that the values of the vector product x(i)xT(i) are produced at the output of the multiplier MUY1 29.
The R RAM writer 31 reads the values of x(i)xT(i) from the output of multiplier MUY1 29. The R RAM writer 31 combines R(i-1) with the values of x(i)xT(i) in accordance with equation (41), such that values for R(i) are determined, and then writes the values of R(i) to the R RAM 15.
The calculation of the values of the vector β0(i) is carried out in two steps. First, the error signal e(i) is calculated; then β0(i) is calculated using the error signal e(i), as described below.
In order to compute the error signal e(i), MUY2 writer 28 provides the values of hα(i-1) received from the hα RAM 17 and values of x(i) received from the x RAM 21 to the inputs of multiplier MUY2 30.
The β RAM writer 32 reads the values from the output of the multiplier MUY2 30 and accumulates them such that a value of y(i) is provided to the β RAM writer 32, which represents a filter output in accordance with equation (46):

y(i) = xT(i)hα(i-1) (46)
The β RAM writer 32 reads the value of z(i) from the host interface controller 10 and calculates the value of e(i) according to equation (52):
e(i) = z(i) - y(i) (52)
In order to compute the values of the vector β0(i), the x RAM reader 23 provides addresses of the values of x(i) to the x RAM 21 and addressed values are provided to the MUY2 writer 28. The β RAM reader 26 provides addresses of the values of βr(i-1) to the β RAM 16, and addressed values are provided to the MUY2 writer 28. The value of e(i) is read from the β RAM writer 32 by the MUY2 writer 28. The MUY2 writer 28 writes the values of x(i) and the value of e(i) to the multiplier MUY2 30 such that a vector e(i)x(i) is produced at the output of the multiplier MUY2 30. The β RAM writer 32 reads the values of e(i)x(i) from the output of MUY2 30 and reads the values of βr(i-1) from the β RAM 16. The β RAM writer 32 calculates β0(i) using the values of e(i)x(i) and βr(i-1), according to equation (53), and writes the values of β0(i) to the β RAM 16.
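The two-step computation just described can be sketched as follows, assuming equations (46), (52) and (53) take the conventional exponentially-weighted RLS forms shown in the docstring; the function name, the floating-point arithmetic and the default λ are illustrative only:

```python
import numpy as np

def correlation_step(h_a, beta_r_prev, x, z, lam=0.95):
    """One iteration of the correlation module.

    Assumes equations (46), (52) and (53) take the forms
      y(i)  = h_a(i-1)^T x(i)
      e(i)  = z(i) - y(i)
      b0(i) = lam * beta_r(i-1) + e(i) * x(i).
    """
    y = h_a @ x                          # filter output, eq (46)
    e = z - y                            # error signal, eq (52)
    beta0 = lam * beta_r_prev + e * x    # new right-hand side, eq (53)
    return e, beta0
```
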
Implementation of the equation solver 14 of Figure 8 will now be described in greater detail with reference to Figure 10. The equation solver 14 is comprised of a core state machine 35, which co-ordinates the operation of the components of the equation solver 14 and contains values representing d, m, p, q, Nu and Mb as described above. A RAM reader 36 controls reading of data from the hα RAM 17, the R RAM 15 and the β RAM 16. A comparator 37 compares provided values. A β updater 38 and an h updater 39 respectively control updating of values stored in the β RAM 16 and the hα RAM 17.
A single iteration of processing by the equation solver 14 is now described in detail. It will be understood that a maximum of Nu iterations of the equation solver 14 occur for each time i.
The RAM reader 36 provides addresses of the values of βr,p(i) to the β RAM 16 and addresses of the values of Rp,p(i) to the R RAM 15. The comparator 37 uses the addresses provided by the RAM reader 36 to read the values of βr,p(i) from the β RAM 16 and the values of Rp,p(i) from the R RAM 15. The core state machine 35 operates in accordance with Figure 3 described above and updates the values of d and m according to steps S3 and S4. The core state machine then performs a check in accordance with step S5 of Figure 3 to determine whether m is greater than the value Mb. If m is greater than Mb then operation of the equation solver 14 terminates for the currently processed system of equations.
In accordance with step S7, the comparator 37 compares the values of Rp,p(i) and βr,p(i) according to the inequality of equation (27):
|βr,p(i)| ≤ (d/2)Rp,p(i) (27)
The comparison result is output to the core state machine 35. The core state machine 35 uses the comparison result to determine whether to proceed to the next stage of processing. If the result of the inequality is true then the core state machine 35 repeats the operations according to steps S3 to S7; otherwise the core state machine 35 allows processing to proceed.
The RAM reader 36 provides addresses of the values of the vector R(p)(i) to the R RAM 15 and addresses of the values of the vector βr(i) to the β RAM 16. The β updater 38 uses the addresses written by the RAM reader 36 to read the values of βr(i) from the β RAM 16. The h updater 39 reads the value of hα,p(i) from the hα RAM 17, updates the value of hα,p(i) in accordance with equation (28):

hα,p(i) = hα,p(i) + sign(βr,p(i))d (28)

and writes the updated value of hα,p(i) to the hα RAM 17 in accordance with step S8.
The β updater 38 updates the values of βr(i) in accordance with equation (29):
βr(i) = βr(i) - sign(βr,p(i))d R(p)(i) (29)
Updated values of βr(i) are written to the β RAM 16 at step S9. During the updating of βr(i), the β updater 38 determines the location of the element of βr(i) having the new greatest magnitude and outputs the appropriate value of p to the core state machine 35.
In accordance with steps S10 and S11, the core state machine 35 increments the number of updates q, and compares q to the maximum number of updates Nu. If q is equal to or greater than Nu then operation of the equation solver 14 terminates for the current processing operation. Otherwise, the core state machine 35 updates the value of p in accordance with step S13, and the equation solver 14 performs a further iteration.
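The iteration performed by the equation solver 14 can be sketched in software as follows. This is an illustrative floating-point model of the dichotomous coordinate descent procedure described above (steps S3 to S13), under the assumption that equations (27) to (29) take the forms noted in the comments; the hardware operates on fixed-point values and implements the halving of d as a bit-shift:

```python
import numpy as np

def dcd_solve(R, beta0, Nu=16, Mb=16, H=1.0):
    """Leading-element dichotomous coordinate descent for R h = beta0.

    p indexes the residual element of greatest magnitude; the step d
    starts at H (assumed amplitude range, a power of two) and is halved
    whenever |beta_r[p]| <= (d/2) * R[p, p]   (assumed form of eq (27)).
    Nu bounds the number of successful updates, Mb the bit depth of d.
    """
    N = len(beta0)
    h = np.zeros(N)
    beta_r = beta0.astype(float).copy()
    d, m, q = H, 0, 0
    p = int(np.argmax(np.abs(beta_r)))
    while q < Nu:
        # steps S3-S7: halve d until an update is "successful"
        while abs(beta_r[p]) <= (d / 2.0) * R[p, p]:
            d /= 2.0
            m += 1
            if m > Mb:
                return h            # solution resolved to Mb bits
        s = np.sign(beta_r[p])
        h[p] += s * d               # step S8, assumed form of eq (28)
        beta_r -= s * d * R[:, p]   # step S9, assumed form of eq (29)
        q += 1                      # steps S10-S13
        p = int(np.argmax(np.abs(beta_r)))
    return h
```

For a well-conditioned symmetric positive-definite R, increasing Nu and Mb drives the estimate towards the exact solution, mirroring the behaviour reported for Figures 12 to 15.
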
It should be noted that at the start of processing by the equation solver 14, the hα RAM 17 contains the values of the vector hα(i-1). In a preferred embodiment, the values of the vector hα(i-1) are not removed from the hα RAM 17 before processing by the equation solver 14 begins. This is equivalent to adding the values of hα(i-1) to the values of Δhα(i) determined by the equation solver. The values in the hα RAM 17 therefore represent hα(i).
Experimental results obtained from a computer simulation of an embodiment of the invention are now presented. The mean square error (MSE) performance of the exponentially-weighted RLS adaptive filtering algorithm with regression vectors possessing the shift structure described above is compared against a known implementation of an exponentially-weighted RLS algorithm as described in, for example, S. Haykin: "Adaptive Filter Theory", 2nd Edition, Prentice Hall, 1991. The MSE is calculated as
MSE(i) = ||h(i) - hα(i)||² / ||h(i)||²
where h(i) is a vector representing the N-length true impulse response to be approximated; and hα(i) is a vector representing the approximate solution of an RLS adaptive filtering algorithm provided by an embodiment of the invention.
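Assuming the MSE above is the normalized misalignment ||h(i) - hα(i)||²/||h(i)||² (a common choice for adaptive-filter experiments; the exact normalization used in the figures is not reproduced here), it can be computed as:

```python
import numpy as np

def normalized_mse(h_true, h_approx):
    """Normalized misalignment between the true impulse response and
    the approximate solution; assumes the definition
    ||h(i) - h_a(i)||^2 / ||h(i)||^2."""
    diff = h_true - h_approx
    return float(diff @ diff) / float(h_true @ h_true)
```
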
All N values of the impulse response filter h(i) are modelled as independent Gaussian random numbers with zero mean and unit variance. The received signal is modelled as
z(i) = hᵀ(i)x(i) + n(i)
where n(i) is additive zero-mean Gaussian random noise with variance σ².
The vector x(i) contains autoregressive correlated random numbers given by
x(i) = a x(i-1) + w(i)
where a is a value representing an autoregressive factor; and w(i) are uncorrelated zero-mean Gaussian random numbers of unit variance. The results shown in Figures 11 to 18 are now described. In Figures 11 to 18, performance of an algorithm in accordance with the method outlined above is labelled DCD-RLS and shown in broken lines, while a known recursive least squares algorithm is labelled RLS and shown in solid lines.
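The simulation model above can be sketched as follows, assuming the regression vector x(i) holds the N most recent samples of a scalar AR(1) input sequence (the shift structure described earlier); the function name and the seeding are illustrative, while the parameter names follow the text:

```python
import numpy as np

def generate_test_signals(N=16, num_samples=1000, a=0.9, sigma2=0.01, seed=0):
    """Generate the simulation signals described above.

    h: N i.i.d. zero-mean unit-variance Gaussian taps;
    x: AR(1) input sequence x[k] = a*x[k-1] + w[k], w ~ N(0, 1);
    z: z[k] = h^T x_vec[k] + n[k], with n ~ N(0, sigma2), where
       x_vec[k] holds the N most recent input samples (newest first).
    """
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(N)
    w = rng.standard_normal(num_samples)
    x = np.empty(num_samples)
    x[0] = w[0]
    for k in range(1, num_samples):
        x[k] = a * x[k - 1] + w[k]      # autoregressive input
    n = np.sqrt(sigma2) * rng.standard_normal(num_samples)
    z = np.empty(num_samples)
    for k in range(num_samples):
        x_vec = x[max(0, k - N + 1):k + 1][::-1]   # newest sample first
        z[k] = h[:len(x_vec)] @ x_vec + n[k]       # received signal
    return h, x, z
```
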
Figure 11 shows the MSE performance when characteristics of the signal z(i) are defined by parameters: a = 0.9; σ² = 0.01. The filter h(i) is changed at time i = 500 by generating a new set of N values of the impulse response filter with the same statistical characteristics. Parameters to the described algorithm are: Nu = 10000; N = 16; λ = 0.95; Mb = 16. Figure 11 shows that the algorithms have similar performance.
In the experiments used to generate the results shown in Figures 12 to 15, the filter h(i) changes at time i = 1000 by generating a new set of N independent random variables with the same statistical characteristics.
Figure 12 shows the mean squared error performance of the algorithm described above compared to the known implementation of the RLS algorithm, where the signal z(i) has characteristics: a = 0.9; σ² = 0.01; and for algorithm parameters: Nu = 8, 16, 64; N = 16; λ = 0.95; Mb = 16. Figure 12 shows that, as the maximum number of updates Nu increases, the algorithm described above approaches the performance of the known RLS algorithm.
Figure 13 shows the case where z(i) has signal characteristics: a = 0; σ² = 0.001; and the parameters of the described method were: Nu = 8, 16, 64; N = 64; λ = 0.95. In Figure 13, it is shown that when Nu = 8 the performance of the described method is close to that of the known RLS algorithm. Figure 14 shows the mean squared error performance of the algorithm described above where the signal z(i) has characteristics: a = 0.9; σ² = 0.001; and algorithm parameters were: Nu = 8, 16, 64; N = 64; λ = 0.95. It is shown that, with the same Nu, the convergence of the inventive method is slower than in the case of a white input process (a = 0), as shown in Figure 13. However, with small Nu, the algorithm has a smaller variation in mean squared error than the known implementation of the RLS algorithm.
Figure 15 shows the mean squared error performance of the algorithm described above when the signal z(i) has characteristics: a = 0.9; σ² = 0.001; and parameters to the algorithm were: Nu = 8, 16, 64; N = 64; λ = 0.975. It is shown that with λ close to 1, the number of updates Nu can be reduced.
Figure 16 shows the tracking performance of the algorithm described above in a time-varying environment. The pth element hp(i) of the true impulse response h(i) varies in time according to
hp(i) = hp(0)cos(2πFi + φp)

where φp are scalar values representing independent random numbers uniformly distributed on [0, 2π);
hp(0) are scalar values representing independent Gaussian random amplitudes of unit variance; and
F is a scalar value representing the variation rate. Signal z(i) has characteristics: F = 10⁻⁴; a = 0.9; σ² = 0.001; and algorithm parameters were: Nu = 2, 4, 16; N = 64; λ = 0.975. It is shown that when Nu increases, both adaptive algorithms have similar tracking performance.
Figure 17 shows the mean squared error performance of the algorithm described above in an environment typical of an acoustic echo cancellation scenario. The length of the filter h(i) is now increased to N = 512 and the forgetting factor λ = 0.99. The filter h(i) changes at time i = 4000 by generating N new independent random variables with the same statistical characteristics. It is shown that when the signal z(i) has characteristics: a = 0.9; σ² = 0.01; and algorithm parameters were: Nu = 16, 64, 256; N = 512; λ = 0.99; the performance of the proposed RLS algorithm was similar to the performance of the known implementation of the RLS algorithm.
The results of an investigation of the performance of a fixed-point implementation of the algorithm described above, in a scenario with a very low noise level, are now presented. Figure 18 shows the mean squared error performance of a fixed-point implementation of the algorithm described above against a floating-point implementation of the same algorithm and an implementation of the known recursive least squares algorithm, also implemented in floating point. Floating point is considered as having infinite precision. Nb bits are used for the representation of all variables in the algorithm, including the input data z(i) and x(i), the elements of the matrix R(i), the vector βr, and others.
Two cases were considered for the investigation: Nb = 16 and Nb = 24. Signal z(i) has characteristics: a = 0; σ² = 0.00001; and parameters to the described algorithm were: N = 64; λ = 0.984; Nu = 2. It can be seen that the performance of both the fixed-point and floating-point implementations of the algorithm depended on the parameter Mb, which defines the number of bits for representation of the solution vector hα.
A single iteration of the described algorithm is approximately equivalent to 2N additions. Note that a multiplier of two Q-bit fixed-point numbers, when implemented in hardware, requires O(Q) chip area compared to the O(1) chip area for implementing a Q-bit adder. If Q = 16 bits, the chip area of one multiplier is approximately equivalent to that of 16 adders. Thus, for Q = 16, 8 iterations of the described algorithm (16N additions) are approximately equivalent to N multiplications, which is small with respect to the 3N multiplications required for the other steps in the adaptive algorithm as described above. Therefore, the solution of the normal equations according to the method of Figure 3, described above, has a complexity significantly lower than the whole complexity of the adaptive algorithm.
As Mb further increases (not shown here), the steady-state mean squared error performance approaches that of the known recursive least squares algorithm. For the fixed-point implementation of the described algorithm, for a fixed Mb, steady-state mean squared error performance depends on Nb. In this scenario, for Mb = 16, the parameter Nb = 16 limits algorithm performance, while Nb = 24 provides enough accuracy to achieve the performance of the floating-point implementation.
A long-duration investigation of a fixed-point implementation of the algorithm described above, implemented on an FPGA platform with parameters Nb = 16 and Mb = 16, has also been performed; the experiment included 10⁷ iterations i. No divergence was observed during this experiment, showing that the described algorithm is stable.

It has been explained above that the methods described above for determining filter coefficients have applications in echo cancellation. An echo cancellation apparatus in which the methods described above can be used is now described.
An echo cancellation method may be used, for example, in a handsfree telephone where a first speech signal is received and output through a loudspeaker. A second speech signal is input to a microphone together with part of the first speech signal output through the loudspeaker. In this case, the response of the system is defined by the manner in which an echo of the first signal is generated and included in the signal input to the microphone. Here, the response is an acoustic response determined by, for example, the geometry of the environment in which the microphone and loudspeaker are situated.
Referring to Figure 19, there is illustrated an echo cancellation apparatus. A signal 51 travels along an input line and is output, through a digital to analog converter (not shown), to a loudspeaker 52. A further signal, such as a human voice (not shown), is passed to an input of a microphone 53. The microphone signal is converted to a digital signal by means of an analog to digital converter (not shown). It is desirable that an output signal 54 of the microphone 53 contains only the human voice signal, and none of the signal output by the loudspeaker 52. However, in practice, some of the signal output by the loudspeaker 52 will be received by the microphone 53, such that the output signal 54 comprises a combination of the human voice signal and part of the loudspeaker output signal (referred to as "the echo signal"). It is desirable to remove the echo signal present in the output signal 54 of the microphone 53.
As shown in Figure 19, the echo cancellation apparatus comprises a filter 55, which takes as input the signal 51 and which is configured to provide an estimate 56 of the echo signal. This estimate 56 is subtracted from the microphone output signal 54 by a subtractor 57. Therefore, if the echo is accurately estimated, an output 58 of the subtractor will be equal to the human voice signal input to the microphone 53. In order to accurately model the echo signal, the filter 55 must be provided with a set of filter coefficients which can be applied to the signal 51. The set of filter coefficients should model the acoustic response of the environment in which the loudspeaker 52 and microphone 53 are situated. The echo cancellation apparatus comprises a filter coefficient setting circuit 59, which produces suitable filter coefficients. The filter coefficient setting circuit 59 takes as inputs the signal 51 which is input to the loudspeaker 52 and the signal 54 which is output from the microphone 53.
The echo cancellation apparatus of Figure 19 processes discrete samples in turn. The apparatus operates at a predetermined sampling frequency (Fs), which determines how often samples are taken. At each sample time, values of the signal 51 input to the loudspeaker 52 (sf), the signal 54 output from the microphone 53 (sm), the output 56 of the filter 55 (y), and the output 58 of the subtractor 57 (Δ) are considered.
The purpose of the apparatus of Figure 19 is to remove the echo signal from the signal 54 output from the microphone 53. There will be a time delay between a signal being output by the loudspeaker 52 and that signal being input to the microphone 53 to form the echo signal. Furthermore, a signal output from the loudspeaker 52 at a single instant will not be input to the microphone 53 at a single instant; instead, the signal output by the loudspeaker 52 will be input to the microphone 53 at a range of different times. The delay between a signal being output by the loudspeaker 52 and substantially all of that signal which is to form the echo signal being received by the microphone 53 is estimated to correspond to the time taken to obtain N samples. N will typically equate to a relatively large number of samples.
Operation of the filter 55 of Figure 19 is described by equation (76):
y(t) = Σ_{τ=0}^{N-1} hτ sf(t-τ) (76)
where y(t) is the output 56 of the filter 55 at sample time t; hτ, for τ = 0, ..., N-1, is the set of N filter coefficients determined by the filter coefficient setting circuit 59 in the manner described below; and sf(t-τ) is the signal 51 input to the loudspeaker 52 at sample time (t-τ).
The filtering described by equation (76) can conveniently be carried out by a Finite Impulse Response (FIR) filter, the implementation of which will be readily understood by one of ordinary skill in the art.
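A direct-form model of equation (76) can be sketched as follows, assuming samples before t = 0 are taken as zero; this is an illustrative reference implementation rather than the FIR hardware structure a practical device would use:

```python
def fir_output(h, s_f, t):
    """Filter output of equation (76):
    y(t) = sum over tau = 0..N-1 of h[tau] * s_f[t - tau],
    with samples before t = 0 treated as zero."""
    acc = 0.0
    for tau in range(len(h)):
        if t - tau >= 0:
            acc += h[tau] * s_f[t - tau]
    return acc
```
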
The signal 58 output by the subtractor 57 is described by equation (77):
Δ(t) = sm(t) - y(t) (77)
where sm(t) is the signal 54 output from the microphone 53 at sample time t; y(t) is the signal 56 output from the filter 55 at sample time t; and Δ(t) is the signal 58 output from the subtractor 57 at sample time t.
If the filter 55 accurately models the echo signal included in the signal 54 output by the microphone 53, Δ(t) should be close to the human voice signal input to the microphone 53 at time t.
A value Ms represents the number of samples to be processed in a particular step, and coefficients of the filter 55 are updated after every Ms samples as described below. A counter variable is incremented with each sample to determine when Ms samples have been processed.
Calculation of coefficients for the filter 55 using the filter coefficient setting circuit 59 can conveniently be carried out using the methods described above, where R(i) is an autocorrelation matrix of the signal 51 provided to the loudspeaker 52 and β(i) is a cross-correlation vector of the signal 51 provided to the loudspeaker 52 and the signal 54 received from the microphone 53. A system of equations:

R(i)h(i) = β(i)

can then be solved as described above.
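The quantities accumulated by a filter coefficient setting circuit of this kind can be sketched as follows, assuming the exponentially-weighted correlation forms used earlier in the document; in the described embodiment the resulting system would be solved iteratively by the equation solver rather than by direct inversion, and the function name and λ are illustrative:

```python
import numpy as np

def normal_equations(s_f, s_m, N, lam=0.99):
    """Accumulate the autocorrelation matrix R and cross-correlation
    vector beta for the echo canceller's normal equations.

    Assumes the exponentially-weighted forms
      R    = sum_t lam^(T-t) x(t) x(t)^T
      beta = sum_t lam^(T-t) x(t) s_m(t),
    where x(t) holds the N most recent loudspeaker samples.
    """
    R = np.zeros((N, N))
    beta = np.zeros(N)
    for t in range(len(s_f)):
        x = np.zeros(N)
        k = min(N, t + 1)
        x[:k] = s_f[t::-1][:k]          # newest sample first
        R = lam * R + np.outer(x, x)    # weighted autocorrelation
        beta = lam * beta + x * s_m[t]  # weighted cross-correlation
    return R, beta
```

Solving R h = β, for example with the iterative method of Figure 3, then yields the filter coefficients.
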
It will be appreciated that a filter coefficient setting circuit can be provided to generate coefficients for filters in devices other than the echo cancellation apparatus as described with reference to Figure 19.
Although specific embodiments of the invention have been described above, it will be appreciated that various modifications can be made to the described embodiments without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims

1. A method for obtaining an estimate of a solution to a first system of linear equations, the method comprising: obtaining a second system of linear equations; obtaining an estimate of a solution to said second system of linear equations; determining differences between said first and second systems of linear equations; and determining an estimate of a solution to said first system of linear equations based upon said differences and said estimate of said solution to said second system of linear equations.
2. A method according to claim 1, further comprising: determining a system of equations based upon said differences; and determining an estimate of a solution to said system of equations based upon said differences; wherein determining said estimate of said solution to said first system of linear equations is based upon said estimate of said solution to said system of equations based upon said differences.
3. A method according to claim 2, wherein determining said estimate of said solution to said first system of linear equation comprises combining said estimate of said solution to said second system of linear equations and said estimate of said solution to said system of equations based upon said differences.
4. A method according to claim 3, wherein said combining comprises addition.
5. A method according to any preceding claim, wherein said first system of linear equations relates to a first time point, and said second system of linear equations relates to a second, earlier time point.
6. A method according to any preceding claim, wherein each of said first and second systems of linear equations has a form:
Rh = β
where:
R is a coefficient matrix of the respective system of equations; h is a vector of the unknown variables of the respective system of equations; and β is a vector containing the values of the right hand side of each equation of the respective system of equations.
7. A method according to claim 6, wherein said second system of linear equations relates to a time point i-1 and has a form
R(i-1)h(i-1) = β(i-1)
8. A method according to claim 7, wherein said first system of linear equations relates to a time point i and has a form
R(i)h(i) = β(i)
9. A method according to claim 7 as dependent upon claim 2, wherein said system of equations based upon said differences has a form:

R(i)Δh(i) = β0(i)

where:

Δh(i) = h(i) - hα(i-1);

hα(i-1) is an estimate of the solution of the system of equations relating to time point i-1; and

β0(i) = β(i) - R(i)hα(i-1).
10. A method according to claim 9, wherein an estimate (hα(i)) of the solution of said first system of linear equations relating to time point i is given by:

hα(i) = hα(i-1) + Δhα(i)
11. A method of estimating filter coefficients for an adaptive filter, the method comprising: generating first and second systems of linear equations; and obtaining an estimate of a solution to said first system of linear equations using a method according to any one of claims 1 to 10; wherein said estimate of said solution to said first system of linear equations is an estimate of said filter coefficients.
12. A method according to claim 11, wherein said first and second sets of linear equations represent a least squares problem arranged to provide an estimate of said filter coefficients.
13. A method according to claim 12, wherein said first and second sets of linear equations are a representation of a recursive least squares problem.
14. A method according to claim 13, wherein said first and second sets of linear equations are a representation of a recursive least squares problem incorporating a forgetting factor.
15. A method according to claim 13, wherein said first and second sets of linear equations are a representation of a sliding window recursive least squares problem.
16. A method according to any one of claims 11 to 15, wherein said adaptive filter filters audio data.
17. A method according to claim 16, wherein said adaptive filter is part of an echo cancellation apparatus.
18. A method according to claim 17, wherein said first system of linear equations has a form:
R(i)h(i) = β(i)
where: R(i) is an autocorrelation matrix of a signal provided to an audio output device; β(i) is a cross-correlation vector of the signal provided to the audio output device and a signal received from a microphone; and h(i) is a vector of filter coefficients for the adaptive filter.
19. A computer program for carrying out a method according to any one of claims 1 to 18.
20. A carrier medium carrying a computer program according to claim 19.
21. A computer apparatus for determining an estimate of a solution to a system of linear equations, the apparatus comprising: a memory storing processor readable instructions; and a processor configured to read and execute instructions stored in said program memory; wherein said processor readable instructions comprise instructions controlling said processor to carry out a method according to any one of claims 1 to 18.
22. A field programmable gate array configured to carry out processing according to any one of claims 1 to 18.
23. An integrated circuit configured to carry out processing according to any one of claims 1 to 18.
24. A method for generating an estimate of a solution of N linear equations in N unknown variables, the method comprising: initialising a vector indicating a first estimate of said solution; recursively modifying an update value until a predetermined criterion is satisfied; updating said vector based upon said update value.
25. A method according to claim 24, wherein said system of linear equations has a form:
Rh = β where:
R is a coefficient matrix of the respective system of equations; h is a vector of the unknown variables of the system of equations; and β is a vector containing the value of the right hand side of each equation of the system of equations.
26. A method according to claim 24 or 25, further comprising initialising a residual vector βr= β.
27. A method according to claim 26, further comprising: determining an element of said residual vector having the greatest magnitude; wherein said predetermined criterion is based upon said element having the greatest magnitude.
28. A method according to claim 27, wherein said predetermined criterion is based upon a relationship between said element having the greatest magnitude and a value of an element of the matrix R.
29. A method according to claim 28, further comprising determining an index of said element having the greatest magnitude, wherein said element of the matrix R is determined based upon said index.
30. A method according to claim 29, wherein said element having the greatest magnitude is the pth element βr,p of the residual vector and said element of the matrix R is the (p,p)th element Rp,p of the matrix R.
31. A method according to claim 30, wherein said predetermined criterion is |βr,p| < (d/2)Rp,p for an update value d.
32. A method according to claim 26 or any claim dependent thereon, further comprising updating said residual vector based upon said update value.
33. A method according to claim 26 or any claim dependent thereon, further comprising carrying out a plurality of iterations of processing, each iteration of processing being based upon a value of the residual vector having the greatest magnitude following a preceding iteration.
34. A method according to any one of claims 24 to 33, further comprising initialising said update value to a power of two.
35. A method according to any one of claims 24 to 34, wherein modifying said update value comprises dividing said update value by a predetermined value.
36. A method according to claim 35, wherein said predetermined value is two.
37. A method according to claim 36, wherein said division is implemented as a bit-shift operation.
38. A method according to any one of claims 24 to 34, wherein modifying said update value ensures that said update value remains within predetermined limits.
39. A method according to any one of claims 1 to 18, wherein determining an estimate of said second system of linear equations comprises carrying out a method according to any one of claims 24 to 38.
40. A method according to claim 2 or any claim dependent upon claim 2, wherein determining an estimate of a solution to said system of equations based upon said differences comprises carrying out a method according to any one of claims 24 to 38.
41. A computer program for carrying out a method according to any one of claims 24 to 40.
42. A carrier medium carrying a computer program according to claim 41.
43. A computer apparatus for determining an estimate of a solution to a system of linear equations, the apparatus comprising: a memory storing processor readable instructions; and a processor configured to read and execute instructions stored in said program memory; wherein said processor readable instructions comprise instructions controlling said processor to carry out a method according to any one of claims 24 to 40.
44. A field programmable gate array configured to carry out processing according to any one of claims 24 to 40.
45. An integrated circuit configured to carry out processing according to any one of claims 24 to 40.
PCT/GB2008/003413 2007-10-16 2008-10-09 Equation solving WO2009050434A2 (en)

US20230137830A1 (en) Wideband adaptation of echo path changes in an acoustic echo canceller
US7822193B2 (en) Estimation method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08806553

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08806553

Country of ref document: EP

Kind code of ref document: A2