WO2011114174A2 - Signal processing system and method - Google Patents

Signal processing system and method Download PDF

Info

Publication number
WO2011114174A2
WO2011114174A2 (PCT/GB2011/050560)
Authority
WO
WIPO (PCT)
Prior art keywords
polynomial
inexact
polynomials
data
Prior art date
Application number
PCT/GB2011/050560
Other languages
French (fr)
Other versions
WO2011114174A3 (en)
Inventor
Joab Winkler
Original Assignee
The University Of Sheffield
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The University Of Sheffield filed Critical The University Of Sheffield
Publication of WO2011114174A2 publication Critical patent/WO2011114174A2/en
Publication of WO2011114174A3 publication Critical patent/WO2011114174A3/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing

Definitions

  • Embodiments of the present invention relate to signal processing systems and methods and to a floating point environment.
  • Signal processing is performed with a view to recovering an intentional, or exact, signal from a corrupted, or inexact, signal.
  • digital signal processing techniques that are directed to performing various tasks such as, for example, signal synchronization, noise reduction, image processing, digital remastering of data representing video and/or audio, pattern recognition and pattern matching.
  • the accuracy or efficacy of the foregoing depends, at least in part, on the degree of corruption of the signal and the ability of the techniques to perform well in the face of that corruption.
  • modern computers use processors that implement floating point arithmetic in accordance with the IEEE Standard for Floating-Point Arithmetic.
  • computers and/or computer languages allow or require that some or all arithmetic be carried out using IEEE 754 formats and operations.
  • IEEE 754-2008 which was published in August 2008.
  • the current standard comprises nearly all of the content of the original IEEE 754-1985 standard and the IEEE Standard for Radix-Independent Floating-Point Arithmetic, that is, IEEE 854-1987.
  • the precision with which processors and computers can perform floating-point arithmetic is limited. The limitation of precision leads to rounding errors, notwithstanding various rounding strategies that are used in an attempt to mitigate the adverse effects of such rounding. Rounding errors are another form of corruption or noise that can corrupt a given signal or data. This noise is particularly acute when performing integer arithmetic within a floating point environment.
  • Figure 1 shows a digital signal processing system;
  • Figure 2 illustrates erroneous roots of (y - 1)^100;
  • Figure 3 depicts erroneous roots of (y - 1)^m;
  • Figure 4 shows a flow chart for signal processing according to an embodiment
  • Figure 5 illustrates an embodiment of the present invention to processing a transmitted signal.
  • the system 100 comprises an input or interface 102 for receiving a signal, or, more accurately, data 104 representing or associated with such a signal, to be processed.
  • the data 104 is preferably inexact data.
  • Inexact data is data that is not as intended such as, for example, data that has been corrupted in some manner or that bears noise.
  • the inexact data 104 might represent a blurred, or otherwise degraded, representation of an original image 106 that has been corrupted by noise 108.
  • the signal can represent any signal or data and is preferably digital in form such as, for example, a one-dimensional signal or data set, two-dimensional signals or data sets representing, for example, images or speech, three-dimensional signals or data sets representing at least one of solids, volumes and other bodies or voxels (volumetric pixels), or an n-dimensional signal or data set.
  • the input 102 can be realised in the form of storage that is able to store all of the inexact data to be processed or one or more selected data of the inexact data.
  • Such an input 102 can be realised in the form of memory either directly or in the form of memory allocation associated with a parameter call of a function of the digital signal processor 110.
  • the memory has an interface via which the data 104 can be stored and accessed.
  • the system 100 comprises a digital signal processor 110 for performing the processing techniques such as digital signal processing (DSP) techniques for cleaning or filtering the inexact data to produce, via an output or interface 112, exact data 114 in the form of an image that has had the noise 108 removed or at least reduced or in relation to which the adverse effect of the noise has been at least reduced and, preferably, substantially removed.
  • the system described above might be applied in the context of image processing, such as, for example, blind image deconvolution, which is the process of recovering or estimating a true or more accurate image from one or more than one degraded image.
  • This process is performed without knowledge of the blurring functions that degraded the image(s).
  • Recovery of such a true or less degraded image from such one or more than one degraded image can be achieved by determining or at least approximating, in the z-domain, the greatest common divisor (GCD) of two polynomials corresponding to the z-transform of the blurred images; the approximate GCD represents the true image or less degraded image.
  • the degraded or distorted image shown in figure 1 is the result of the original image plus noise.
  • the inexact image 104 will represent the convolution of the original image 106 with a transfer function.
  • the transfer function can be considered to be an unknown blurring function or any other function that causes the inexact data to differ from the exact data.
  • a first instance of inexact data or noisy image 104 is represented by a first function f_1(x, y)
  • the original image is represented by o(x, y)
  • a first transfer or first blurring or distortion function is represented by d_1(x, y)
  • a second transfer function or second blurring function is represented by d_2(x, y)
  • a second instance of inexact data or noisy image is represented by a second function f_2(x, y)
  • F_1(z), F_2(z), D_1(z), D_2(z), and O(z) are the two-dimensional z-transforms of f_1(x, y), f_2(x, y), d_1(x, y), d_2(x, y), and o(x, y)
  • the digital signal processor 110 is adapted or arranged to make the above determinations of the greatest common divisor of input data or an input signal.
  • determining the greatest common divisor finds significant application in geometric modelling and computer aided geometric design because it can be used to determine if two curves or surfaces associated with one or more bodies intersect.
  • any signal can be represented as a polynomial. The more complex the signal, the higher the order of the polynomial needed to represent that signal. Given a set of data samples or measurements representing a signal, such as a received radio signal or an intersection between two surfaces of two or more solid bodies, either physically or geometrically modelled, a polynomial can be determined that fits such a set.
  • Embodiments of the present invention assume that such a fitted polynomial is an inexact polynomial, that is, a polynomial having coefficients that have been corrupted by noise.
  • Embodiments of the invention find the exact, or at least less inexact, roots of such an inexact polynomial, which removes the noise, or at least reduces it substantially, that is, in essence, the polynomial is filtered to recover the original signal.
  • the connections between computing the GCD of two polynomials and computing the roots of one of the polynomials are: (a) the computation of the GCD of two polynomials does not require the computation of the roots of the polynomials and (b) the computation of the roots of a polynomial f(y) requires the computation of the GCD of f(y) and its derivative.
  • the digital signal processor can be any type of digital signal processor such as, for example, an NXP Semiconductor DSP based on TriMedia VLIW technology, optimised for audio and video processing, Texas Instruments' C6000 series DSPs or Open Multimedia Application Platform, Freescale's multi-core DSP family, the MSC81xx, or ARM's Cortex-A8, or any other digital signal processor.
  • embodiments of the present invention are equally applicable to processing other digital signals such as, for example, audio and/or speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for communications, control of systems, biomedical signal processing, seismic data processing, 3D modelling, ray tracing, computer aided design and/or manufacture, etc.
  • Embodiments of the present invention will hereinafter refer to signal, data and polynomial interchangeably and synonymously.
  • processing an inexact polynomial with a view to at least reducing, or removing, the associated noise is, in substance, filtering the signal of interest, and calculating the roots of such an inexact polynomial or signal, or calculating the coefficients of such a polynomial, is also filtering or a form of digital signal processing. Knowing the roots allows the original signal, or at least a better approximation thereto, to be recovered.
  • δa_i is the noise added to the i-th coefficient a_i.
  • Known techniques for calculating roots of such polynomials include, for example, Bairstow's method, Graeffe's root-squaring method, Jenkins-Traub algorithm, Laguerre's method, Muller's method and Newton's method.
  • S. Goedecker, Remark on algorithms to find roots of polynomials, SIAM J. Sci. Stat. Comput., 15:1059-1063, 1994, performed comparative numerical tests on the Jenkins-Traub algorithm, a modified version of Laguerre's algorithm and an application of the QR decomposition to the companion matrix of a polynomial.
  • Section 2: A signal processing technique or simple polynomial root finder according to an embodiment of the invention will now be described.
  • a multiple root is ill- conditioned with respect to random perturbations because they cause it to break up into a cluster of simple roots, but that it is stable with respect to perturbations that maintain its multiplicity.
  • a simple root is in general better conditioned than a multiple root, and it is, therefore, instructive to consider a polynomial root finder that reduces to the determination of the roots of several polynomials, each of which contains only simple roots.
  • the multiplicities of the roots are calculated by a sequence of greatest common divisor (GCD) computations.
  • w_1(y) is the product of all linear factors of f(y)
  • w_2(y) is the product of all quadratic factors of f(y)
  • w_i(y) is the product of all factors of degree i of f(y). If f(y) does not contain a factor of degree k, then w_k(y) is set equal to a constant, which can be assumed to be unity. It follows that, to within a constant multiplier, f(y) = w_1(y) w_2(y)^2 w_3(y)^3 ... w_k(y)^k ...
  • f(y) + δf(y) and g(y) + δg(y) may be coprime. Even if f(y) and g(y) are specified exactly and have a non-constant GCD, rounding errors may be sufficient to imply that they are coprime when the GCD is computed in a floating point environment.
  • the determination of the degree of the GCD of two polynomials reduces to the determination of the rank of a resultant matrix, but the rank of a matrix is not defined in a floating point environment.
  • the rank loss of a resultant matrix is equal to the degree of their GCD, and a minor perturbation in one or both of the polynomials is sufficient to cause their resultant matrix to have full rank, which suggests that the polynomials are coprime.
  • the determination of the rank of a noisy matrix is a challenging problem that arises in many applications. Such a minor perturbation can arise due to at least one of external noise sources such as a radio or propagation environment or internal noise sources such as a floating point environment as mentioned above.
  • the data in many practical examples is inexact, that is, having been corrupted, for example, by noise or otherwise distorted, and thus the polynomials are only specified within a tolerance.
  • the given inexact polynomials are, with high probability, coprime, and it is therefore desirable to perturb each polynomial slightly such that they have a non-constant GCD.
  • This GCD is known as an approximate greatest common divisor (AGCD) of the given inexact polynomials because inexact polynomials are near polynomials that have a non-constant GCD. It is therefore desirable to compute the smallest perturbations such that the perturbed forms of the inexact polynomials have a non-constant GCD.
  • the amplitude of the noise may or may not be known in practical examples, and even if it is known, it may only be known approximately as opposed to being known exactly. It is desirable that a polynomial root finder does not require an estimate of the noise level, which reflects the reality of receiving a transmitted signal over a noisy channel, and that all parameters and thresholds be calculated from the noisy received signal or noisy/inexact data, that is, the polynomials' coefficients.
  • Figure 4 shows a flowchart 400 of the basic processing steps of signal processing according to embodiments of the present invention.
  • At step 402, the data representing the polynomial form of the signal to be processed is received.
  • step 402 might comprise the step of receiving or generating data representing the signal to be processed and fitting a polynomial to that data using, for example, curve-fitting or spline-fitting.
  • the coefficients of the received polynomial, and its derivative, are pre-processed at step 404.
  • the preprocessing is directed to normalising the polynomial coefficients and to reducing, preferably minimising, the difference in magnitude of the maximum and minimum polynomial coefficients.
  • Step 406 determines the degree of the approximate greatest common divisor of the two polynomials.
  • Step 408 determines the approximate greatest common divisor of the two polynomials. It can be appreciated that embodiments of the present invention can use two methods of determining the approximate greatest common divisor; namely, computing the AGCD using the Sylvester matrix and computing the AGCD using approximate polynomial factorisation.
  • Polynomial division is performed at step 410 and the roots are refined at step 412; the complete pipeline is outlined in the sketch below.
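  • The flowchart can be summarised as the following MATLAB-style outline; the function names here are hypothetical stand-ins for the operations described in the remainder of this section, not the names used in the appendices:
        a = receive_coefficients();           % step 402: coefficients of the inexact polynomial
        [f, g, alpha] = preprocess(a);        % step 404: normalise and scale f and its derivative
        d = agcd_degree(f, g);                % step 406: degree of the AGCD
        c = agcd(f, g, d);                    % step 408: coefficients of the AGCD
        [q, w] = divide_polys(f, g, c);       % step 410: polynomial divisions
        r = refine_roots(w);                  % step 412: non-linear least squares refinement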
  • two polynomials are coprime if and only if the determinant of their resultant matrix is equal to zero, and if they are not coprime, the degree and coefficients of their GCD can be calculated from their resultant matrix.
  • the rank loss of such a matrix is equal to the degree of the GCD of the polynomials, and the coefficients of the GCD are obtained by reducing the matrix to upper triangular form.
  • the Sylvester resultant matrix and its subresultant matrices are considered, which leads to their use in calculating the degree of a common divisor of two polynomials.
  • the polynomials f(y) and g(y) possess common divisors of degrees 1, ..., d, because the degree of their GCD is d, but they do not possess a common divisor of degree d + 1.
  • Theorem 3.1: A necessary and sufficient condition for the polynomials f(y) and g(y), which are defined in (3.1), to have a common divisor of degree k ≥ 1 is that the rank of S_k(f, g) be less than (m + n - 2k + 2), or equivalently, that the dimension of its null space be greater than or equal to one.
  • Equations (3.4), (3.16) and (3.17) can be combined into one equation
  • embodiments of the present invention can use at least one, or both, of the two methods noted above, namely computing the AGCD using the Sylvester matrix and computing the AGCD using approximate polynomial factorisation.
  • an AGCD has fundamentally different properties compared to a GCD.
  • a GCD is unique up to an arbitrary scalar multiplier, but this uniqueness property does not extend to an AGCD because different definitions of an AGCD yield different polynomials, each of which is valid.
  • Embodiments of the present invention assume that the given signals or polynomials are inexact such that their Sylvester matrix has full rank. It is therefore necessary to perturb this matrix such that its perturbed form is singular, and the criterion for the calculation of the degree of an AGCD must be defined.
  • Embodiments of the present invention for realising step 404 will now be described, it having been assumed that the two polynomials or signals in step 402 are f(y) and f^(1)(y), where f(y) is an inexact polynomial representing a signal for which an approximate greatest common divisor of it and its first derivative, f^(1)(y), is to be computed.
  • the first pre-processing step or operation follows from the partitioned structure of the Sylvester matrix S(f, f^(1)) of f(y) and f^(1)(y).
  • f^(1)(y) can be scaled by a non-zero constant α, which can be interpreted as the weight of f^(1)(y) relative to the unit weight of f(y), assuming f(y) and f^(1)(y) are normalised.
  • g(y) is proportional to, but not equal to, the derivative of f(y) . It is also assumed that all of the coefficients of f(y) and g(y) are non-zero, but more generally, their geometric means are taken with respect to their non-zero coefficients. It follows from the above that
  • Equation (4.7) can be simplified.
  • equation (4.4) states that
  • deg AGCD(f, g) = deg AGCD(f, αg)   (4.9), and the foregoing two conditions are satisfied for all non-zero values of α, but the value of α that achieves optimal computational results, using a defined numerical measure, must be determined when an AGCD of the inexact polynomials f(y) and f^(1)(y) is computed.
  • α constitutes a second pre-processing operation. Determining its optimal value will be discussed after the third pre-processing operation below has been discussed.
  • the degree of the greatest common divisor of two exact polynomials is equal to the rank loss of their Sylvester resultant matrix.
  • the singular value decomposition (SVD) is usually used to calculate the rank of a matrix, but numerous computational experiments have shown that poor results are obtained because the SVD does not yield a large gap in the singular values of the Sylvester matrix of two inexact polynomials, and thus an incorrect value of the numerical rank is computed.
  • a random perturbation in one or both polynomials that have a non-constant GCD causes them to become coprime, and thus these inexact polynomials have an approximate greatest common divisor (AGCD). It follows that their Sylvester matrix has full rank, and the calculation of the degree of an AGCD of these inexact polynomials requires that they be perturbed slightly, so that the Sylvester matrix of these perturbed polynomials be non-singular.
  • Embodiments of the present invention use several methods to calculate the degree of the AGCD.
  • the calculation of the approximate quotient polynomials that are associated with an approximate greatest common divisor requires that the homogeneous equation (3.4) be solved.
  • Computing the nearest singular Sylvester matrix to S(f, α_0 g) requires that the homogeneous equation (3.4) be transformed to an approximate linear algebraic equation by moving one of its columns to the right-hand side. Embodiments of the present invention will now consider criteria for selecting the column to be moved.
  • the degree of an approximate greatest common divisor of f(y) and g(y), or equivalently of the pre-processed polynomials f(w) and g(w), is calculated from S(f, α_0 g), that is,
  • Method 1: The method of residuals
  • the degree d_r of an AGCD is equal to the index k for which the change in the residual r_k between two successive values of k is a maximum
  • Method 2: The method of the first principal angle
  • Embodiments of the present invention that use the second method are stated in terms of the first principal angle φ_{k,i}, that is, the smallest angle between the space spanned by c_{k,i} and the space spanned by the columns of A_{k,i}, for k = 1, ..., min(m, n) and i = 1, ..., 2m - 2k + 1, where
  • the degree d_φ of an AGCD is equal to the index k for which the change in φ_k between two successive values of k is a maximum
  • Equation (5.8) defines the criterion for the calculation of d_φ, but an expression for φ_{k,i} is required.
  • Algorithm 5.1 shows the implementation of methods 1 and 2 for calculating the degree of an AGCD; a simplified variant is sketched below.
  • Algorithm 5.1: The calculation of the degree of an AGCD of a polynomial and its derivative. Input: an inexact or noisy polynomial.
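  • A simplified, self-contained MATLAB sketch of this degree estimation is given below. It replaces the residual and principal-angle criteria of methods 1 and 2 with the smallest singular value of each subresultant matrix S_k, and the polynomial f is an assumed example; save the lines as a script file so the local function is available:
        f = poly([1 1 1 2]);                  % f(y) = (y-1)^3 (y-2), so deg GCD(f, f') = 2
        g = polyder(f);
        K = min(length(f), length(g)) - 1;
        s = zeros(1, K);
        for k = 1:K
            % S_k is rank deficient for k <= deg GCD, and of full rank otherwise
            s(k) = max(min(svd(subres(f, g, k))), eps);
        end
        [~, d] = max(s(2:end) ./ s(1:end-1)); % largest jump marks the degree
        disp(d)                               % displays 2
        function S = subres(f, g, k)
        % k-th subresultant matrix of f (degree m) and g (degree n), with
        % coefficients in descending powers; of order (m+n-k+1) x (m+n-2k+2)
        m = length(f) - 1; n = length(g) - 1;
        S = zeros(m + n - k + 1, m + n - 2*k + 2);
        for j = 1:n-k+1, S(j:j+m, j) = f(:); end
        for j = 1:m-k+1, S(j:j+n, n-k+1+j) = g(:); end
        end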
  • Embodiments of the present invention can use a third method to determine the degree of the AGCD of a polynomial and its derivative.
  • the above two methods are generally applicable to two arbitrary polynomials.
  • the embodiments according to the third method are only applicable to a polynomial and its derivative, in the presence of a constraint between the two. Further details of this embodiment are given below, which allow the degree d of an AGCD to be computed.
  • the third method computes the degree of an approximate greatest common divisor of a polynomial and its derivative.
  • the third method differs from the two above methods, which can be modified to apply to two arbitrary polynomials, in that it applies only to a polynomial and its derivative.
  • where S_k(w) is defined above and u_k and v_k are replaced by u_k(θ) and v_k(θ)
  • the coefficient matrix in (5.31) is of order (2m + 1) × (k + 1), where Q_{k,1} and Q_{k,2}(α_0, θ_0) are given by, respectively,
  • This analysis shows that initial estimates of a common divisor polynomial of degree k, and the associated quotient polynomials, can be calculated from the inexact polynomials f(w) and g(w), and they can be used to calculate d, which is the maximum value of k.
  • this criterion yields two estimates of d because the optimal column can be determined by the methods in Sections 5.2.1 and 5.2.2, and they may yield different estimates of the optimal column, and therefore different estimates of u_k(θ) and v_k(θ)
  • These methods for the estimation of the degree of an AGCD of f(w) and g(w) consider the variations of the condition number and smallest singular value of S_k with k. They require two properties of matrices and one property of the common divisors of the theoretically exact forms of f(w) and g(w).
  • Equation (5.40) defines a criterion for the calculation of d in terms of the stability of (5.38), and (5.41) defines d in terms of the minimum distance of S_k to singularity, where it can be assumed that the singular matrix is not a Sylvester matrix, because the smallest singular value of a matrix is a measure of its distance to singularity such that its structure is not retained in its singular form.
  • Algorithm 5.2: The calculation of the degree of an AGCD of a polynomial and its derivative. An algorithm according to an embodiment is presented for calculating the degree of an AGCD of a polynomial and its derivative.
  • Equation (6.1) is therefore replaced by
  • S(f, α_0 g) is denoted by S(α, θ) to emphasise that the method of SNTLN (structured non-linear total least norm) is used to compute the optimal values of α and θ. It follows that S(α, θ) is given by
  • structured perturbations are applied to the approximation in order to make it an equation that has an exact solution.
  • the perturbations of the coefficients of f (w) and g (w) are, respectively,
  • c_d and h_d are the q-th columns of S_d(α, θ) and of the matrix of structured perturbations, respectively,
  • Equation (6.6) is a non-linear equation for α, θ, x and z that is solved by the Newton-Raphson method.
  • the residual that is associated with an approximate solution of this equation is r(α, θ, x, z)   (6.8)
  • the perturbed residual is r̃ = r(α + δα, θ + δθ, x + δx, z + δz)
  • 0_{m-d-2} is a column vector of zeros of length m - d - 2.
  • the vectors h_d, ∂h_d/∂θ and ∂h_d/∂α have similar forms, that is,
  • the Newton-Raphson method is used to calculate z, α, and θ.
  • the j-th iteration in the Newton-Raphson method for calculating z, α, and θ is obtained from (6.10)
  • Equation (6.11) is of the form
  • Algorithm 6.1: SNTLN for a Sylvester matrix of a polynomial and its derivative
  • Algorithm 6.1 shows the application of SNTLN for calculating a structured low rank approximation of the Sylvester matrix.
  • Output: a structured low rank approximation of the Sylvester matrix, S(f(y), f^(1)(y)), of f(y) and its derivative f^(1)(y)
  • the value of the residual threshold indicated above, that is, 10, can be changed to other values.
  • the various values of the residual threshold represent a balance between the number of iterations needed and the margin of error required; the former increases as the latter decreases.
  • the polynomials defined by (6.16) and (6.17) have a non-constant GCD, but it cannot be computed directly from the Sylvester matrix. Rather, it is first preferable to compute the coprime polynomials from which the GCD of the polynomials (6.16) and (6.17) can be computed.
  • Equation (6.19) can be written in a form that is very similar to (5.31), where
  • Embodiments of the present invention are not limited to the above. Embodiments of the present invention can equally well use approximate factorisation of a polynomial and its derivative in determining the AGCDs, especially assuming that the pre-processing has been performed and that the degree of the AGCD to be found is already known from the above. Further details regarding approximate polynomial factorisation in determining AGCDs can be found in the appendix at pages 95 to 119. Nevertheless, there now follows an algorithm for the approximate factorisation of an inexact polynomial and its derivative.
  • Algorithm 7.1: Approximate factorisation of an inexact polynomial and its derivative
  • the above algorithm 7.1 enables the GCD of the corrected forms of f(w) and g(w) to be calculated. It is implemented for each of the AGCD computations in the polynomial root solver in section 2.1, and it is therefore used several times to calculate the polynomials that are required for the polynomial divisions in the polynomial root solver. It will be appreciated that the above algorithm makes reference to numerous equations, which can be found in the appendix on pages 95 to 119.
  • It can be seen from step 410 that the next step in signal processing is to perform polynomial division.
  • Embodiments of the present invention will now be described for polynomial or signal division.
  • the signal processing described above with reference to figure 4 comprises two steps of polynomial divisions after the AGCD computations have been implemented.
  • Embodiments of the present invention use any one of several methods for performing polynomial division.
  • Polynomial division is equivalent to the deconvolution of two polynomials, which is an ill-posed problem because even if the ratio p(y)/q(y) is a polynomial, a minor perturbation in either p(y) or q(y) causes the ratio to be a rational function.
  • a form of polynomial division in the form of polynomial deconvolution, which reduces to a least squares solution of linear algebraic equations will be presented, followed by two linear structure preserving matrix methods for polynomial division.
  • the polynomials, or signals, that define the inputs to the polynomial division are the results of the AGCD computations, implemented either by the Sylvester matrix or approximate factorisation of two polynomials. It is necessary for both methods that the polynomials be expressed in the same independent variable, as per example 6.2, and therefore divisions of the form (6.24) and (6.25) should be performed. The procedures below assume that the foregoing has been undertaken.
  • Section 8: Methods for polynomial division
  • g(y) is a polynomial approximation of a rational function.
  • This sequence of operations fails to consider the coupled nature of the convolutions; in particular, the polynomial q_1(y) appears in the first and second deconvolutions, the polynomial q_2(y) appears in the second and third deconvolutions, and more generally, the polynomial q_i(y) appears in the i-th and (i + 1)-th deconvolutions.
  • Equation (8.2) can be written in matrix form as
  • a matrix of structured perturbations is added to each of the Cauchy matrices
  • there exist matrices of structured perturbations, of the appropriate orders, that can be added to the Cauchy matrices.
  • Algorithm 8.1 shows how the QR decomposition can be used to solve the LSE problem.
  • Algorithm 8.1: Deconvolution using the QR decomposition
  • A is the coefficient matrix of the LSE problem; a minimal least-squares deconvolution is sketched below.
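  • As a minimal illustration of polynomial division by least squares (not the structure-preserving or LSE methods described above), the deconvolution C(q) h ≈ p can be solved with the convolution matrix. The polynomials are assumed example data, and convmtx requires the Signal Processing Toolbox:
        q = [1 -3 2];                            % divisor q(y)
        h_true = [1 4];                          % quotient to be recovered
        p = conv(q, h_true) + 1e-8*randn(1, 4);  % noisy product p(y)
        n = length(p) - length(q) + 1;           % number of quotient coefficients
        C = convmtx(q.', n);                     % convolution matrix: C*h = conv(q, h)
        h = (C \ p.').'                          % least-squares quotient, approximately [1 4]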
  • the final stage in signal processing which, in effect, amounts to recovering the signal with reduced noise, is applying a non-linear least squares refinement to each of the simple roots.
  • a first embodiment of step 412 of the flowchart of figure 4 comprises solving a non-linear least squares problem.
  • h(y_k + p_k) ≈ h(y_k) + p_k^T ∇h(y_k) + ½ p_k^T ∇²h(y_k) p_k   (9.4), and this quadratic function achieves its minimum value when its gradient with respect to p_k vanishes, where
  • the Gauss-Newton iteration is derived from the Newton iteration (9.6) by neglecting the matrix Q k , that is, the second derivatives of r k , and therefore this iteration is
  • the iteration (9.7) is better behaved than the iteration (9.6) because J_k^T J_k is, at least, positive semi-definite, but Q_k may or may not be positive definite.
  • the matrix inverse in (9.7) exists if the rank of J_k is equal to m, that is, J_k has full column rank, and this will be assumed.
  • if the residuals are small, the iterations (9.6) and (9.7) behave similarly, and convergence of the Gauss-Newton method is almost quadratic. If, however, the residuals are large, then the convergence of the Gauss-Newton iteration may be substantially inferior to the convergence of the Newton iteration.
  • the Gauss-Newton iteration solves linear problems with only one iteration, and it has fast local convergence on weakly nonlinear problems.
  • the Gauss-Newton iteration is used for the refinement of the initial estimates of the roots of the polynomial equations (2.4).
  • Embodiments of the present invention that use the method of non-linear least squares to refine the initial estimates of the roots of a polynomial equation will now be described, assuming that the multiplicities of the roots are known.
  • p ~ a denotes the correspondence between the polynomial p = p(y) and a, the vector of its normalised coefficients. If the distinct roots of p(y) are α_1, ..., α_s, and the root α_i has multiplicity m_i, then
  • J(:, j) is equal to the j-th column of J
  • the minimisation of (9.13); a minimal Gauss-Newton sketch follows.
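  • A minimal Gauss-Newton sketch of this refinement, with the multiplicities assumed known, a finite-difference Jacobian standing in for the analytic one, and assumed example data, is given below (save as a MATLAB script so the local functions are available):
        a = poly([1 1 1 2 2 3]);          % coefficients of the polynomial to be matched
        m = [3 2 1];                      % known multiplicities of the distinct roots
        x = [1.02; 1.97; 3.05];           % perturbed initial estimates of the roots
        res = @(x) (poly_from_roots(x, m) - a).';
        for k = 1:10
            J = jac_fd(res, x);                   % finite-difference Jacobian
            x = x - (J.' * J) \ (J.' * res(x));   % Gauss-Newton step, as in (9.7)
        end
        disp(x.')                         % displays approximately [1 2 3]
        function p = poly_from_roots(x, m)
        % coefficients of prod_i (y - x_i)^(m_i), in descending powers
        p = 1;
        for i = 1:numel(x), p = conv(p, poly(x(i)*ones(1, m(i)))); end
        end
        function J = jac_fd(f, x)
        % forward-difference Jacobian of the residual function f at x
        h = 1e-7; r0 = f(x); J = zeros(numel(r0), numel(x));
        for j = 1:numel(x)
            e = zeros(size(x)); e(j) = h;
            J(:, j) = (f(x + e) - r0) / h;
        end
        end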
  • Appendices 2 to 14 contain MATLAB code, as is familiar to those skilled in the art, for implementing embodiments of the present invention.
  • the embodiments demonstrate the ability to process the input signal, that is, inexact data, which is expressed in the form of polynomials, in particular, inexact polynomials, and to calculate the factors of those polynomials, from which the greatest common factors can be deduced and, thereby, the signal of interest, such as, for example, the video image, or transmitted data as indicated above.
  • the recovered factors can be used to recover the originally formed or transmitted (uncorrupted) signal, as will be appreciated from the noise reductions noted below.
  • the Matlab code uses a function called samplepoly3.
  • the function samplepoly3 is used to test the efficacy of embodiments of the present invention using different signals or inexact polynomials according to the parameter passed.
  • Executing the Matlab code produces numerous graphs and the final output comprises two tables.
  • the first table shows the results before a non-linear least squares algorithm is implemented and the second table shows the results after the least squares algorithm has been implemented. As one skilled in the art would expect, the results in the second table are better than the results in the first table.
  • This value can be varied to assess the efficacy of embodiments of the present invention in the presence of noise. It can be appreciated, upon executing the code, that embodiments of the present invention still perform well even if the noise is three orders of magnitude greater than presently set.
  • the noise to signal amplitude is used to create corrupted signals, or inexact polynomials, from exact polynomials. This is done in order to demonstrate the performance of embodiments of the present invention in the presence of noise and so that the noise reduction achieved by embodiments of the present invention can be assessed.
  • samplepoly3(22) selects case 22 as the relevant polynomial, that is,
  • the first column represents the roots of the polynomial and the second column represents the multiplicity of those roots.
  • the polynomial is, in fact, an exact polynomial.
  • noise is added to the coefficients of the polynomial according to the set noise figure; one plausible form of this corruption is sketched below.
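  • The following one-line sketch shows one plausible form of this corruption (an assumption for illustration, not a copy of the appendix code): componentwise multiplicative noise at a set noise-to-signal amplitude ec:
        ec = 1e-8;                         % noise-to-signal amplitude
        a_inexact = a_exact .* (1 + ec*(2*rand(size(a_exact)) - 1));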
  • embodiments of the present invention implement digital signal processing and, more particularly, digital filtering.
  • the user is requested to indicate which polynomial they want to use as a test, namely, one selected from the database provided in the function "samplepoly.m" or a randomly generated polynomial.
  • the first polynomial from the database was selected.
  • the final table in the output shows that the correct number of roots was identified, as were the multiplicities, and that the noise component, that is, the error, was reduced from the initial value of between 10^-9 and 10^-6 added to each coefficient to the error values listed for embodiments of the present invention that use approximate polynomial factorisation, as can be appreciated from Appendix 2, lines 20 and 21.
  • the noisy signal represented by the inexact polynomial, has been filtered to remove or at least reduce the noise.
  • Example 1 given below is an example of the output of an embodiment of the present invention that uses approximate polynomial factorisation in filtering the signal or polynomial.
  • Figure 5 illustrates an embodiment of the present invention within the context of using the greatest common divisor to recover a signal 500.
  • a signal output by a transmitter is received by a receiver as a received signal, Rx.
  • the received signal, Rx, has been divided into two portions 502 and 504 spanning a relatively short time period over which it is acceptable to consider the channel conditions, or the transfer function, to be constant.
  • r_1(x) = h(x) ⊗ t_1(x), where h(x) is the transfer function of the communication channel, t_1(x) is the portion of the transmitted signal corresponding to the first portion 502, and ⊗ is the convolution operator.
  • r_2(x) = h(x) ⊗ t_2(x), where h(x) is the transfer function of the communication channel, t_2(x) is the portion of the transmitted signal corresponding to the second portion 504, and ⊗ is the convolution operator.
  • R_1(z) = H(z)·T_1(z) and R_2(z) = H(z)·T_2(z), from which the greatest common factor can be determined. It can be appreciated that the greatest common factor is H(z). Once H(z) has been determined, both T_1(z) and T_2(z) can be recovered.
  • a curve could be fitted to data samples representing the received signal 500. Such a curve is then considered to be an inexact polynomial that could be processed as described above to recover the originally transmitted signal, either free from noise or with substantially reduced noise.
  • the apparatus 600 comprises an input interface 602 for receiving data 604 representing a first signal.
  • the data 604 is stored in a memory 606.
  • the memory 606 stores code 608 arranged, when executed by a processor 610 having a floating point arithmetic unit 612, to implement the above mathematics or signal processing and thereby produce second data 614 representing a second signal recovered or derived from the first signal.
  • the second data 614 is output via an output interface 616.
  • polynomials representing signals or data such as, for example, video data or audio data
  • embodiments are not limited thereto.
  • Embodiments can be realised in which the polynomials represent three-dimensional objects being designed in a CAD system or the like.
  • the polynomial or signal of interest can represent the curve of the intersection between two bodies, which is usually a very high order polynomial, such as a polynomial of degree O(64).
  • floating point environments operate within a constrained environment, which is the precision with which numbers can be represented. Clearly such precision is limited by the word length used by the computer system.
  • the limited word length operates, in effect, as quantisation noise, which causes the exact polynomial to become a noisy polynomial.
  • embodiments of the present invention can be realised in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, device or integrated circuits or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape or the like. It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs comprising instructions that, when executed, implement embodiments of the present invention.
  • embodiments provide machine executable code for implementing a system, device or method as described herein or as claimed herein and machine readable storage storing such a code. Still further, such programs may be conveyed electronically via any medium such as a communication signal carried over a wired or wireless connection and embodiments suitably encompass the same.
  • the algorithm is designed to compute multiple roots of a polynomial
  • rand('seed', 23);
  • % stream = RandStream('mcg16807', 'Seed', 23);
  • polysource = input('1 — from the database: 2 — a random polynomial: ');

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Operations Research (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Error Detection And Correction (AREA)
  • Noise Elimination (AREA)
  • Optical Communication System (AREA)

Abstract

Embodiments of the present invention relate to signal processing, in particular, digital signal processing, in which a noisy signal is processed to recover the intended signal with the noise having been filtered so that it is removed or at least substantially reduced.

Description

SIGNAL PROCESSING SYSTEM AND METHOD
Priority
The present application claims priority from patent applications nos. GB1004610.0 and GB1019534.5, filed in the UK on March 19th, 2010 and November 18th, 2010 respectively, both of which are incorporated by reference herein in their entirety.
Field of the invention
Embodiments of the present invention relate to signal processing systems and methods and to a floating point environment.
Background to the invention
Signal processing is performed with a view to recovering an intentional, or exact, signal from a corrupted, or inexact, signal. There are many digital signal processing techniques that are directed to performing various tasks such as, for example, signal synchronization, noise reduction, image processing, digital remastering of data representing video and/or audio, pattern recognition and pattern matching. The accuracy or efficacy of the foregoing depends, at least in part, on the degree of corruption of the signal and the ability of the techniques to perform well in the face of that corruption. Furthermore, with the exception of a limited number of specialist mainframe computers, modern computers use processors that implement floating point arithmetic in accordance with the IEEE Standard for Floating-Point Arithmetic. Still further, computers and/or computer languages allow or require that some or all arithmetic be carried out using IEEE 754 formats and operations. One skilled in the art appreciates that the current version, at the time of filing, is IEEE 754-2008, which was published in August 2008. The current standard comprises nearly all of the content of the original IEEE 754-1985 standard and the IEEE Standard for Radix-Independent Floating-Point Arithmetic, that is, IEEE 854-1987. One skilled in the art appreciates that the precision with which processors and computers can perform floating-point arithmetic is limited. The limitation of precision leads to rounding errors, notwithstanding various rounding strategies that are used in an attempt to mitigate the adverse effects of such rounding. Rounding errors are another form of corruption or noise that can corrupt a given signal or data. This noise is particularly acute when performing integer arithmetic within a floating point environment.
The limitations regarding the precision of processors to perform floating-point arithmetic are regularly encountered in many data processing activities, such as image processing, CAD/CAM modelling, etc. Indeed, the very capable and widely respected MATLAB™ software encounters a difficulty in certain situations. For example, MATLAB provides an incorrect solution to y^4 - 4y^3 + 6y^2 - 4y + 1 = 0. The solution should be the root y = 1, with multiplicity 4. However, MATLAB returns the following simple roots: 1.0002, 1.0000+0.0002i, 1.0000-0.0002i, 0.9998, where i = √-1.
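This behaviour can be reproduced in a few lines of MATLAB (the computed root values vary slightly with version and platform):
    p = poly([1 1 1 1]);   % coefficients of (y - 1)^4 = y^4 - 4y^3 + 6y^2 - 4y + 1
    r = roots(p)           % four perturbed simple roots instead of one quadruple root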
The precision with which floating point environments can perform arithmetic is known as machine precision. The performance of floating-point arithmetic environments is limited, as indicated above, by a machine's precision, and is particularly problematic for situations in which multiple roots are encountered or anticipated.
It is an object of embodiments of the present invention to at least mitigate one or more problems of the prior art.
Brief Description of the Drawings
Embodiments of the present invention will now be described by way of example with reference to the accompanying drawings in which:
Figure 1 shows a digital signal processing system;
Figure 2 illustrates erroneous roots of (y - 1)^100;
Figure 3 depicts erroneous roots of (y - 1)^m;
Figure 4 shows a flow chart for signal processing according to an embodiment;
Figure 5 illustrates an embodiment of the present invention applied to processing a transmitted signal.
Detailed Description of Embodiments of the Invention
Referring to figure 1, there is shown schematically a processing system 100. The system 100 comprises an input or interface 102 for receiving a signal, or, more accurately, data 104 representing or associated with such a signal, to be processed. In an embodiment, the data 104 is preferably inexact data. Inexact data is data that is not as intended such as, for example, data that has been corrupted in some manner or that bears noise. For example, the inexact data 104 might represent a blurred, or otherwise degraded, representation of an original image 106 that has been corrupted by noise 108. The signal can represent any signal or data and is preferably digital in form such as, for example, a one-dimensional signal or data set, two-dimensional signals or data sets representing, for example, images or speech, three-dimensional signals or data sets representing at least one of solids, volumes and other bodies or voxels (volumetric pixels), or an n-dimensional signal or data set.
The input 102 can be realised in the form of storage that is able to store all of the inexact data to be processed or one or more selected data of the inexact data. Such an input 102 can be realised in the form of memory either directly or in the form of memory allocation associated with a parameter call of a function of the digital signal processor 110. The memory has an interface via which the data 104 can be stored and accessed.
The system 100 comprises a digital signal processor 110 for performing the processing techniques such as digital signal processing (DSP) techniques for cleaning or filtering the inexact data to produce, via an output or interface 112, exact data 114 in the form of an image that has had the noise 108 removed or at least reduced or in relation to which the adverse effect of the noise has been at least reduced and, preferably, substantially removed.
The system described above might be applied in the context of image processing, such as, for example, blind image deconvolution, which is the process of recovering or estimating a true or more accurate image from one or more than one degraded image. This process is performed without knowledge of the blurring functions that degraded the image(s). Recovery of such a true or less degraded image from such one or more than one degraded image can be achieved by determining or at least approximating, in the z-domain, the greatest common divisor (GCD) of two polynomials corresponding to the z-transforms of the blurred images; the approximate GCD represents the true image or less degraded image. The recovery process would be implemented and performed by the digital signal processor 110.
One skilled in the art appreciates that the degraded or distorted image shown in figure 1 is the result of the original image plus noise. In practice, the inexact image 104 will represent the convolution of the original image 106 with a transfer function. The transfer function can be considered to be an unknown blurring function or any other function that causes the inexact data to differ from the exact data.
If a first instance of inexact data or noisy image 104 is represented by a first function f_1(x, y), the original image is represented by o(x, y), a first transfer or first blurring or distortion function is represented by d_1(x, y), a second transfer function or second blurring function is represented by d_2(x, y), and a second instance of inexact data or noisy image is represented by a second function f_2(x, y), then
f_1(x, y) = d_1(x, y) ⊗ o(x, y) and f_2(x, y) = d_2(x, y) ⊗ o(x, y) or, in the z-domain,
F_1(z) = D_1(z) × O(z) and F_2(z) = D_2(z) × O(z), where ⊗ denotes convolution, × denotes multiplication and z = [z_1, z_2]. It can, therefore, be appreciated that the original image can be recovered by determining the greatest common factor of F_1(z) and F_2(z), which would be O(z). The inverse transform of O(z) could then be taken to recover the original image o(x, y), or at least an estimate thereof.
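A one-dimensional analogue of this observation can be checked directly in MATLAB; the signal and blurring coefficients below are an assumed toy example. On exact data the common factor divides both blurred signals exactly, whereas noisy data requires the AGCD machinery described later:
    o  = [1 -3 2];              % coefficients of the original signal O(z)
    d1 = [1 5];  d2 = [1 -7];   % two different blurring functions D1(z), D2(z)
    f1 = conv(d1, o);           % F1(z) = D1(z) O(z): polynomial product = convolution
    f2 = conv(d2, o);           % F2(z) = D2(z) O(z)
    [q1, r1] = deconv(f1, o);   % r1 is zero: O(z) divides F1(z) exactly
    [q2, r2] = deconv(f2, o);   % r2 is zero: O(z) divides F2(z) exactly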
The same applies if one skilled in the art is faced with a single inexact or distorted image and an assumption that the distortion function, that is, the corrupting function introducing noise and/or other distortions, is uniformly applicable across the single image, provided that separate portions of the single inexact or distorted image are processed with a view to determining firstly the distortion function, as will be appreciated from the following.
If a first instance of inexact data or noisy image is represented by f_1(x, y), corresponding to a first portion o_1(x, y) of an original image o(x, y) that has been subjected to distortion d_1(x, y), such that f_1(x, y) = d_1(x, y) ⊗ o_1(x, y), and a second instance of inexact data or noisy image is represented by f_2(x, y), corresponding to a second portion o_2(x, y) of the original image o(x, y) that has been subjected to the same distortion d_1(x, y), such that f_2(x, y) = d_1(x, y) ⊗ o_2(x, y), then determining the distortion function d_1(x, y) reduces to determining the inverse transform of the greatest common factor of F_1(z) and F_2(z). Once the distortion has been determined, it can be used to determine or recover the original image o(x, y).
One skilled in the art appreciates that F_1(z), F_2(z), D_1(z), D_2(z), and O(z) are the two-dimensional z-transforms of f_1(x, y), f_2(x, y), d_1(x, y), d_2(x, y), and o(x, y). Although the above embodiments have been described with reference to bivariate functions and transforms, embodiments are not limited thereto. Embodiments can equally well be realised in which the functions and transforms are univariate or multivariate. Similarly, the embodiments described herein are not limited to image processing.
Therefore, the digital signal processor 110 is adapted or arranged to make the above determinations of the greatest common divisor of input data or an input signal. One skilled in the art appreciates that determining the greatest common divisor finds significant application in geometric modelling and computer aided geometric design because it can be used to determine if two curves or surfaces associated with one or more bodies intersect. Furthermore, any signal can be represented as a polynomial. The more complex the signal, the higher the order of the polynomial needed to represent that signal. Given a set of data samples or measurements representing a signal, such as a received radio signal or an intersection between two surfaces of two or more solid bodies, either physically or geometrically modelled, a polynomial can be determined that fits such a set. Embodiments of the present invention assume that such a fitted polynomial is an inexact polynomial, that is, a polynomial having coefficients that have been corrupted by noise. Embodiments of the invention find the exact, or at least less inexact, roots of such an inexact polynomial, which removes the noise, or at least reduces it substantially; that is, in essence, the polynomial is filtered to recover the original signal. One skilled in the art appreciates that there are two separate issues that must be considered when determining the roots of a polynomial, or when filtering a signal or other data set represented by a polynomial by making its inexact coefficients less inexact. Firstly, the computation of the greatest common divisor (GCD) has applications in geometric modelling and computer aided geometric design because it can be used to determine if two curves or surfaces intersect; with respect to image processing, the GCD is the deblurred image, that is, the image that is computed given the two blurred images f_1(x, y) and f_2(x, y), and, in particular, computing the roots of a polynomial is not required. Secondly, the connections between computing the GCD of two polynomials and computing the roots of one of the polynomials are: (a) the computation of the GCD of two polynomials does not require the computation of the roots of the polynomials, and (b) the computation of the roots of a polynomial f(y) requires the computation of the GCD of f(y) and its derivative. Although the above embodiment has been described in terms of processing an image, that is, in terms of image processing, embodiments are not limited thereto. Embodiments can be realised in which other signals are processed. Furthermore, the data processed in the above embodiment is two dimensional data. However, embodiments can be realised in which one dimensional data is processed or data having n dimensions is processed, where n is greater than or equal to 2, such as, for example, data representing a volume or 3D objects. The digital signal processor (DSP) can be any type of digital signal processor such as, for example, an NXP Semiconductor DSP based on TriMedia VLIW technology, optimised for audio and video processing, Texas Instruments' C6000 series DSPs or Open Multimedia Application Platform, Freescale's multi-core DSP family, the MSC81xx, or ARM's Cortex-A8, or any other digital signal processor.
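For exact data, the relationship between the GCD degree and the rank loss of the Sylvester resultant matrix (discussed in detail below) can be verified directly in MATLAB; the polynomial here is an assumed example:
    f = poly([1 1 1 2]);                    % f(y) = (y-1)^3 (y-2), degree m = 4
    g = polyder(f);                         % f'(y), degree n = 3
    m = length(f) - 1;  n = length(g) - 1;
    S = zeros(m + n);                       % Sylvester matrix S(f, g) of order m + n
    for k = 1:n, S(k:k+m, k)   = f(:); end  % n shifted copies of f
    for k = 1:m, S(k:k+n, n+k) = g(:); end  % m shifted copies of g
    d = m + n - rank(S)                     % rank loss = deg gcd(f, f') = 2
With inexact coefficients, rank(S) returns the full order m + n and this simple computation fails, which is exactly the problem the AGCD methods described below address.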
Furthermore, although the above embodiment has been described with reference to image processing, one skilled in the art appreciates that embodiments of the present invention are equally applicable to processing other digital signals such as, for example, audio and/or speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for communications, control of systems, biomedical signal processing, seismic data processing, 3D modelling, ray tracing, computer aided design and/or manufacture, etc.
One skilled in the art appreciates that such data, that is, such signals, can be represented as polynomials. Any noise or corruption of the signals manifests itself in the form of noisy coefficients or noisy roots of the polynomials, that is, inexact coefficients or inexact roots.
Embodiments of the present invention will herein after refer to signal, data and polynomial interchangeably and synonymously. One skilled in the art appreciates that processing an inexact polynomial with a view to at least reducing, or removing, the associated noise is, in substance, filtering the signal of interest and calculating the roots of such an inexact polynomial or signal, or calculating the coefficients of such a polynomial, is also filtering or a form of digital signal processing. Knowing the roots allows the original signal, or at least a better
approximation thereto, to be recovered.
Section 1
Known techniques for calculating roots are well-known within the art such as, for example, the Newton-Raphson method. However, the performance or results of such techniques deteriorate as the degree of the polynomial increases, and/or the multiplicity of one or more roots increases, and/or the roots become more closely spaced. Still further, within a floating point data processing system or floating point processor (referred to generically herein as a floating point environment), one skilled in the art appreciates that rounding errors can be sufficient to cause totally incorrect results. Indeed, such rounding errors, that is, the machine precision, can create noise in a hitherto noise-free signal. This is exacerbated when the coefficients of such roots are subject to uncertainty, which is the case for a signal corrupted by noise. One skilled in the art appreciates that an intended or transmitted signal can be represented as a polynomial such as, for example,
f(y) = Σ_{i=0}^{m} a_i y^{m-i}    (1.1)
However, once corrupted by noise, which adversely affects the coefficients, the signal becomes
f^(1)(y) = Σ_{i=0}^{m} (a_i + δa_i) y^{m-i}    (1.1a)
where δa_i is the noise added to the i-th coefficient a_i. Techniques for solving f(y) = 0 and/or f^(1)(y) = 0 are the subject of a vast amount of research, since such complex curves arise often in the calculation of the intersection points of curves and surfaces used in geometric modelling within, for example, CAD systems. Known techniques for calculating roots of such polynomials include, for example, Bairstow's method, Graeffe's root-squaring method, the Jenkins-Traub algorithm, Laguerre's method, Muller's method and Newton's method. The foregoing techniques yield satisfactory results when used in relation to an average polynomial, that is, a polynomial of moderate degree with simple and well-separated roots, assuming that a good starting point in the iterative scheme is used, but exceptions exist such as, for example, the well-known Wilkinson polynomial
$$f(y) = \prod_{i=1}^{20} (y - i) = (y-1)(y-2)\cdots(y-20) \qquad (1.2)$$
because its roots are very difficult to compute reliably. Generally, as the degree of a polynomial increases and/or the multiplicity of a root thereof increases, the quality of the results of the known techniques deteriorates. Still further, corruption of the coefficients of such a polynomial exacerbates the situation.
Simple examples of the poor performance of known signal processing techniques, and/or of the effect of floating point environment rounding errors, in accurately computing the roots of a polynomial, especially one that has multiple roots, will now be given.
Consider a signal represented by the following fourth order polynomial

$$y^4 - 4y^3 + 6y^2 - 4y + 1 = (y-1)^4 \qquad (4)$$

which has a root of y = 1 with multiplicity 4. The roots function of the well-known and much respected MATLAB software, which uses the QR algorithm to compute the eigenvalues of the companion matrix, returns the following roots: 1.0002, 1.0000+0.0002i, 1.0000-0.0002i and 0.9998, which clearly shows the effect of rounding errors due to floating point arithmetic. These errors are O(10^-16), yet they give a relative error in the solution of 2×10^-4.
Consider the roots of a signal represented by the polynomial (y-1)^100. Determining the roots using MATLAB's roots function yields the results shown in figure 2, which shows a plot 200 of the roots of (y-1)^100 calculated using MATLAB. It can be appreciated that the multiple root has deteriorated into 100 distinct roots, as opposed to the actual single root y = 1 of multiplicity 100.
Consider a third example in which the constant coefficient of the polynomial (y-1)^10 is perturbed by an amount -ε, which represents noise or some other corruption of the original signal represented by the polynomial (y-1)^10. The roots of the perturbed polynomial are the solutions of (y-1)^10 - ε = 0. The roots are given by

$$y = 1 + \varepsilon^{\frac{1}{10}} \exp\left( \frac{i\, 2\pi k}{10} \right), \qquad k = 0, 1, \ldots, 9,$$

if ε = 2^-10, and therefore the perturbed roots lie on a circle in the complex plane centred at (1,0) having a radius of ½. The relative error in the constant coefficient of 2^-10 = 1/1024 causes a relative error of ½ in the solution. If the more general equation (y-1)^m = 0 is considered with a coefficient perturbation or noise corruption of 2^-10, the general solution is

$$y = 1 + 2^{-\frac{10}{m}} \exp\left( \frac{i\, 2\pi k}{m} \right), \qquad k = 0, 1, \ldots, m-1.$$

The plot 300 of figure 3 shows the roots of (y-1)^m = 0 for m = 1, 6, 11, ... when the constant term is perturbed by 2^-10. It can be appreciated that as m → ∞, the roots lie on a circle of unit radius centred at (1,0). One skilled in the art appreciates that these results agree with the remarks of S. Goedecker, Remark on algorithms to find roots of polynomials, SIAM J. Sci. Stat. Comput., 15:1059-1063, 1994, who performed comparative numerical tests on the Jenkins-Traub algorithm, a modified version of Laguerre's algorithm and an application of the QR decomposition to the companion matrix of a polynomial. Goedecker notes on page 1062 that none of the methods give acceptable results for polynomials of degrees higher than 50, and notes on page 1063 that if roots of high multiplicity exist, any such method has to be used with caution. It can be appreciated that the errors in the computed roots in the first two examples above arise due to rounding errors within a floating point environment, and due to noise corrupting the signal in the third example.
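By way of illustration only, the first and third examples above can be reproduced with a few lines of numerical code. The following sketch assumes NumPy, whose roots function computes the eigenvalues of the companion matrix, as does the MATLAB function referred to above; it is illustrative and does not form part of the claimed method.

```python
import numpy as np

# (y - 1)^4: the quadruple root breaks up into four simple roots
print(np.roots(np.poly([1.0, 1.0, 1.0, 1.0])))

# (y - 1)^10 - 2**-10 = 0: the perturbed roots lie on a circle of
# radius (2**-10)**(1/10) = 1/2 centred at (1, 0)
c = np.poly([1.0] * 10)            # coefficients of (y - 1)^10
c[-1] -= 2.0 ** -10                # perturb the constant coefficient by -epsilon
print(np.abs(np.roots(c) - 1.0))   # all magnitudes close to 0.5
```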
Section 2

A signal processing technique or simple polynomial root finder according to an embodiment of the invention will now be described. One skilled in the art appreciates that a multiple root is ill-conditioned with respect to random perturbations because they cause it to break up into a cluster of simple roots, but that it is stable with respect to perturbations that maintain its multiplicity. A simple root is in general better conditioned than a multiple root, and it is, therefore, instructive to consider a polynomial root finder that reduces to the determination of the roots of several polynomials, each of which contains only simple roots. The multiplicities of the roots are calculated by a sequence of greatest common divisor (GCD) computations. One skilled in the art will appreciate that the terms "polynomial root finder" and "signal processing" or "signal filtering" are used synonymously herein, and that the same applies to the terms "signal" and "polynomial"; the latter being a convenient expression of the former.
Consider the polynomial

$$f(y) = (y - y_1)^{r_1} (y - y_2)^{r_2} \cdots (y - y_l)^{r_l}\, g_0(y),$$

where r_i ≥ 2, i = 1, ..., l, g_0(y) contains only simple roots, and the multiple roots are arranged such that r_1 ≥ r_2 ≥ r_3 ≥ ... ≥ r_l. Since a root of multiplicity r_i of f(y) is a root of multiplicity r_i - 1 of its derivative f^(1)(y), it follows that

$$f^{(1)}(y) = (y - y_1)^{r_1 - 1} (y - y_2)^{r_2 - 1} \cdots (y - y_l)^{r_l - 1}\, g_1(y),$$

where g_0(y) and g_1(y) are coprime polynomials, and the roots of g_1(y) are simple. It follows that

$$q_1(y) := \mathrm{GCD}\left( f(y), f^{(1)}(y) \right) = (y - y_1)^{r_1 - 1} (y - y_2)^{r_2 - 1} \cdots (y - y_l)^{r_l - 1},$$

and thus the polynomial f(y)/q_1(y) is equal to the product of the distinct factors of f(y),

$$\frac{f(y)}{q_1(y)} = (y - y_1)(y - y_2)\cdots(y - y_l)\, g_0(y).$$

The greatest common divisor of q_1(y) and its derivative q_1^(1)(y) is

$$q_2(y) = (y - y_1)^{r_1 - 2} (y - y_2)^{r_2 - 2} \cdots (y - y_k)^{r_k - 2},$$

where r_i > 2, i = 1, ..., k, and thus

$$\frac{q_1(y)}{q_2(y)} = (y - y_1)(y - y_2)\cdots(y - y_l),$$

which is the product of all of the distinct factors of f(y) whose roots have multiplicity greater than or equal to 2. The above process of GCD computations and polynomial divisions is repeated, and it terminates when the division yields a polynomial of degree one, corresponding to the divisor of f(y) of maximum degree.
To generalise the above process, let w_1(y) be the product of all linear factors of f(y), let w_2(y) be the product of all quadratic factors of f(y), and in general, let w_i(y) be the product of all factors of degree i of f(y). If f(y) does not contain a factor of degree k, then w_k(y) is set equal to a constant, which can be assumed to be unity. It follows that, to within a constant multiplier,

$$f(y) = w_1(y)\, w_2^2(y)\, w_3^3(y) \cdots w_{r_{\max}}^{r_{\max}}(y),$$

$$q_1(y) = \mathrm{GCD}\left( f(y), f^{(1)}(y) \right) = w_2(y)\, w_3^2(y)\, w_4^3(y) \cdots w_{r_{\max}}^{r_{\max}-1}(y). \qquad (2.1)$$

Similarly,

$$q_2(y) = \mathrm{GCD}\left( q_1(y), q_1^{(1)}(y) \right) = w_3(y)\, w_4^2(y)\, w_5^3(y) \cdots w_{r_{\max}}^{r_{\max}-2}(y),$$

$$q_3(y) = \mathrm{GCD}\left( q_2(y), q_2^{(1)}(y) \right) = w_4(y)\, w_5^2(y)\, w_6^3(y) \cdots w_{r_{\max}}^{r_{\max}-3}(y), \qquad (2.2)$$

$$q_4(y) = \mathrm{GCD}\left( q_3(y), q_3^{(1)}(y) \right) = w_5(y)\, w_6^2(y)\, w_7^3(y) \cdots w_{r_{\max}}^{r_{\max}-4}(y),$$

and the sequence terminates at q_{r_max}(y), which is a constant. A sequence of polynomials h_j(y), j = 1, ..., r_max, is defined such that

$$h_1(y) = \frac{f(y)}{q_1(y)} = w_1(y)\, w_2(y)\, w_3(y) \cdots, \quad h_2(y) = \frac{q_1(y)}{q_2(y)} = w_2(y)\, w_3(y) \cdots, \quad h_3(y) = \frac{q_2(y)}{q_3(y)} = w_3(y) \cdots,$$

$$h_{r_{\max}}(y) = \frac{q_{r_{\max}-1}(y)}{q_{r_{\max}}(y)} = w_{r_{\max}}(y),$$

and thus all of the functions w_1(y), w_2(y), w_3(y), ..., w_{r_max}(y) are determined from

$$w_i(y) = \frac{h_i(y)}{h_{i+1}(y)}, \quad i = 1, \ldots, r_{\max} - 1, \qquad w_{r_{\max}}(y) = h_{r_{\max}}(y). \qquad (2.3)$$

The equations

$$w_1(y) = 0, \quad w_2(y) = 0, \quad w_3(y) = 0, \quad \ldots, \quad w_{r_{\max}}(y) = 0 \qquad (2.4)$$

contain only simple roots, and they yield the simple, double, triple, etc. roots of f(y). In particular, if y_0 is a root of w_i(y), then it is a root of multiplicity i of f(y). The pseudo-code algorithm expressed below realises the above, that is, it is an algorithm for calculating the roots of a polynomial.
Input : A polynomial f(y)
Output : The roots of f(y)
Begin
1. Set q_0 = f
2. Calculate the GCD of f and f^(1), q_1 = GCD(f, f^(1))
3. If degree q_1 = 0 (which means that f(y) has simple roots), solve f(y) = 0; go to End
4. Calculate h_1 = q_0 / q_1
5. Set j = 2
6. While degree q_{j-1} > 0 do
       Calculate the GCD of q_{j-1} and its derivative, q_j = GCD(q_{j-1}, q_{j-1}^(1))
       Calculate h_j = q_{j-1} / q_j
       Set j = j + 1
   End While
7. For j = 2, ..., do
       Calculate w_{j-1} = h_{j-1} / h_j (with the last h taken as unity)
       Calculate the roots of w_{j-1}; they are of multiplicity j - 1
   End For
End
Applying the above to an example, such as f(y) = y^6 - 3y^5 + 6y^3 - 3y^2 - 3y + 2, whose derivative is f^(1)(y) = 6y^5 - 15y^4 + 18y^2 - 6y - 3, it follows that

$$q_1(y) = \mathrm{GCD}\left( f(y), f^{(1)}(y) \right) = y^3 - y^2 - y + 1$$

and

$$h_1(y) = \frac{q_0(y)}{q_1(y)} = y^3 - 2y^2 - y + 2.$$

Therefore,

$$q_2(y) = \mathrm{GCD}\left( q_1(y), q_1^{(1)}(y) \right) = y - 1 \quad \text{and} \quad q_3(y) = \mathrm{GCD}\left( q_2(y), q_2^{(1)}(y) \right) = 1.$$

The polynomials h_1(y), h_2(y) and h_3(y) are

$$h_1(y) = \frac{q_0(y)}{q_1(y)} = y^3 - 2y^2 - y + 2, \quad h_2(y) = \frac{q_1(y)}{q_2(y)} = y^2 - 1 \quad \text{and} \quad h_3(y) = \frac{q_2(y)}{q_3(y)} = y - 1,$$

and thus the polynomials w_1(y), w_2(y) and w_3(y) are

$$w_1(y) = \frac{h_1(y)}{h_2(y)} = y - 2, \quad w_2(y) = \frac{h_2(y)}{h_3(y)} = y + 1 \quad \text{and} \quad w_3(y) = h_3(y) = y - 1.$$

Therefore, the factors of f(y) are

$$f(y) = w_1(y)\, w_2^2(y)\, w_3^3(y) = (y-2)(y+1)^2(y-1)^3.$$

One skilled in the art appreciates that f(y) has a triple root at y = 1, a double root at y = -1 and a simple root at y = 2.
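By way of a non-limiting sketch, the algorithm above can be realised in exact (symbolic) arithmetic, for example with the SymPy library, in which the GCD computations are well posed; the function name roots_by_gcd is merely illustrative. The floating point difficulties that motivate the remainder of this description are considered next.

```python
import sympy as sp

def roots_by_gcd(f, y):
    """Return {root: multiplicity} via the sequences q_j and h_j = q_{j-1}/q_j."""
    q = [f]
    while sp.degree(q[-1], y) > 0:
        q.append(sp.gcd(q[-1], sp.diff(q[-1], y)))   # q_j = GCD(q_{j-1}, q_{j-1}')
    h = [sp.quo(q[j], q[j + 1], y) for j in range(len(q) - 1)]
    h.append(sp.Integer(1))
    result = {}
    for i in range(len(h) - 1):
        w = sp.quo(h[i], h[i + 1], y)   # w_{i+1}: factors of multiplicity i + 1
        for root in sp.roots(sp.Poly(w, y)):
            result[root] = i + 1
    return result

y = sp.symbols('y')
f = y**6 - 3*y**5 + 6*y**3 - 3*y**2 - 3*y + 2
print(roots_by_gcd(f, y))   # root 2 (simple), root -1 (double), root 1 (triple)
```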
However, one skilled in the art appreciates that the above algorithm faces implementation difficulties within a floating point environment, which are:

(1) The computation of the GCD of two polynomials is an ill-posed problem because it is not a continuous function of their coefficients. In particular, the polynomials f(y) and g(y) may have a non-constant GCD, but the perturbed or noisy polynomials f(y) + δf(y) and g(y) + δg(y) may be coprime. Even if f(y) and g(y) are specified exactly and have a non-constant GCD, rounding errors may be sufficient to imply that they are coprime when the GCD is computed in a floating point environment.
(2) The determination of the degree of the GCD of two polynomials reduces to the determination of the rank of a resultant matrix, but the rank of a matrix is not defined in a floating point environment. In particular, the rank loss of a resultant matrix is equal to the degree of their GCD, and a minor perturbation in one or both of the polynomials is sufficient to cause their resultant matrix to have full rank, which suggests that the polynomials are coprime. The determination of the rank of a noisy matrix is a challenging problem that arises in many applications. Such a minor perturbation can arise due to at least one of external noise sources such as a radio or propagation environment or internal noise sources such as a floating point environment as mentioned above.
(3) Polynomial division, which reduces to the deconvolution of their coefficients, is an ill-posed problem that must be implemented with care in order to obtain a computationally reliable solution.
(4) The data in many practical examples is inexact, that is, it has been corrupted, for example, by noise or otherwise distorted, and thus the polynomials are only specified within a tolerance. The given inexact polynomials are, with high probability, coprime, and it is therefore desirable to perturb each polynomial slightly such that the perturbed polynomials have a non-constant GCD. This GCD is known as an approximate greatest common divisor (AGCD) of the given inexact polynomials because the inexact polynomials are near polynomials that have a non-constant GCD. It is therefore desirable to compute the smallest perturbations such that the perturbed forms of the inexact polynomials have a non-constant GCD.
(5) The amplitude of the noise may or may not be known in practical examples, and even if it is known, it may only be known approximately as opposed to exactly. It is desirable that a polynomial root finder does not require an estimate of the noise level, which reflects the reality of receiving a transmitted signal over a noisy channel, and that all parameters and thresholds be calculated from the noisy received signal or noisy/inexact data, that is, the polynomials' coefficients.
Referring to figure 4, there is shown a flowchart 400 of the basic processing steps of signal processing according to embodiments of the present invention.
At step 402, the data representing the polynomial form of the signal to be processed is received. Alternatively, step 402 might comprise the step of receiving or generating data representing the signal to be processed and fitting a polynomial to that data using, for example, curve-fitting or spline-fitting.
The coefficients of the received polynomial, and its derivative, are pre-processed at step 404. The preprocessing is directed to normalising the polynomial coefficients and to reducing, preferably minimising, the difference in magnitude of the maximum and minimum polynomial coefficients.
Step 406 determines the degree of the approximate greatest common divisor of the two polynomials.
Step 408 determines the approximate greatest common divisor of the two polynomials. It can be appreciated that embodiments of the present invention can use two methods of determining the approximate greatest common divisor; namely, computing the AGCD using the Sylvester matrix and computing the AGCD using approximate polynomial factorisation.
Polynomial division is performed at step 410 and the roots are refined at step 412.
Section 3
One skilled in the art will appreciate that the above described polynomial root finder requires that the greatest common divisor (GCD) of a pair of polynomials be computed several times. This section of the patent specification will describe the application of the Sylvester resultant matrix and its subresultant matrices to the foregoing. It will be appreciated that the foregoing GCD computations used a polynomial and its derivative. However, generality will be preserved below in section 3.1, and the generality will then be specifically applied to a polynomial and its derivative in section 3.2.
One skilled in the art appreciates that two polynomials are coprime if and only if the determinant of their resultant matrix is non-zero, and if they are not coprime, the degree and coefficients of their GCD can be calculated from their resultant matrix. The rank loss of such a matrix is equal to the degree of the GCD of the polynomials, and the coefficients of the GCD are obtained by reducing the matrix to upper triangular form. The Sylvester resultant matrix and its subresultant matrices are considered, which leads to their use in calculating the degree of a common divisor of two polynomials.
The coefficients of polynomials that arise in practical examples are inexact, and it is necessary to consider an approximate greatest common divisor (AGCD) of two inexact polynomials because such polynomials are, with high probability, coprime. The differences between the GCD and the AGCD will also be discussed.
Section 3.1: Subresultant matrices
Consider two polynomials f(y) and g(y),

$$f(y) = \sum_{i=0}^{m} a_i\, y^{m-i} \quad \text{and} \quad g(y) = \sum_{i=0}^{n} b_i\, y^{n-i}, \qquad (3.1)$$

where a_0, b_0 ≠ 0. If the degree of the GCD of f(y) and g(y) is d, there exist quotient polynomials u_k(y) and v_k(y), and a common divisor polynomial d_k(y), such that

$$\frac{f(y)}{u_k(y)} = \frac{g(y)}{v_k(y)} = d_k(y), \qquad \deg v_k < \deg g = n, \quad \deg u_k < \deg f = m, \quad 1 \le k \le d, \qquad (3.2)$$

where

$$u_k(y) = \sum_{i=0}^{m-k} u_{k,i}\, y^{m-k-i}, \qquad v_k(y) = \sum_{i=0}^{n-k} v_{k,i}\, y^{n-k-i},$$

and

$$d_k(y) = \sum_{i=0}^{k} d_{k,i}\, y^{k-i}. \qquad (3.3)$$

It follows from (3.2) that

$$f(y)\, v_k(y) = g(y)\, u_k(y),$$

which can be written in matrix form as

$$S_k(f, g) \begin{bmatrix} \bar{v}_k \\ -\bar{u}_k \end{bmatrix} = 0, \qquad k = 1, \ldots, \min(m, n), \qquad (3.4)$$

where \bar{u}_k and \bar{v}_k are the vectors of the coefficients of u_k(y) and v_k(y), and

$$\bar{u}_k \neq 0, \quad \bar{v}_k \neq 0, \quad \text{for } k = 1, \ldots, d,$$
$$\bar{u}_k = 0, \quad \bar{v}_k = 0, \quad \text{for } k = d+1, \ldots, \min(m, n).$$

The matrix

$$S_k(f, g) = \begin{bmatrix} C_k(f) & D_k(g) \end{bmatrix} \in \mathbb{R}^{(m+n-k+1) \times (m+n-2k+2)} \qquad (3.7)$$

is the k-th subresultant matrix of the Sylvester resultant matrix S(f, g) = S_1(f, g), where C_k = C_k(f) ∈ R^{(m+n-k+1)×(n-k+1)} and D_k = D_k(g) ∈ R^{(m+n-k+1)×(m-k+1)} are Toeplitz (convolution) matrices.

The polynomials f(y) and g(y) possess common divisors of degrees 1, ..., d, because the degree of their GCD is d, but they do not possess a common divisor of degree d+1:

$$\mathrm{rank}\; S_k(f, g) < m + n - 2k + 2, \quad k = 1, \ldots, d, \qquad (3.8)$$
$$\mathrm{rank}\; S_k(f, g) = m + n - 2k + 2, \quad k = d+1, \ldots, \min(m, n).$$
It follows that the use of the subresultant matrices for calculating the degree d of the GCD of f (y) and g(y) reduces to a rank estimation problem. Suitably, one skilled in the art will appreciate the following theorem
Theorem 3.1 : A necessary and sufficient condition for the polynomials f (y) and g(y) , which are defined in (3.1 ), to have a common divisor of degree k≥ 1 is that the rank of Sk (/, g) be less than (m + n - 2k + 2) , or equivalently, the dimension of its null space is greater than or equal to one.
The Sylvester resultant matrix S(f, g) = S_1(f, g) ∈ R^{(m+n)×(m+n)} is given by

$$S(f,g) = \begin{bmatrix}
a_0 &        &        & b_0 &        &        \\
a_1 & \ddots &        & b_1 & \ddots &        \\
\vdots &     & a_0    & \vdots &     & b_0    \\
a_m &        & a_1    & b_n &        & b_1    \\
    & \ddots & \vdots &     & \ddots & \vdots \\
    &        & a_m    &     &        & b_n
\end{bmatrix},$$

where the coefficients a_i of f(y) occupy the first n columns and the coefficients b_i of g(y) occupy the last m columns. The following corollary follows from Theorem 3.1.
Corollary 3.2: The polynomials f(y) and g(y) have a common divisor of degree k ≥ 1 if and only if their Sylvester resultant matrix S(f, g) is singular.

The matrices S_k(f, g), k = 1, ..., min(m, n), are appropriate to two arbitrary polynomials f(y) and g(y). The polynomial root finder above, however, requires that the GCD of a polynomial and its derivative be considered, and it is therefore appropriate to consider the modifications to S_k(f, g) that result from setting g(y) = f^(1)(y), which is considered below in section 3.2.
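By way of a non-limiting numerical sketch (using NumPy; the function name sylvester is illustrative), the rank condition of Corollary 3.2 can be observed directly:

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix S(f, g); f, g are coefficient vectors, highest degree first."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for j in range(n):                   # first n columns: shifted copies of f
        S[j:j + m + 1, j] = f
    for j in range(m):                   # last m columns: shifted copies of g
        S[j:j + n + 1, n + j] = g
    return S

f = np.poly([1.0, 1.0, -2.0])            # f(y) = (y - 1)^2 (y + 2)
g = np.polyder(f)                         # f'(y); GCD(f, f') = y - 1
print(np.linalg.matrix_rank(sylvester(f, g)))   # expect 4: rank loss 1 = deg GCD
```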
Section 3.2
There does not exist a relationship between the quotient polynomials u_k(y) and v_k(y) if the polynomials f(y) and g(y) are arbitrary. The satisfaction of the condition that g(y) = f^(1)(y) implies, however, that the quotient polynomials are not independent, and the equation that defines this dependence is now considered.
It follows from (3.2), with n = m-1, that

$$f(y) = u_k(y)\, d_k(y) \quad \text{and} \quad f^{(1)}(y) = v_k(y)\, d_k(y), \qquad k = 1, \ldots, d,$$

where

$$u_k(y) = \sum_{i=0}^{m-k} u_{k,i}\, y^{m-k-i}, \qquad v_k(y) = \sum_{i=0}^{m-1-k} v_{k,i}\, y^{m-1-k-i},$$

and d_k(y) is defined in (3.3). It therefore follows that

$$f^{(1)}(y) = u_k^{(1)}(y)\, d_k(y) + u_k(y)\, d_k^{(1)}(y) = v_k(y)\, d_k(y), \qquad (3.10)$$

which establishes the connection between u_k(y), v_k(y) and d_k(y). This polynomial equation can be cast in matrix form, in which the vectors of the coefficients of a polynomial of degree k and its derivative are related by a diagonal matrix of order k × (k+1). Specifically, it follows from (3.10) that

$$\bar{f}^{(1)} = G_k(u_k)\, \bar{d}_k + F_k(u_k)\, \bar{d}_k^{(1)}, \qquad (3.11)$$

where \bar{f}^{(1)} is the vector of the coefficients of f^(1)(y), the vectors of the coefficients of u_k(y) and d_k(y) are

$$\bar{u}_k = \begin{bmatrix} u_{k,0} & u_{k,1} & \cdots & u_{k,m-k} \end{bmatrix}^T \quad \text{and} \quad \bar{d}_k = \begin{bmatrix} d_{k,0} & d_{k,1} & \cdots & d_{k,k} \end{bmatrix}^T, \qquad (3.12)$$

and G_k = G_k(u_k), defined in (3.13), and F_k = F_k(u_k), defined in (3.14), are the convolution matrices that perform the multiplications u_k^(1)(y) d_k(y) and u_k(y) d_k^(1)(y) respectively. The vectors of the coefficients of d_k(y) and its derivative d_k^(1)(y) are related by a diagonal matrix R = R_k ∈ R^{k×(k+1)},

$$\bar{d}_k^{(1)} = R\, \bar{d}_k,$$

where the elements of R satisfy

$$R_{ij} = \begin{cases} k + 1 - i & \text{if } i = j, \\ 0 & \text{otherwise}. \end{cases} \qquad (3.15)$$

One skilled in the art will appreciate the following example of the application of the above.

Example 3.1: If m = 6 and k = 3, then

$$d_3(y) = d_{3,0}\, y^3 + d_{3,1}\, y^2 + d_{3,2}\, y + d_{3,3} \quad \text{and} \quad d_3^{(1)}(y) = 3 d_{3,0}\, y^2 + 2 d_{3,1}\, y + d_{3,2}.$$

It follows that

$$\begin{bmatrix} 3 d_{3,0} \\ 2 d_{3,1} \\ d_{3,2} \end{bmatrix} = \begin{bmatrix} 3 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} d_{3,0} \\ d_{3,1} \\ d_{3,2} \\ d_{3,3} \end{bmatrix},$$

which satisfies (3.15).
It follows from (3.11) and (3.15) that

$$\bar{f}^{(1)} = \left( F_k(u_k)\, R + G_k(u_k) \right) \bar{d}_k, \qquad (3.16)$$

and the product v_k(y) d_k(y) in (3.10) can be written as

$$H_k(v_k)\, \bar{d}_k, \qquad (3.17)$$

where H_k(v_k) is the convolution matrix formed from the coefficients of v_k(y). Equations (3.4), (3.16) and (3.17) can be combined into one equation,

$$T_k\left( f, f^{(1)} \right) \begin{bmatrix} \bar{v}_k \\ -\bar{u}_k \end{bmatrix} = 0,$$

where C_k = C_k(f) and D_k = D_k(f^(1)) are defined in (3.7), F_k = F_k(u_k) and G_k = G_k(u_k) are defined in (3.13) and (3.14), and \bar{u}_k and \bar{v}_k are defined in (3.12) and (3.18). The coefficient matrix T_k(f, f^(1)), which is defined in (3.20) and whose blocks are built from C_k, D_k and F_k R + G_k, can be considered to be the Sylvester resultant matrix of f(y) and f^(1)(y). In particular, if f(y) and f^(1)(y) have a non-constant common divisor of degree k, the polynomials u_k(y) and v_k(y) are non-zero, and thus \bar{u}_k ≠ 0 and \bar{v}_k ≠ 0, which implies that

$$\mathrm{rank}\; T_k\left( f, f^{(1)} \right) < 2m - 2k + 1.$$
It has been assumed in Section 3.1 and this section that the polynomials f(y) and g(y), and f(y) and f^(1)(y), respectively, are specified exactly and all computations are performed exactly, such that roundoff errors due to floating point arithmetic and data errors are not present. This situation is not realised in practice, and it is therefore necessary to modify the theory such that it is appropriate for computations performed in a floating point environment on inexact polynomials, that is, polynomials whose coefficients are perturbed by random noise. This situation is considered in the next section.
Section 3.3 Approximate greatest common divisor
Polynomials that arise in practical problems are inexact, and computations are, in the vast majority of applications, performed in a floating point environment. A computationally robust algorithm must be insensitive to these sources of error, and it is therefore necessary to consider modifications to the theory in Sections 3.1 and 3.2 such that reliable computations can be performed on the Sylvester matrix and its subresultant matrices.
Consider the theoretically exact polynomials 𝑓̂(y) and ĝ(y),

$$\hat{f}(y) = \sum_{i=0}^{m} \hat{a}_i\, y^{m-i} \quad \text{and} \quad \hat{g}(y) = \sum_{i=0}^{n} \hat{b}_i\, y^{n-i},$$

and their inexact forms f(y) and g(y), respectively,

$$f(y) = \sum_{i=0}^{m} \left( \hat{a}_i + \delta a_i \right) y^{m-i} \quad \text{and} \quad g(y) = \sum_{i=0}^{n} \left( \hat{b}_i + \delta b_i \right) y^{n-i}.$$

Computations cannot be performed reliably on the Sylvester matrix S(f, g) and its subresultant matrices because, even if 𝑓̂(y) and ĝ(y) have a non-constant GCD, the inexact polynomials f(y) and g(y) are, with high probability, coprime, and thus S(f, g) has full rank. In particular, even if there exist polynomials û_k(y) and v̂_k(y), and a common divisor polynomial d̂_k(y) of degree ≥ 1, such that

$$\hat{f}(y) = \hat{u}_k(y)\, \hat{d}_k(y) \quad \text{and} \quad \hat{g}(y) = \hat{v}_k(y)\, \hat{d}_k(y),$$

it can be assumed that, for random perturbations δa_i and δb_i, there do not exist polynomials u_k(y) and v_k(y) such that

$$f(y) = u_k(y)\, d_k(y) \quad \text{and} \quad g(y) = v_k(y)\, d_k(y),$$

where deg d_k(y) ≥ 1. It therefore follows that a minor change in the coefficients of one or both polynomials causes a discontinuous change in their GCD, which shows that the computation of the GCD is an ill-posed problem.
This property of inexact polynomials leads to the concept of an approximate greatest common divisor of two inexact polynomials.
The approximate greatest common divisor (AGCD) of two coprime polynomials f(y) and g(y) is a polynomial d(y), defined up to an arbitrary scalar multiplier, of maximum degree, that satisfies

$$f(y) \approx u(y)\, d(y) \quad \text{and} \quad g(y) \approx v(y)\, d(y).$$
Several criteria can be used to measure the error in the approximation in the definition of an AGCD, and the AGCD differs according to the criterion chosen. For example, embodiments of the present invention can use at least one, or both, of:

(1) Calculate the minimum distance the coprime polynomials f(y) and g(y) must be moved such that they have an AGCD of specified degree.

(2) Calculate an AGCD of two coprime polynomials f(y) and g(y) such that the perturbations δf(y) and δg(y) satisfy a bound of the form

$$\frac{\left\| \delta f \right\|}{\left\| f \right\|}, \; \frac{\left\| \delta g \right\|}{\left\| g \right\|} \leq \mu,$$

where μ is a constant.
One skilled in the art can appreciate that an AGCD has fundamentally different properties compared to a GCD. In particular, a GCD is unique up to an arbitrary scalar multiplier, but this uniqueness property does not extend to an AGCD because different definitions of an AGCD yield different polynomials, each of which is valid.
Embodiments of the present invention assume that the given signals or polynomials are inexact such that their Sylvester matrix has full rank. It is therefore necessary to perturb this matrix such that its perturbed form is singular, and the criterion for the calculation of the degree of an AGCD must be defined.
Section 4
Embodiments of the present invention for realising step 404 will be described, it having been assumed that the two polynomials or signals in step 402 are f(y) and f^(1)(y), where f(y) is an inexact polynomial representing a signal for which an approximate greatest common divisor of it and its first derivative, f^(1)(y), is to be computed.
Let the signal or polynomial f = f(y) be

$$f(y) = \sum_{i=0}^{m} a_i\, y^{m-i}, \qquad a_0 \neq 0. \qquad (4.1)$$

Therefore,

$$f^{(1)}(y) = \sum_{i=0}^{m-1} b_i\, y^{m-1-i}, \qquad b_i = (m-i)\, a_i. \qquad (4.2)$$
The first pre-processing step or operation follows from the partitioned structure of the Sylvester matrix S(f, f^(1)) ∈ R^{(2m-1)×(2m-1)} of f(y) and f^(1)(y), which is given by

$$S\left( f, f^{(1)} \right) = \begin{bmatrix} C_1(f) & D_1\left( f^{(1)} \right) \end{bmatrix}. \qquad (4.3)$$

It follows specifically from this matrix that the two partitions can be scaled independently of one another; in particular,

$$\deg \mathrm{GCD}\left( f, f^{(1)} \right) = \deg \mathrm{GCD}\left( f, \alpha f^{(1)} \right), \qquad \alpha \neq 0, \qquad (4.4)$$

and, therefore, f^(1)(y) can be scaled by a non-zero constant α, which can be interpreted as the weight of f^(1)(y) relative to the unit weight of f(y), assuming f(y) and f^(1)(y) are normalised. Although embodiments of the present invention can use the 2-norm of the coefficients of a polynomial, preferred embodiments realise normalisation using the geometric mean because it performs better if the coefficients vary over several orders of magnitude. Therefore, the polynomial f(y) is redefined as

$$f(y) := \frac{\sum_{i=0}^{m} a_i\, y^{m-i}}{\left( \prod_{j=0}^{m} \left| a_j \right| \right)^{\frac{1}{m+1}}}, \qquad (4.5)$$

and f^(1)(y) is replaced by the polynomial g(y) given by

$$g(y) := \frac{\sum_{i=0}^{m-1} b_i\, y^{m-1-i}}{\left( \prod_{j=0}^{m-1} \left| b_j \right| \right)^{\frac{1}{m}}}, \qquad (4.6)$$
where g(y) is proportional to, but not equal to, the derivative of f(y). It is also assumed that all of the coefficients of f(y) and g(y) are non-zero, but more generally, their geometric means are taken with respect to their non-zero coefficients. It follows from the above that the coefficients \bar{b}_i of g(y) satisfy

$$\bar{b}_i = \frac{(m-i)\, a_i}{\left( \prod_{j=0}^{m-1} (m-j) \left| a_j \right| \right)^{\frac{1}{m}}}, \qquad i = 0, \ldots, m, \qquad (4.7)$$

where b_m is defined to be zero. Equation (4.7) can be simplified because

$$\left( \prod_{j=0}^{m-1} (m-j) \right)^{\frac{1}{m}} = \left( m! \right)^{\frac{1}{m}},$$

which, on substitution into (4.7), gives

$$\bar{b}_i = \frac{(m-i)\, a_i}{\left( m! \right)^{\frac{1}{m}} \left( \prod_{j=0}^{m-1} \left| a_j \right| \right)^{\frac{1}{m}}}. \qquad (4.8)$$
Now, equation (4.4) states that

$$\deg \mathrm{AGCD}\left( f, g \right) = \deg \mathrm{AGCD}\left( f, \alpha g \right), \qquad (4.9)$$

and the foregoing two conditions are satisfied for all non-zero values of α, but the value of α that achieves optimal computational results, using a defined numerical measure, must be determined when an AGCD of the inexact polynomials f(y) and f^(1)(y) is computed. One skilled in the art will appreciate that including α constitutes a second pre-processing operation. Determining its optimal value will be discussed after the third pre-processing operation below has been discussed.
Computations on polynomials for which the ratio of the maximum coefficient (in magnitude) to the minimum coefficient (in magnitude) is large may be numerically unstable, and therefore embodiments of the present invention seek to at least reduce, and preferably minimise, this ratio. Any such reduction or minimisation is established using the substitution

$$y = \theta\, w \qquad (4.10)$$

into equations (4.5) and (4.6) above, where w is the new independent variable and θ is a parameter whose preferred or optimal value is to be determined. The substitution of (4.10), which defines the third pre-processing operation, transforms the polynomials (4.5) and (4.6) to give

$$\bar{f}(w, \theta) = \sum_{i=0}^{m} \left( \bar{a}_i\, \theta^{m-i} \right) w^{m-i} \qquad (4.11)$$

and

$$\bar{g}(w, \theta) = \sum_{i=0}^{m-1} \left( \bar{b}_i\, \theta^{m-1-i} \right) w^{m-1-i}, \qquad (4.12)$$

respectively. One skilled in the art will appreciate that the transformation (4.10) does not change the multiplicities of the roots of f(y) and g(y).
All operations are performed on the Sylvester matrix S(f̄(w,θ), α ḡ(w,θ)) and its subresultant matrices, but the preferred or optimal values of θ and α must be determined. In particular, a criterion for calculating these optimal values is based on the difficulty of performing computations on polynomials whose coefficients vary widely in magnitude. Therefore, it is preferable to calculate these optimal values such that the ratio of the maximum coefficient (in magnitude) to the minimum coefficient (in magnitude) of f̄(w,θ) and α ḡ(w,θ) is at least reduced, and preferably minimised. It follows from (4.11) and (4.12) that the entries of S(f̄(w,θ), α ḡ(w,θ)) are the coefficients ā_i θ^{m-i} and α b̄_i θ^{m-1-i}, and thus θ and α in (4.4), and (4.11) and (4.12) respectively, are calculated by solving the minimisation problem

$$\left( \theta_0, \alpha_0 \right) = \arg \min_{\theta, \alpha} \; \frac{\max\left\{ \max_{i} \left| \bar{a}_i \right| \theta^{m-i}, \; \max_{j} \alpha \left| \bar{b}_j \right| \theta^{m-1-j} \right\}}{\min\left\{ \min_{i} \left| \bar{a}_i \right| \theta^{m-i}, \; \min_{j} \alpha \left| \bar{b}_j \right| \theta^{m-1-j} \right\}}.$$
It follows that it is necessary to solve the following minimisation problem: minimise t/s, subject to

$$t \geq \left| \bar{a}_i \right| \theta^{m-i}, \quad i = 0, \ldots, m, \qquad t \geq \alpha \left| \bar{b}_j \right| \theta^{m-1-j}, \quad j = 0, \ldots, m-1,$$
$$s \leq \left| \bar{a}_i \right| \theta^{m-i}, \quad i = 0, \ldots, m, \qquad s \leq \alpha \left| \bar{b}_j \right| \theta^{m-1-j}, \quad j = 0, \ldots, m-1.$$

The transformations

$$T = \log t, \quad S = \log s, \quad \phi = \log \theta, \quad \mu = \log \alpha,$$

enable this constrained minimisation problem to be written as: minimise T - S, subject to

$$T - (m-i)\phi \geq \log \left| \bar{a}_i \right|, \quad i = 0, \ldots, m, \qquad T - (m-1-j)\phi - \mu \geq \log \left| \bar{b}_j \right|, \quad j = 0, \ldots, m-1,$$
$$-S + (m-i)\phi \geq -\log \left| \bar{a}_i \right|, \quad i = 0, \ldots, m, \qquad -S + (m-1-j)\phi + \mu \geq -\log \left| \bar{b}_j \right|, \quad j = 0, \ldots, m-1.$$

This minimisation problem can be written as

$$\text{Minimise} \quad \begin{bmatrix} 1 & -1 & 0 & 0 \end{bmatrix} \begin{bmatrix} T \\ S \\ \phi \\ \mu \end{bmatrix} \quad \text{subject to} \quad A \begin{bmatrix} T \\ S \\ \phi \\ \mu \end{bmatrix} \geq b, \qquad (4.13)$$

where A ∈ R^{(4m+2)×4} and b ∈ R^{4m+2} store the coefficients of the constraints and the logarithms of the magnitudes of the coefficients, respectively. If α_0 and θ_0 are solutions of the linear programming problem (4.13), then the polynomials (4.11) and (4.12) become

$$\bar{f}(w) = \sum_{i=0}^{m} \bar{a}_i\, \theta_0^{m-i}\, w^{m-i} \quad \text{and} \quad \alpha_0\, \bar{g}(w) = \alpha_0 \sum_{i=0}^{m-1} \bar{b}_i\, \theta_0^{m-1-i}\, w^{m-1-i}, \qquad (4.14)$$

whose coefficients are

$$\tilde{a}_i = \bar{a}_i\, \theta_0^{m-i} \quad \text{and} \quad \tilde{b}_i = \bar{b}_i\, \theta_0^{m-1-i}, \qquad (4.15)$$

and all computations are performed on S(f̄(w), α_0 ḡ(w)) and its subresultant matrices because they have been processed to improve the computational results. It follows from (4.8) and (4.14) that ḡ(w) is proportional to f^(1)(w), where f^(1)(w) = f^(1)(y = θ_0 w), which establishes the relationship between f^(1)(w) and ḡ(w).
One skilled in the art appreciates that the three pre-processing operations should be performed before computations are performed on the Sylvester matrix and subresultant matrices. These operations are motivated by the structure of the Sylvester matrix and its subresultant matrices, and by numerical considerations, because their aim is the improvement of the numerical stability of the computations. They involve the normalisation of the coefficients of f(y) and f^(1)(y) by the geometric means of their coefficients, and the solution of a linear programming problem for calculating α_0 and θ_0.
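A minimal sketch of the three pre-processing operations is given below, assuming SciPy's linprog as the solver of the linear programming problem (4.13); the variable ordering [T, S, φ, μ] and the function name preprocess are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

def preprocess(coeffs):
    """Normalise f and f' by geometric means and solve (4.13) for alpha0, theta0."""
    a = np.asarray(coeffs, dtype=float)          # coefficients of f(y), highest first
    m = len(a) - 1
    b = np.polyder(a)                            # coefficients of f'(y)
    a = a / np.exp(np.mean(np.log(np.abs(a))))   # geometric-mean normalisation (4.5)
    b = b / np.exp(np.mean(np.log(np.abs(b))))   # geometric-mean normalisation (4.6)
    la, lb = np.log(np.abs(a)), np.log(np.abs(b))
    A, rhs = [], []
    for i, v in enumerate(la):        # t >= |a_i| theta^(m-i)
        A.append([-1, 0, m - i, 0]); rhs.append(-v)
    for j, v in enumerate(lb):        # t >= alpha |b_j| theta^(m-1-j)
        A.append([-1, 0, m - 1 - j, 1]); rhs.append(-v)
    for i, v in enumerate(la):        # s <= |a_i| theta^(m-i)
        A.append([0, 1, -(m - i), 0]); rhs.append(v)
    for j, v in enumerate(lb):        # s <= alpha |b_j| theta^(m-1-j)
        A.append([0, 1, -(m - 1 - j), -1]); rhs.append(v)
    res = linprog([1, -1, 0, 0], A_ub=A, b_ub=rhs, bounds=[(None, None)] * 4)
    theta0, alpha0 = np.exp(res.x[2]), np.exp(res.x[3])
    return a, b, alpha0, theta0
```

The returned θ_0 rescales the independent variable via y = θ_0 w, and α_0 weights ḡ(w) relative to f̄(w), as in (4.14).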
Section 5
Next will be described the computation of the degree of the approximate greatest common divisor. The degree of the greatest common divisor of two exact polynomials is equal to the rank loss of their Sylvester resultant matrix. One skilled in the art appreciates that this presents a significant problem when inexact polynomials are considered. Specifically, the theoretically exact form of a matrix may be singular, but the addition of random noise causes the matrix to become non-singular. The singular value decomposition (SVD) is usually used to calculate the rank of a matrix, but numerous computational experiments have shown that poor results are obtained because the SVD does not yield a large gap in the singular values of the Sylvester matrix of two inexact polynomials, and thus the incorrect value of the numerical rank is computed.
A random perturbation in one or both polynomials that have a non-constant GCD causes them to become coprime, and thus these inexact polynomials have an approximate greatest common divisor (AGCD). It follows that their Sylvester matrix has full rank, and the calculation of the degree of an AGCD of these inexact polynomials requires that they be perturbed slightly, so that the Sylvester matrix of the perturbed polynomials is singular.
There are therefore several issues that must be addressed:

■ calculating the degree of an AGCD;
■ calculating the coefficients of the AGCD;
■ calculating the perturbed forms of the given inexact polynomials, where these perturbed polynomials have a non-constant GCD.
Embodiments of the present invention use several methods to calculate the degree of the AGCD. The calculation of the approximate quotient polynomials that are associated with an approximate greatest common divisor requires that the homogeneous equation

$$S_k\left( \bar{f}, \alpha_0 \bar{g} \right) \begin{bmatrix} \bar{v}_k \\ -\bar{u}_k \end{bmatrix} \approx 0, \qquad k = 1, \ldots, m-1,$$

be transformed to an approximate linear algebraic equation.
All computations are performed on the preprocessed polynomials (4.14), and as noted above, their Sylvester matrix and subresultant matrices S_k(f̄, α_0 ḡ), k = 1, ..., m-1, are non-singular. These matrices are used to compute the approximate greatest common divisor of the inexact polynomials (4.14) by computing a singular Sylvester matrix that is near the given non-singular matrix S(f̄, α_0 ḡ). The computation of this singular matrix requires that the homogeneous equation (3.4) be reconsidered when inexact polynomials are specified. More particularly, the coefficient matrix in this equation is singular for k = 1, ..., d, as can be appreciated from (3.4), when the polynomials are exact, that is, free from errors or noise. However, it is non-singular for all values of k = 1, ..., m-1 when inexact polynomials are considered. Computing the nearest singular Sylvester matrix to S(f̄, α_0 ḡ) requires that the homogeneous equation (3.4) be transformed to an approximate linear algebraic equation by moving one of its columns to the right-hand side. Embodiments of the present invention will now consider criteria for selecting the column to be moved.
The degree of an approximate greatest common divisor of f(y) and g(y), or equivalently of f̄(w) and ḡ(w), is calculated from S(f̄, α_0 ḡ) = S_1(f̄, α_0 ḡ) and its subresultant matrices S_k(f̄, α_0 ḡ), k = 2, ..., m-1. All of these matrices have full column rank, and therefore there does not exist a column in S_k(f̄, α_0 ḡ) that lies in the column space spanned by the other columns of S_k(f̄, α_0 ḡ). Computing a structured low rank approximation of S(f̄, α_0 ḡ) for computing an approximate greatest common divisor of f̄(w) and ḡ(w) requires that the approximation

$$A_{k,i}\, x_{k,i} \approx c_{k,i}, \qquad k = 1, \ldots, m-1; \quad i = 1, \ldots, 2m - 2k + 1, \qquad (5.1)$$

be considered, where c_{k,i} is the i-th column of S_k(f̄, α_0 ḡ) and A_{k,i} is the matrix formed from the remaining columns of S_k(f̄, α_0 ḡ),

$$A_{k,i} = \begin{bmatrix} c_{k,1} & \cdots & c_{k,i-1} & c_{k,i+1} & \cdots & c_{k,2m-2k+1} \end{bmatrix}.$$

It is noted that c_{k,i} = c_{k,i}(f̄) or c_{k,i} = c_{k,i}(α_0 ḡ), depending on the value of i, and that A_{k,i} = A_{k,i}(f̄, α_0 ḡ). The indices k and i are calculated such that the error in (5.1) is a minimum. There are at least two issues, therefore, that must be addressed, which are:

- calculating the degree k = d of an AGCD of f̄(w) and ḡ(w), and
- calculating the index i = q of the column of S_d(f̄, α_0 ḡ) that defines the column c_{d,i} in (5.1).

These issues can be considered simultaneously because the calculation of the index i = q follows immediately from calculating the degree k = d of an AGCD of f̄(w) and ḡ(w). Two methods will be presented for calculating d and q. The two methods may yield different results, but they are both acceptable provided they satisfy specified properties such as, for example, an error test yielding a sufficiently small value. This property of the computed values of d and q is consistent with the non-uniqueness of an AGCD, and therefore all of the computed answers may be acceptable. The optimal value(s) of q computed by the two methods are then used in other methods to calculate the optimal value of d.
One skilled in the art will now be presented with embodiments of the present invention that determine the degree of the AGCD using the two methods; namely, one method based on the residual r_{k,i} of the approximate linear algebraic equation (5.1), and one method based on the first principal angle ψ_{k,i} between the space spanned by c_{k,i} and the space spanned by the columns of A_{k,i}, to calculate the indices k and i in (5.1) such that the error in this approximation is a minimum. It is noted that if exact polynomials are specified and the degree of their GCD is d, there always exists a column of S_k(f̄, α_0 ḡ) such that r_{k,i} and ψ_{k,i} are zero for k = 1, ..., d. It is assumed, however, that inexact polynomials are specified, and thus r_{k,i} and ψ_{k,i} are non-zero for all values of k and i. It therefore follows that refined methods are required to determine the optimal values of k and i, which will be discussed next as Method 1 and Method 2.
Method 1 : The method of residuals
Let z_{k,i} be the least squares approximate solution of (5.1) and let r_{k,i} = r_{k,i}(A_{k,i}, c_{k,i}) be the residual associated with this approximate solution,

$$r_{k,i} = c_{k,i} - A_{k,i}\, z_{k,i},$$

for k = 1, ..., m-1 and i = 1, ..., 2m-2k+1, where

$$z_{k,i} = A_{k,i}^{\dagger}\, c_{k,i} = \left( A_{k,i}^T A_{k,i} \right)^{-1} A_{k,i}^T\, c_{k,i}.$$

It follows that ‖r_{k,i}‖ is equal to the perpendicular distance from the point with position vector c_{k,i} to the point with position vector A_{k,i} z_{k,i} on the plane t = A_{k,i} x_{k,i} that defines the column space of A_{k,i}.

The minimum value of ‖r_{k,i}‖ with respect to i = 1, ..., 2m-2k+1, for each value of k = 1, ..., m-1, is calculated,

$$r_k = \min_i \left\{ \left\| r_{k,i} \right\| : i = 1, \ldots, 2m-2k+1 \right\}, \qquad k = 1, \ldots, m-1, \qquad (5.2)$$

and the column i = q_k^r for which each of the m-1 minima occurs is recorded, thereby yielding the vector

$$q^r = \begin{bmatrix} q_1^r & q_2^r & \cdots & q_{m-1}^r \end{bmatrix} \in \mathbb{R}^{m-1}, \qquad (5.3)$$

where the superscript r denotes that these column indices are obtained using a criterion based on the residual. The degree d^r of an AGCD is equal to the index k for which the change in r_k between two successive values of k is a maximum,

$$d^r = \left\{ k : \log r_{k+1} - \log r_k \to \max, \; k = 1, \ldots, m-2 \right\}, \qquad (5.4)$$

and thus the indices k = d^r and i = q^r_{d^r} define the optimal column c_{k,i} in (5.1).
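A non-limiting sketch of Method 1 is given below, assuming that the subresultant matrices S_k(f̄, α_0 ḡ), k = 1, ..., m-1, have already been formed; the function name degree_by_residuals is illustrative.

```python
import numpy as np

def degree_by_residuals(Sk_list):
    """Sk_list[k-1] holds S_k(f, alpha0*g); returns the estimate d^r of (5.4)."""
    r = []
    for Sk in Sk_list:
        best = np.inf
        for i in range(Sk.shape[1]):
            c = Sk[:, i]                        # candidate column c_{k,i}
            A = np.delete(Sk, i, axis=1)        # remaining columns A_{k,i}
            x = np.linalg.lstsq(A, c, rcond=None)[0]
            best = min(best, np.linalg.norm(A @ x - c))
        r.append(best)                           # r_k of (5.2)
    jumps = np.diff(np.log(r))                   # log r_{k+1} - log r_k
    return int(np.argmax(jumps)) + 1             # k at which the jump is largest
```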
Method 2 : The method of first principal angle
Embodiments of the present invention that use the second method are stated in terms of the first principal angle ψ_{k,i}, that is, the smallest angle between the space spanned by c_{k,i} and the space spanned by the columns of A_{k,i},

$$\psi_{k,i}, \qquad k = 1, \ldots, m-1; \quad i = 1, \ldots, 2m-2k+1. \qquad (5.5)$$

The calculation of an estimate of the degree of an AGCD using the criterion of the first principal angle follows closely the calculation of d^r, which is defined in (5.4). Therefore, the minimum value φ_k of ψ_{k,i} for each value of k is calculated,

$$\phi_k = \min_i \left\{ \psi_{k,i} : i = 1, \ldots, 2m-2k+1 \right\}, \qquad k = 1, \ldots, m-1, \qquad (5.6)$$

and the column i = q_k^φ for which each of the m-1 minima occurs is recorded, thereby yielding the vector

$$q^{\phi} = \begin{bmatrix} q_1^{\phi} & q_2^{\phi} & \cdots & q_{m-1}^{\phi} \end{bmatrix} \in \mathbb{R}^{m-1}, \qquad (5.7)$$

where the superscript φ denotes that these column indices are obtained using a criterion based on the first principal angle. The degree d^φ of an AGCD is equal to the index k for which the change in φ_k between two successive values of k is a maximum,

$$d^{\phi} = \left\{ k : \log \phi_{k+1} - \log \phi_k \to \max, \; k = 1, \ldots, m-2 \right\}, \qquad (5.8)$$

and the column index i = q^φ_{d^φ} is calculated from (5.7). The indices k = d^φ and i = q^φ_{d^φ} define the column c_{k,i} in (5.1) for which the error in this approximation is a minimum, using this criterion based on the first principal angle.
Equation (5.8) defines the criterion for the calculation of d^φ, but an expression for ψ_{k,i}, defined in (5.5), must be obtained. This requires that an orthonormal basis for the column space of A_{k,i} be calculated, and this is obtained by applying the QR decomposition to A_{k,i},

$$A_{k,i} = N_{k,i}\, R_{k,i}, \qquad N_{k,i} \in \mathbb{R}^{(2m-k) \times (2m-2k)}, \quad R_{k,i} \in \mathbb{R}^{(2m-2k) \times (2m-2k)},$$

where R_{k,i} is an upper triangular matrix and the columns of N_{k,i} define an orthonormal basis for the column space of A_{k,i}. Every vector v_{k,i} in this space can therefore be written as

$$v_{k,i} = N_{k,i}\, w_{k,i}, \qquad w_{k,i} \in \mathbb{R}^{2m-2k}.$$

The first principal angle ψ_{k,i} between span{c_{k,i}} and the column space of A_{k,i} is equal to the smallest angle between the unit vector

$$u_{k,i} = \frac{c_{k,i}}{\left\| c_{k,i} \right\|}$$

and v_{k,i}, and therefore

$$\cos \psi_{k,i} = \max_{\left\| w_{k,i} \right\| = 1} \; u_{k,i}^T\, N_{k,i}\, w_{k,i}. \qquad (5.9)$$

If the SVD of u_{k,i}^T N_{k,i} is

$$u_{k,i}^T N_{k,i} = \Sigma_{k,i}\, Q_{k,i}^T,$$

where Q_{k,i} is an orthogonal matrix of order 2m-2k and Σ_{k,i} = [σ_{k,i,1} 0 ⋯ 0] ∈ R^{1×(2m-2k)}, then (5.9) yields

$$\cos \psi_{k,i} = \max_{\left\| w_{k,i} \right\| = 1} \; \Sigma_{k,i}\, Q_{k,i}^T\, w_{k,i} = \sigma_{k,i,1}, \qquad (5.10)$$

which implies that cos ψ_{k,i} is equal to the non-zero singular value of u_{k,i}^T N_{k,i}. This maximum is attained when w_{k,i} is equal to the first column q_{k,i,1} of Q_{k,i}, v_{k,i} = N_{k,i} q_{k,i,1}, and thus, from (5.10),

$$\psi_{k,i} = \cos^{-1} \sigma_{k,i,1}.$$

One skilled in the art will appreciate that computational problems arise when ψ_{k,i} ≈ 0, because it follows from this equation that, to first order, a small error in σ_{k,i,1} near unity causes a large error in the computed angle. This problem is solved by considering the orthogonal complement of the column space of A_{k,i}, whose dimensions are 2m-2k and k respectively. Considering these two spaces leads to an expression for the first principal angle between span{c_{k,i}} and the column space of A_{k,i} that is numerically stable. This expression allows the angles φ_k, k = 1, ..., m-1, in (5.6) to be calculated.
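A corresponding non-limiting sketch of Method 2 is given below; it assumes SciPy's subspace_angles, which returns the principal angles between two column spaces, and for a single vector the first (smallest) principal angle ψ_{k,i} is the only one returned.

```python
import numpy as np
from scipy.linalg import subspace_angles

def min_first_principal_angle(Sk):
    """phi_k of (5.6): smallest first principal angle over the columns of S_k."""
    best = np.inf
    for i in range(Sk.shape[1]):
        c = Sk[:, [i]]                     # candidate column c_{k,i} (2-D)
        A = np.delete(Sk, i, axis=1)       # remaining columns A_{k,i}
        best = min(best, subspace_angles(A, c)[0])
    return best
```

The degree estimate d^φ then follows by applying (5.8) to the sequence φ_1, ..., φ_{m-1}, exactly as for the residuals of Method 1.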
Algorithm 5.1, given below, shows the implementation of Methods 1 and 2 for calculating (d^r, q^r_{d^r}) and (d^φ, q^φ_{d^φ}).

Algorithm 5.1 : The calculation of the degree of an AGCD of a polynomial and its derivative

Input : An inexact or noisy polynomial f(y)
Output : Two estimates d^r and d^φ of the degree of an AGCD of f(y) and g(y), and the indices q^r_{d^r} and q^φ_{d^φ} of the columns of the subresultant matrix of f̄(w) and α_0 ḡ(w), such that the error in (5.1) is minimised.

Begin
1. Preprocess f(y) and g(y) to yield the polynomials f̄(w) and ḡ(w), as disclosed above, where f̄(w) and ḡ(w) are defined in (4.14).
2. For k = 1, ..., m-1 % loop for the subresultant matrices
   (i) Form the matrix S_k(f̄, α_0 ḡ).
   (ii) For i = 1, ..., 2m-2k+1 % loop for the columns
        (a) Define the column c_{k,i}
        (b) Define the matrix A_{k,i}
        (c) Calculate the angle ψ_{k,i} and the residual r_{k,i}
   End i
   (iii) Calculate r_k and q_k^r from (5.2) and (5.3) respectively, and φ_k and q_k^φ from (5.6) and (5.7) respectively.
End k
3. Calculate the estimates d^r and d^φ of the degree of an AGCD from (5.4) and (5.8), and the column indices q^r_{d^r} and q^φ_{d^φ}.
End
Embodiments of the present invention can use a third method to determine the degree of the AGCD of a polynomial and its derivative. The above two methods are generally applicable to two arbitrary polynomials. In contrast, the embodiments according to the third method are only applicable to a polynomial and its derivative, in the presence of a constraint between the two. Further details of this embodiment, which allows the degree d of an AGCD of f̄(w) and ḡ(w) to be calculated, are given below.

The third method computes the degree of an approximate greatest common divisor of a polynomial and its derivative. The third method differs from the two above methods, which can be modified to apply to two arbitrary polynomials, in that it applies only to a polynomial and its derivative.
It will be recalled that the constraint imposed by the condition g(y) = f^(1)(y) was considered above, in section 3.2, and the modification to it required after the pre-processing operations are implemented is now considered. In particular, it was shown above, in section 4, that all computations are performed on the polynomials (4.14), repeated here for convenience,

$$\bar{f}(w) = \sum_{i=0}^{m} \tilde{a}_i\, w^{m-i} \quad \text{and} \quad \alpha_0\, \bar{g}(w) = \alpha_0 \sum_{i=0}^{m-1} \tilde{b}_i\, w^{m-1-i}, \qquad (5.11)$$

whose coefficients are defined in (4.15), which are also repeated, for convenience,

$$\tilde{a}_i = \bar{a}_i\, \theta_0^{m-i} \quad \text{and} \quad \tilde{b}_i = \bar{b}_i\, \theta_0^{m-1-i}. \qquad (5.12)$$

An approximate common divisor d_k(w) of f̄(w) and ḡ(w), of degree k, satisfies

$$\bar{f}(w) \approx \bar{u}_k(w)\, d_k(w) \quad \text{and} \quad \bar{g}(w) \approx \bar{v}_k(w)\, d_k(w), \qquad k = 1, \ldots, m-1, \qquad (5.13)$$

and if the approximate quotient polynomials associated with an AGCD of f(y) and f^(1)(y) are

$$u_k(y) = \sum_{i=0}^{m-k} u_{k,i}\, y^{m-k-i} \quad \text{and} \quad v_k(y) = \sum_{i=0}^{m-1-k} v_{k,i}\, y^{m-1-k-i},$$

then it follows from (4.10), with θ = θ_0, that the transformed quotient polynomials are

$$\bar{u}_k(w) = \sum_{i=0}^{m-k} \left( u_{k,i}\, \theta_0^{m-k-i} \right) w^{m-k-i} = \sum_{i=0}^{m-k} \tilde{u}_{k,i}\, w^{m-k-i} \qquad (5.14)$$

and

$$\bar{v}_k(w) = \sum_{i=0}^{m-1-k} \left( v_{k,i}\, \theta_0^{m-1-k-i} \right) w^{m-1-k-i} = \sum_{i=0}^{m-1-k} \tilde{v}_{k,i}\, w^{m-1-k-i}, \qquad (5.15)$$

where

$$\tilde{u}_{k,i} = u_{k,i}\, \theta_0^{m-k-i} \quad \text{and} \quad \tilde{v}_{k,i} = v_{k,i}\, \theta_0^{m-1-k-i}.$$
A subresultant matrix equation for f̄(w) and ḡ(w) is

$$S_k\left( \alpha_0, \theta_0 \right) \begin{bmatrix} \tilde{v}_k \\ -\tilde{u}_k \end{bmatrix} \approx 0, \qquad (5.16)$$

and thus the condition that ḡ(w) is proportional to the derivative of f̄(w) imposes, exactly as in section 3.2, a constraint between the approximate quotient polynomials and the approximate common divisor polynomial. In particular, the counterpart of the second equation in (3.10) is

$$\bar{f}^{(1)}(w) \approx \bar{u}_k^{(1)}(w)\, d_k(w) + \bar{u}_k(w)\, d_k^{(1)}(w) \approx \bar{v}_k(w)\, d_k(w), \qquad (5.17)$$

where d_k(w) is defined in (5.13), and u_k(y) and v_k(y) are replaced by ū_k(w) and v̄_k(w). In particular, the substitution of (4.10), with θ = θ_0, yields, using (5.14) and (5.17), the result that the vectors of the coefficients of d_k(w) and d_k^(1)(w) are related by a diagonal matrix R ∈ R^{k×(k+1)}, which is the analogue, after the pre-processing operations, of the matrix in (3.15), and which is defined in (5.19).

Example 5.1: If m = 6 and k = 3, then

$$d_3(w) = \tilde{d}_{3,0}\, w^3 + \tilde{d}_{3,1}\, w^2 + \tilde{d}_{3,2}\, w + \tilde{d}_{3,3},$$

and it follows that

$$d_3^{(1)}(w) = 3\, \tilde{d}_{3,0}\, w^2 + 2\, \tilde{d}_{3,1}\, w + \tilde{d}_{3,2},$$

and thus, from (5.14) and the definition of R,

$$\begin{bmatrix} 3\, \tilde{d}_{3,0} \\ 2\, \tilde{d}_{3,1} \\ \tilde{d}_{3,2} \end{bmatrix} = \begin{bmatrix} 3 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \tilde{d}_{3,0} \\ \tilde{d}_{3,1} \\ \tilde{d}_{3,2} \\ \tilde{d}_{3,3} \end{bmatrix}.$$
The combination of the subresultant equation for f̄(w) and α_0 ḡ(w),

$$S_k\left( \alpha_0, \theta_0 \right) \begin{bmatrix} \tilde{v}_k \\ -\tilde{u}_k \end{bmatrix} \approx 0, \qquad k = 1, \ldots, m-1, \qquad (5.26)$$

with the matrix form (5.27) of the constraint between the quotient polynomials, which follows from (5.17), yields the approximate homogeneous equation (5.28), where the coefficient matrix of this approximation is of order (3m-k) × (2m-2k+1).

The coefficient matrix of the approximate homogeneous equation (5.26) is used to compute the degree d of an AGCD of f̄(w) and ḡ(w). As shown in Section 5.1, it is necessary to perform a search for the column of S_k(α_0, θ_0), k = 1, ..., m-1, that is closest to the space spanned by the other columns of S_k(α_0, θ_0). Thus, for each value of k = 1, ..., m-1, each of the 2m-2k+1 columns is examined to determine its proximity to the space spanned by the other columns of S_k(α_0, θ_0), and the column that is nearest to this space is the best column to move to the right hand side, thereby transforming this homogeneous equation into an approximate linear algebraic equation.

The removal of the j-th column from S_k(α_0, θ_0) is achieved by post-multiplying it by the matrix M_j ∈ R^{(2m-2k+1)×(2m-2k)}, which is equal to the identity matrix after the removal of its j-th column,

$$M_j = \begin{bmatrix} e_1 & \cdots & e_{j-1} & e_{j+1} & \cdots & e_{2m-2k+1} \end{bmatrix},$$

where e_i ∈ R^{2m-2k+1} is the i-th unit basis vector. Similarly, the j-th column of S_k(α_0, θ_0) is equal to S_k(α_0, θ_0) e_j.
Example 5.2: If

$$A = \begin{bmatrix} a & b & c & d \\ e & f & g & h \end{bmatrix},$$

then

$$A M_1 = \begin{bmatrix} b & c & d \\ f & g & h \end{bmatrix}, \quad A M_2 = \begin{bmatrix} a & c & d \\ e & g & h \end{bmatrix}, \quad A M_3 = \begin{bmatrix} a & b & d \\ e & f & h \end{bmatrix}, \quad A M_4 = \begin{bmatrix} a & b & c \\ e & f & g \end{bmatrix},$$

that is, post-multiplication by M_j deletes the j-th column of A.
The removal of the j-th column of S_k(α_0, θ_0) to the right hand side therefore yields the approximate linear algebraic equation

$$\left( S_k\left( \alpha_0, \theta_0 \right) M_j \right) x \approx S_k\left( \alpha_0, \theta_0 \right) e_j, \qquad (5.29)$$

where x stores the coefficients of the quotient polynomials ū_k(w) and v̄_k(w). The solution of (5.29) allows the approximate polynomial decompositions (5.13) to be written, using (5.11), (5.14) and (5.15), as

$$\begin{bmatrix} C_{k,1}\left( \tilde{u}_k \right) \\ C_{k,2}\left( \tilde{v}_k, \alpha_0 \right) \end{bmatrix} \tilde{d}_k \approx \begin{bmatrix} \tilde{a} \\ \alpha_0\, \tilde{b} \end{bmatrix}, \qquad (5.31)$$

where d̃_k stores the coefficients of the common divisor d_k(w), and the coefficient matrix in (5.31), which is of order (2m+1) × (k+1), is formed from the convolution matrices C_{k,1}(ũ_k) and C_{k,2}(ṽ_k, α_0) of the coefficients of the quotient polynomials. The least squares solution of (5.31) is computed,

$$\tilde{d}_k = \begin{bmatrix} C_{k,1} \\ C_{k,2} \end{bmatrix}^{\dagger} \begin{bmatrix} \tilde{a} \\ \alpha_0\, \tilde{b} \end{bmatrix}, \qquad (5.35)$$

and this enables the coefficients d̃_{k,i}, i = 0, ..., k, of a common divisor d_k(w) to be calculated. The variation with k of the normalised residual

$$\rho_k = \frac{\left\| \begin{bmatrix} \tilde{a} \\ \alpha_0 \tilde{b} \end{bmatrix} - \begin{bmatrix} C_{k,1} \\ C_{k,2} \end{bmatrix} \tilde{d}_k \right\|}{\left\| \begin{bmatrix} \tilde{a} \\ \alpha_0 \tilde{b} \end{bmatrix} \right\|} \qquad (5.36)$$

enables the degree d of an AGCD of f̄(w) and ḡ(w) to be determined.

This analysis shows that initial estimates of a common divisor polynomial of degree k, and the associated quotient polynomials, can be calculated from the inexact polynomials f̄(w) and α_0 ḡ(w), and they can be used to calculate d, which is the maximum such value of k. These initial estimates require that the optimal column of S_k(α_0, θ_0), for each value of k = 1, ..., m-1, be computed, and the methods in sections 5.2.1 and 5.2.2 can be used for this calculation. It therefore follows that two graphs of the variation of ρ_k with k are obtained, and they can be used to calculate d.

An error measure derived from (5.28) can also be used to calculate the degree of an AGCD of f̄(w) and ḡ(w). In particular, the methods in sections 5.2.1 and 5.2.2 enable, for each value of k = 1, ..., m-1, the optimal column of S_k(α_0, θ_0) to be calculated, and the least squares solution of (5.29) allows the vectors ũ_k(θ_0) and ṽ_k(θ_0) to be computed for each k = 1, ..., m-1. The error measure (5.37), which is the normalised residual of (5.28), is calculated, and the index k for which this function achieves its minimum value is equal to d. As noted above, this criterion yields two estimates of d because the optimal column can be determined by the methods in sections 5.2.1 and 5.2.2, and they may yield different estimates of the optimal column, and therefore different estimates of ũ_k(θ_0) and ṽ_k(θ_0).

It has been shown that (5.36) and (5.37) can be used to calculate the degree d of an AGCD of f̄(w) and ḡ(w), and that each of these equations yields two estimates of d. The next section considers two more methods that can be used to calculate d.
Still further embodiments of the present invention for calculating the degree of an AGCD of f̄(w) and ḡ(w) are now given in section 5.2.4 below.
5.2.4 The condition number and smallest singular value of S_k(α_0, θ_0)M_j

These methods for the estimation of the degree of an AGCD of f̄(w) and ḡ(w) consider the variations of the condition number and smallest singular value of S_k(α_0, θ_0)M_j with k. They require two properties of matrices and one property of the common divisors of the theoretically exact forms of f̄(w) and ḡ(w).
Property 1: If P ∈ R^{m×n} is an arbitrary matrix of rank r, and p_i is the i-th column of P, where p_i is linearly dependent on the other columns of P,

$$P = \begin{bmatrix} p_1 & \cdots & p_{i-1} & p_i & p_{i+1} & \cdots & p_n \end{bmatrix},$$

then

$$\mathrm{rank} \begin{bmatrix} p_1 & \cdots & p_{i-1} & p_{i+1} & \cdots & p_n \end{bmatrix} = r,$$

that is, the removal of a linearly dependent column does not change the rank of the matrix.
Property 2: The linear algebraic equation Ax = b has either no solution, exactly one solution, or an infinite number of solutions:

- If b does not lie in the column space of A, the equation Ax = b does not possess a solution.
- If b lies in the column space of A, and A has full column rank, then there is exactly one solution.
- If b lies in the column space of A, and A does not have full column rank, then there is an infinite number of solutions.

Property 3: If 𝑓̂(w) and ĝ(w), the theoretically exact forms of f̄(w) and ḡ(w) respectively, have a GCD d̂(w) of degree d, then they have a finite number c > 1 of common divisors of degrees 1, 2, ..., d-1. For example, if

$$\hat{d}(w) = (w-1)(w-2)(w-3)(w-4),$$

then:

- The polynomials 𝑓̂(w) and ĝ(w) have four common linear divisors,
  (w-1), (w-2), (w-3), (w-4).
- The polynomials 𝑓̂(w) and ĝ(w) have six common quadratic divisors,
  (w-1)(w-2), (w-1)(w-3), (w-1)(w-4), (w-2)(w-3), (w-2)(w-4), (w-3)(w-4).
- The polynomials 𝑓̂(w) and ĝ(w) have four common cubic divisors,
  (w-1)(w-2)(w-3), (w-1)(w-2)(w-4), (w-1)(w-3)(w-4), (w-2)(w-3)(w-4).
- The polynomials 𝑓̂(w) and ĝ(w) have one common quartic divisor, which is equal to d̂(w), their GCD.
Since f̄(w) and ḡ(w) are polynomials of degrees m and m-1 respectively, and the degree of the GCD of their exact forms is d, it follows that

$$\mathrm{rank}\; S_k\left( \hat{f}, \hat{g} \right) < 2m - 2k + 1, \quad k = 1, \ldots, d,$$
$$\mathrm{rank}\; S_k\left( \hat{f}, \hat{g} \right) = 2m - 2k + 1, \quad k = d+1, \ldots, m-1.$$

If the j-th column of the k-th subresultant matrix S_k(α_0, θ_0) is linearly dependent on the other columns of S_k(α_0, θ_0), then the equation

$$\left( S_k\left( \alpha_0, \theta_0 \right) M_j \right) x = S_k\left( \alpha_0, \theta_0 \right) e_j \qquad (5.38)$$

has an exact solution x that stores the coefficients of the quotient polynomials ū_k(w) and v̄_k(w). It follows from Property 2 that (5.38) either has one solution, or it has an infinite number of solutions, and Property 3 shows that the unique solution occurs for k = d. Furthermore, Property 3 states that (5.38) has a finite number N(k) > 1 of solutions for k = 1, ..., d-1, but Property 2 states that if (5.38) has more than one solution, then it has an infinite number of solutions. Only a finite number N(k) of this infinite number of solutions correspond to quotient polynomials such that

$$\frac{\bar{f}(w)}{\bar{u}_k(w)} = \frac{\bar{g}(w)}{\bar{v}_k(w)} \qquad (5.39)$$

is a polynomial. The other solutions, of which there is an infinite number, define polynomials ū_k(w) and v̄_k(w) such that (5.39) is a rational function.

Since (5.38) has an infinite number of solutions for k = 1, ..., d-1, it follows that S_k(α_0, θ_0)M_j is rank deficient, and its condition number κ(S_k(α_0, θ_0)M_j) is infinite for these values of k. The situation defined by k = d is different because (5.38) has a unique solution in this circumstance, and thus κ(S_d(α_0, θ_0)M_j) is finite:

$$\kappa\left( S_k\left( \alpha_0, \theta_0 \right) M_j \right) = \infty, \quad k = 1, \ldots, d-1, \qquad \kappa\left( S_d\left( \alpha_0, \theta_0 \right) M_j \right) < \infty. \qquad (5.40)$$

It follows from this equation that the smallest singular value σ_min of S_k(α_0, θ_0)M_j satisfies

$$\sigma_{\min}\left( S_k\left( \alpha_0, \theta_0 \right) M_j \right) = 0, \quad k = 1, \ldots, d-1, \qquad \sigma_{\min}\left( S_d\left( \alpha_0, \theta_0 \right) M_j \right) > 0. \qquad (5.41)$$
Equation (5.40) defines a criterion for the calculation of d in terms of the stability of (5.38), and (5.41) defines d in terms of the minimum distance of S_k(α_0, θ_0)M_j to singularity, where it can be assumed that the nearest singular matrix is not a Sylvester matrix, because the smallest singular value of a matrix is a measure of its distance to singularity such that its structure is not retained in its singular form.

The methods discussed in this section are described in Algorithm 5.2.
Algorithm 5.2 : The calculation of the degree of an AGCD of a polynomial and its derivative. An algorithm according to an embodiment is presented for calculating the degree of an AGCD of a polynomial and its derivative.
Input : An inexact polynomial f(y)
Output : The degree d of an AGCD of f(y) and its derivative
Begin
1. Calculate f^(1)(y), and pre-process f(y) and f^(1)(y) using the methods described above.
2. Form the polynomials f̄(w) and ḡ(w), which are defined in equation (5.11).
3. For k = 1, ..., m-1 % loop for the subresultant matrices
   (i) Form the matrix S_k(α_0, θ_0).
   (ii) For j = 1, ..., 2m-2k+1 % loop for the columns of S_k(α_0, θ_0)
        (a) Define the matrix S_k(α_0, θ_0)M_j.
        (b) Calculate the condition number κ(S_k(α_0, θ_0)M_j) and its smallest singular value σ_{k,j}.
   End j
   (iii) Calculate
        κ_k = min { κ(S_k(α_0, θ_0)M_j) : j = 1, ..., 2m-2k+1 },
        σ_k = max { σ_{k,j} : j = 1, ..., 2m-2k+1 },
        and the columns at which these minimum and maximum values occur.
End k
4. Use the variation of κ_k and σ_k with k = 1, ..., m-1 to determine estimates of the degree of an AGCD.
End
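A non-limiting sketch of step 3 of Algorithm 5.2 is given below; the function name is illustrative, and the matrices S_k(α_0, θ_0) are assumed to have been formed already.

```python
import numpy as np

def kappa_and_sigma(Sk_list):
    """For each k, min condition number and max smallest singular value over j."""
    kappa_k, sigma_k = [], []
    for Sk in Sk_list:
        kappas, sigmas = [], []
        for j in range(Sk.shape[1]):
            A = np.delete(Sk, j, axis=1)        # S_k(alpha0, theta0) M_j
            s = np.linalg.svd(A, compute_uv=False)
            kappas.append(s[0] / s[-1])          # condition number
            sigmas.append(s[-1])                 # smallest singular value
        kappa_k.append(min(kappas))
        sigma_k.append(max(sigmas))
    return kappa_k, sigma_k   # their variation with k estimates d, per (5.40)-(5.41)
```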
One skilled in the art will appreciate that algorithms 5.1 and 5.2 achieve the same objective and are therefore equivalent.
Section 6
Next, embodiments of the present invention for determining the AGCD of an inexact polynomial and its derivative will be described, using the method of structured non-linear total least norm (SNTLN), applied to the Sylvester resultant matrix, where the degree of the AGCD has been previously determined.
The Sylvester matrix S(f, f^(1)) of the given inexact polynomial f = f(y) and its derivative f^(1) = f^(1)(y) is non-singular, and it is therefore required to determine perturbations δf(y) and δf^(1)(y), of minimum magnitude, such that the Sylvester matrix

$$S\left( f + \delta f, f^{(1)} + \delta f^{(1)} \right) = S\left( f, f^{(1)} \right) + S\left( \delta f, \delta f^{(1)} \right) \qquad (6.1)$$

is a structured low rank approximation of S(f, f^(1)). This equation follows from equation (3.9) because the entries of S(f, f^(1)) are linear functions of the coefficients of the polynomials.
Again, it is assumed that the pre-processing operations described above have been performed. One skilled in the art will appreciate from the description below that a still further pre-processing operation is desirable before applying the method of SNTLN to construct a structured low rank approximation of S(f + δf, f^(1) + δf^(1)). It will be recalled from the above that the pre-processing operations on the given inexact polynomials

$$f(y) = \sum_{i=0}^{m} a_i\, y^{m-i} \quad \text{and} \quad f^{(1)}(y) = \sum_{i=0}^{m-1} b_i\, y^{m-1-i}, \qquad b_i = (m-i)\, a_i,$$

give the polynomials

$$p(w) = \sum_{i=0}^{m} \bar{a}_i\, \theta_0^{m-i}\, w^{m-i} \quad \text{and} \quad q(w) = \sum_{i=0}^{m-1} \bar{b}_i\, \theta_0^{m-1-i}\, w^{m-1-i}, \qquad (6.2)$$

where ā_i and b̄_i are the coefficients of f(y) and f^(1)(y) after normalisation by the geometric means of their coefficients.
Equation (6.1) is therefore replaced by

$$S\left( p + \delta p, \alpha_0 \left( q + \delta q \right) \right) = S\left( p, \alpha_0 q \right) + S\left( \delta p, \alpha_0\, \delta q \right), \qquad (6.3)$$

where δp = δp(w) and δq = δq(w), α_0 is computed from (4.13), and the method of SNTLN requires the iterative solution of a non-linear equation for computing the coefficients of δp(w) and δq(w). This non-linear equation is derived from S(p + δp, α_0(q + δq)). As indicated above, it is numerically preferable to normalise p(w) and q(w), preferably by the geometric mean of their coefficients, to balance the matrix. Also, the values of α and θ are refined in the iterative scheme, using the initial values α_0 and θ_0, that is, the solutions of the linear programming problem (4.13). Therefore, equation (6.2) can be written as

$$\bar{f}(w) = \sum_{i=0}^{m} \left( \bar{a}_i\, \theta^{m-i} \right) w^{m-i} \quad \text{and} \quad \bar{g}(w) = \sum_{i=0}^{m-1} \left( \bar{b}_i\, \theta^{m-1-i} \right) w^{m-1-i}, \qquad (6.4)$$

where θ ≈ θ_0, and the coefficients ā_i and b̄_i are expressed with the constant θ_0 retained in their denominators. One skilled in the art appreciates that the constant θ_0 is retained in the denominator of these expressions for ā_i and b̄_i because it simplifies the update procedure for θ between successive iterations.
Structured low rank approximation of the Sylvester Matrix
Embodiments of the present invention compute the structured low rank approximation of the Sylvester matrix S(f̄, α_0 ḡ), where f̄(w) and ḡ(w) are defined above in (6.4). In particular, S(f̄, α_0 ḡ) is denoted by S(α, θ) to emphasise that the method of SNTLN is used to compute the optimal values of α and θ. It follows that S(α, θ) is the Sylvester matrix of f̄(w, θ) and α ḡ(w, θ), where arbitrary values of α and θ are used, as opposed to the solutions α_0 and θ_0 of the linear programming problem in (4.13), because they will be refined by the method of SNTLN, using α_0 and θ_0 as the initial values.
As noted above, it is assumed that the degree d of an AGCD of f̄(w) and ḡ(w) has been calculated, as has the column q of S_d(α_0, θ_0) to move to the right hand side, such that the error in the approximate equation (5.29) is minimised for k = d and j = q.
According to embodiments of the present invention, structured perturbations are applied to the approximation in order to make it an equation that has an exact solution. In particular, the perturbations of the coefficients of f̄(w) and ḡ(w) are, respectively,

$$z_i\, \theta^{m-i}, \quad i = 0, \ldots, m, \quad \text{and} \quad \alpha\, z_{m+1+i}\, \theta^{m-1-i}, \quad i = 0, \ldots, m-1,$$

and therefore the d-th subresultant matrix of these structured perturbations is

$$B_d = B_d(\alpha, \theta, z) \in \mathbb{R}^{(2m-d) \times (2m-2d+1)},$$

where z = [z_0 z_1 ⋯ z_{2m}]^T ∈ R^{2m+1}.
Applying the method of SNTLN to computing a structured low rank approximation of S(α_0, θ_0) requires that the q-th column of B_d be removed, and therefore the approximation equation (5.29) becomes

$$\left( \left( S_d(\alpha, \theta) + B_d(\alpha, \theta, z) \right) M_q \right) x = c_d(\alpha, \theta) + h_d(\alpha, \theta, z), \qquad (6.6)$$

where c_d and h_d are the q-th columns of S_d(α, θ) and B_d(α, θ, z) respectively,

$$c_d(\alpha, \theta) = S_d(\alpha, \theta)\, e_q \quad \text{and} \quad h_d(\alpha, \theta, z) = B_d(\alpha, \theta, z)\, e_q,$$

and e_q ∈ R^{2m-2d+1} is the q-th unit basis vector.
One skilled in the art will note that c_d and h_d may not be functions of α, depending upon the column q:

$$c_d = c_d(\theta), \quad h_d = h_d(\theta, z), \quad \text{if } 1 \leq q \leq m - d,$$
$$c_d = c_d(\alpha, \theta), \quad h_d = h_d(\alpha, \theta, z), \quad \text{if } m - d + 1 \leq q \leq 2m - 2d + 1. \qquad (6.7)$$
The following theory is developed assuming m - d + 1 ≤ q ≤ 2m - 2d + 1, but one skilled in the art will appreciate that the dependencies of c_d and h_d on α are removed if 1 ≤ q ≤ m - d. Equation (6.6) is a non-linear equation for α, θ, x and z that is solved by the Newton-Raphson method. The residual that is associated with an approximate solution of this equation is

$$r(\alpha, \theta, x, z) = c_d(\alpha, \theta) + h_d(\alpha, \theta, z) - \left( S_d(\alpha, \theta) + B_d(\alpha, \theta, z) \right) M_q\, x, \qquad (6.8)$$

and thus if r̃ is defined as

$$\tilde{r} := r(\alpha + \delta\alpha, \theta + \delta\theta, x + \delta x, z + \delta z),$$

then

$$\tilde{r} = c_d(\alpha + \delta\alpha, \theta + \delta\theta) + h_d(\alpha + \delta\alpha, \theta + \delta\theta, z + \delta z) - \left( S_d(\alpha + \delta\alpha, \theta + \delta\theta) + B_d(\alpha + \delta\alpha, \theta + \delta\theta, z + \delta z) \right) M_q (x + \delta x),$$

to a first order. It follows that

$$\tilde{r} = r - \left( S_d + B_d \right) M_q\, \delta x + \left( \frac{\partial h_d}{\partial z} - \frac{\partial \left( B_d M_q x \right)}{\partial z} \right) \delta z + \left( \frac{\partial c_d}{\partial \alpha} + \frac{\partial h_d}{\partial \alpha} - \left( \frac{\partial S_d}{\partial \alpha} + \frac{\partial B_d}{\partial \alpha} \right) M_q x \right) \delta\alpha + \left( \frac{\partial c_d}{\partial \theta} + \frac{\partial h_d}{\partial \theta} - \left( \frac{\partial S_d}{\partial \theta} + \frac{\partial B_d}{\partial \theta} \right) M_q x \right) \delta\theta, \qquad (6.9)$$

and again one skilled in the art will note that the exact forms of c_d and h_d, and their derivatives, depend on the value of q, as shown in (6.7).
To emphasise the above, one skilled in the art might consider it instructive to consider example 6.1 below.

Example 6.1: If q = m - d + 3 > m - d, then c_d = c_d(α, θ) and h_d = h_d(α, θ, z), and the vector c_d and its partial derivatives ∂c_d/∂θ and ∂c_d/∂α each contain the column vector 0_{m-d-2} of zeros of length m - d - 2. The vectors h_d, ∂h_d/∂θ and ∂h_d/∂α have similar forms, and the remaining partial derivatives are calculated in a similar manner.
It is still assumed that q > m - d, and therefore the general expression for hd is
Figure imgf000057_0003
where and
Figure imgf000058_0001
It therefore follows that
Figure imgf000058_0002
i=Q
which enables the penultimate term in (6.9) to be simplified. Also, there exists a matrix Y_d = Y_d(α, θ, x) such that

Y_d(α, θ, x) z = B_d(α, θ, z) M_q x,

for all z, x, α and θ. It therefore follows that differentiating both sides of this equation with respect to z gives

∂(B_d M_q x)/∂z = Y_d(α, θ, x),
and thus (6.9) simplifies to

[equation (6.10) displayed as an image in the original]

The Newton-Raphson method is used to calculate z, α, x and θ. The j-th iteration in the Newton-Raphson method for calculating them is obtained from (6.10),

[equation (6.11) displayed as an image in the original]

where r^(j) = r^(j)(α, θ, x, z), the coefficient matrix and right hand side are displayed as an image in the original, and the values of α, θ, x and z at the (j+1)-th iteration are

α^(j+1) = α^(j) + δα^(j), θ^(j+1) = θ^(j) + δθ^(j), x^(j+1) = x^(j) + δx^(j), z^(j+1) = z^(j) + δz^(j).
The initial value of z is z^(0) = 0 because the given data is the inexact data, and the initial values of α and θ are α_0 and θ_0, which are calculated from (4.13).
Equation (6.11) is of the form

C y = q, (6.12)

(it should be noted that the vector q in (6.12) should not be confused with the integer q in M_q and e_q), where

[the matrix C and the vectors y and q of (6.13) are displayed as images in the original]
It is necessary to calculate the vector y of minimum magnitude that satisfies (6.12), that is, the solution that is closest to the given inexact data is required. Since

[equation (6.14) displayed as an image in the original]

where x_0, the initial value of x, is calculated from (6.8) and defined in (6.15),

[equation (6.15) and the matrix E are displayed as images in the original]

and y is defined in (6.13). It is noted that E is constant and not updated between iterations. The minimisation of (6.14) subject to (6.12) is a least squares minimisation with an equality constraint (the LSE problem),

min_y ‖E y - p‖ subject to C y = q,

which can be solved using, for example, the QR decomposition. This LSE problem is solved at each iteration, where C, q and p are updated between successive iterations.
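By way of illustration, the following MATLAB sketch shows the null-space method for an LSE problem of this form; the function name lse_qr and its argument names are illustrative and do not form part of the appendix code. It mirrors steps 4.1 to 4.5 of Algorithm 6.1 below.

function y = lse_qr(C, q, E, p)
% Minimal sketch of the LSE problem min ||E*y - p|| subject to C*y = q,
% solved by the null-space method with the QR decomposition of C'.
[mC, nC] = size(C);        % C has fewer rows than columns
[Q, R] = qr(C');           % C' = Q*[R1; 0]
R1 = R(1:mC, 1:mC);
w1 = R1' \ q;              % the constraint C*y = q fixes w1
EQ = E*Q;
E1 = EQ(:, 1:mC);
E2 = EQ(:, mC+1:nC);
w2 = pinv(E2)*(p - E1*w1); % minimise the residual over the null space of C
y  = Q*[w1; w2];
end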
Algorithm 6.1 : SNTLN for a Sylvester matrix of a polynomial and its derivative

Algorithm 6.1 shows the application of SNTLN for calculating a structured low rank approximation of S(α, θ).

Input : An inexact polynomial f(y) of degree m

Output : A structured low rank approximation of the Sylvester matrix, S(f(y), f^(1)(y)), of f(y) and its derivative f^(1)(y)
Begin
1. Transform f(y) and g(y) to f(w) and g(w), which are defined in equation (6.4), by normalising each polynomial, preferably by its geometric mean, making the substitution (4.10) and solving the linear programming problem (4.13) for α_0 and θ_0.

2. Calculate the integers d and q, and construct the matrix M_q and vector e_q.
3. % Initialise the data

3.1 Set z = z^(0) = 0, which yields B_d = h_d = 0 and ∂B_d/∂θ = ∂B_d/∂α = ∂h_d/∂θ = ∂h_d/∂α = 0.

3.2 Calculate Y_d, S_d and c_d, and the derivatives ∂S_d/∂θ, ∂S_d/∂α, ∂c_d/∂θ and ∂c_d/∂α, for θ = θ_0 and α = α_0, and the initial value x_0 of x, which is defined in (6.15). Calculate the initial value of q, which is equal to the residual,

q = r(α_0, θ_0, x_0, 0),

and set the initial value of p, p = 0.

3.3 Define the matrices C and E.
4. % Loop for the iterations
% Use the QR decomposition to solve the LSE problem at each iteration

Repeat

4.1 Compute the QR decomposition of C^T,

C^T = Q [R ; 0].

4.2 Set w_1 = R^(-T) q.

4.3 Partition EQ as EQ = [E_1 E_2], where the orders of E_1 and E_2 are displayed as an image in the original.

4.4 Compute w_2 = E_2^†(p - E_1 w_1), where † denotes the pseudo-inverse.

4.5 Compute the solution y = Q [w_1 ; w_2].

4.6 Set x := x + δx, z := z + δz, α := α + δα and θ := θ + δθ.

4.7 Update S_d, B_d, c_d, h_d and Y_d, and their derivatives with respect to θ and α, and therefore C, from α, θ, x and z. Compute the residual

r(α, θ, x, z) = (c_d + h_d) - (S_d + B_d) M_q x,

and thus update q. Update p from α, θ, x and z.

Until the normalised residual is smaller than a prescribed threshold.

One skilled in the art will appreciate that the residual threshold indicated above can be changed to other values. The various values of the threshold represent a balance between the number of iterations needed and the margin of error required; the former increases as the latter decreases.

End.
One skilled in the art will appreciate that algorithm 6.1 for calculating a structured low rank approximation of a Sylvester matrix enables the corrected polynomials

f̃(w) = Σ_{i=0}^{m} (ā_i + z̄_i) θ̄^(m-i) w^(m-i), (6.16)

and

g̃(w) = Σ_{i=0}^{m-1} (b̄_i + z̄_(m+1+i)) θ̄^(m-1-i) w^(m-1-i), (6.17)

where z̄_i, i = 0, ..., 2m, and θ̄ are the values of the perturbations z_i, i = 0, ..., 2m, and θ respectively at the termination of the algorithm, to be computed. These polynomials have a non-constant GCD, but it is still required to compute their GCD, which is addressed below.

Calculating the GCD of the corrected polynomials
The polynomials defined by (6.16) and (6.17) have a non-constant GCD, but it cannot be computed directly from the Sylvester matrix. Rather, it is first preferable to compute the coprime polynomials from which the GCD of the polynomials (6.16) and (6.17) can be computed.
The coprime polynomials are computed from the vector x, which is defined in (5.29), and thus the vectors c_k(θ) and e_k(θ) can be calculated from (5.30), where k = d, the degree of an AGCD, j = q, the column index of S_d(α, θ) that defines the vector on the right hand side of (5.29), and θ̄ and ᾱ are the values of θ and α at the termination of algorithm 6.1.
The least squares solution of (5.29) is

x = (S_d(ᾱ, θ̄) M_q)^† S_d(ᾱ, θ̄) e_q, (6.18)

where S_d(ᾱ, θ̄) is the Sylvester matrix of the polynomials f̃(w) and g̃(w) that are defined in (6.16) and (6.17), and where † denotes the pseudo-inverse. The vectors c_k(θ̄) and e_k(θ̄), which are the coefficients of the coprime polynomials, can then be calculated from (5.30). The GCD d̄(w) is calculated from the polynomial decomposition (5.13), which is not exact because the method of SNTLN has been applied with k = d, such that

f̃(w) ≈ ū_d(w) d̄_d(w) and ᾱ g̃(w) ≈ v̄_d(w) d̄_d(w), (6.19)

where f̃(w) and g̃(w) are normalised by the geometric means of their coefficients. It therefore follows from (6.16) and (6.17) that these polynomials are redefined as

[equation (6.20) displayed as an image in the original]

and

[equation (6.21) displayed as an image in the original]
Equation (6.19) can be written in a form that is very similar to (5.31),

[equation (6.22) displayed as an image in the original]

where

[the matrices and vectors of (6.22) are displayed as images in the original]

the entries of the vectors are the coefficients of the polynomials (6.20) and (6.21) respectively, and C_(k,1)(ū_k, θ_0) and C_(k,2)(v̄_k, θ_0) are defined in (5.33) and (5.34) respectively.
The computations, that is, the structured low rank approximation of the Sylvester matrix, the calculation of x from (6.18) and the solution of (6.22) for the coefficients of the GCD, must be implemented for each GCD computation in the polynomial root solver above. It is clear that the result of the i-th AGCD computation defines the data for the (i+1)-th AGCD computation.
One skilled in the art will appreciate that when a sequence of AGCD computations is performed, and in particular that the implementation of the polynomial root solver in section 2, figure 4, above contains several AGCD computations, the substitution (4.10) is made in each of these computations. It therefore follows that each polynomial q_i(y) in (2.1) and (2.2) is expressed in a different basis. An example of the foregoing will now be given.
Example 6.2. Consider the AGCD computations when (2.1) and (2.2) are implemented with inexact polynomials. If four AGCD computations are performed, then the substitutions

y = θ_1 w_1, w_1 = θ_2 w_2, w_2 = θ_3 w_3, w_3 = θ_4 w_4, (6.23)

are made in the first, second, third and fourth AGCD computations respectively. It therefore follows that if the given inexact polynomial is

[polynomial displayed as an image in the original]

then the polynomials that result from the first and second AGCD computations are

[polynomials displayed as images in the original]

and the polynomials that result from the third and fourth AGCD computations are

[polynomials displayed as images in the original]

Each of the polynomials q_i(w_i) is expressed in the power basis, but the independent variables y, w_1, w_2, w_3 and w_4 are different. The polynomial divisions (2.3) therefore require that the inverses of the substitutions (6.23) be made, so that all the polynomials in these divisions are expressed in the same independent variable. For example, if the results of the divisions (2.3) are expressed in a single independent variable, they are evaluated as

[equation (6.24) displayed as an image in the original]

and

[equation (6.25) displayed as an image in the original]
It will be appreciated that the above method of SNTLN calculates a structured low rank approximation of the Sylvester resultant matrix to compute an approximate greatest common divisor of an inexact polynomial and its derivative. It was assumed that the polynomials have been preprocessed using the methods discussed in section 4, and that the degree of the AGCD has been computed using the methods discussed in section 5.
It was shown that the coefficients of an AGCD cannot be obtained directly from the Sylvester matrix, and that additional calculations are required. Also, the preprocessing operations cause the independent variable in the given polynomial to change, and it was shown that caution must therefore be taken when polynomial divisions in the polynomial root finder in section 2.1 are implemented.
Section 7
A series of AGCD calculations forms the first stage in the polynomial root solver in section 2.1 , and the use of the Sylvester resultant matrix for this computation was disclosed above.
Embodiments of the present invention are not limited to the above. Embodiments of the present invention can equally well use approximate factorisation of a polynomial and its derivative in determining the AGCDs, especially assuming that the pre-processing has been performed and that the degree of the AGCD to be found is already known from the above. Further details regarding approximate polynomial factorisation in determining AGCDs can be found in the appendix at pages 95 to 119. Nevertheless, there now follows an algorithm for the approximate factorisation of an inexact polynomial and its derivative.
Algorithm 7.1 : Approximate factorisation of an inexact polynomial and its derivative
Input : An inexact polynomial f (y)
Output : An AGCD and associated quotient polynomials of the theoretically exact forms of f(y) and f^(1)(y)
Begin
(a.1) Calculate α_0 and θ_0 using the method of linear programming described above.

(a.2) Calculate the degree, d, of an AGCD of f(y) and f^(1)(y) using the techniques described above.
% Initialise the data for the solution of the LSE problem
(b.1) Calculate the coefficients of u_d(w, θ) and v_d(w, θ).

(b.2) Form the matrices C_(d,1)(c_d, θ_0) and C_(d,2)(e_d, θ_0) and their derivatives at θ = θ_0, and form the vector displayed as an image in the original.

(b.3) Calculate the initial values displayed as an image in the original from

[equations displayed as images in the original]

and the initial residual from

[equation displayed as an image in the original]

(b.4) Calculate the initial values of the components of the derivative of the residual, for θ = θ_0, from

[equation displayed as an image in the original]

(b.5) Initialise some variables:

[equations displayed as images in the original]

(b.6) Calculate the quantity defined in (7.22).

(b.7) Evaluate the quantities displayed as an image in the original at θ = θ_0.

(b.8) Set the structured perturbations equal to zero.

(b.9) Set the quantities defined in (7.17) and (7.18) respectively equal to zero. Initialise C from (7.30), and define E, which is defined in (7.31).
% The iterative solution of the LSE problem.

Iteration = 0; % The counter for the number of iterations

Repeat % Use QR to solve the LSE problem at each iteration

(c.1) Iteration = Iteration + 1.

(c.2) Compute the QR decomposition of C^T.

(c.3) Set w_1 = R^(-T) t.

(c.4) Partition EQ as EQ = [E_1 E_2].

(c.5) Compute w_2 = E_2^†(p - E_1 w_1), where † denotes the pseudo-inverse.

(c.6) Compute the solution y = Q [w_1 ; w_2], and hence the corrections to the variables.

(c.7) Add each correction to its variable, for example θ := θ + δθ.

(c.8)-(c.13) Update the vectors, matrices and derivatives that depend on the corrected values, including S, T and C, which are defined in (7.1) and the related equations in the appendix; the detailed update formulae are displayed as images in the original.

(c.14) Compute the residual, which is defined in (7.11), and thus update t. Update p, which is defined in (7.29), from the corrected values.

Until the normalised residual is smaller than a prescribed tolerance OR Iteration > 50.

End
The above algorithm 7.1 enables the GCD of the corrected forms of f(w) and g(w) to be calculated. It is implemented for each of the AGCD computations in the polynomial root solver in section 2.1, and it is therefore used several times to calculate the polynomials that are required for the polynomial divisions in the polynomial root solver. It will be appreciated that the above algorithm makes reference to numerous equations, which can be found in the appendix on pages 95 to 119.
It will be recalled from figure 4, in particular step 410, that the next step in signal processing is to perform polynomial division. Embodiments of the present invention will now be described for polynomial or signal division.
The signal processing described above with reference to figure 4 comprises two steps of polynomial divisions after the AGCD computations have been implemented. Embodiments of the present invention use any one of several methods for performing polynomial division.
Polynomial division is equivalent to the deconvolution of two polynomials, which is an ill-posed problem because even if the ratio p(y)/q(y) is a polynomial, a minor perturbation in either p(y) or q(y) causes the ratio to become a rational function. Firstly, a form of polynomial division, in the form of polynomial deconvolution that reduces to a least squares solution of linear algebraic equations, will be presented, followed by two linear structure preserving matrix methods for polynomial division.
The polynomials, or signals, that define the inputs to the polynomial division are the results of the AGCD computations, implemented either by the Sylvester matrix or by approximate factorisation of two polynomials. It is necessary for both methods that the polynomials be expressed in the same independent variable, as per Example 6.2, and therefore divisions of the form (6.24) and (6.25) should be performed. The procedures below assume that the foregoing has been undertaken.
Section 8 : Methods for polynomial division
Polynomial multiplication and Cauchy matrices
Let f(y) be a polynomial of degree m,

f(y) = Σ_{i=0}^{m} a_i y^(m-i),

and let g(y) be its derivative, of degree m-1, with coefficients (m-i)a_i,

g(y) = Σ_{i=0}^{m-1} (m-i) a_i y^(m-1-i),

and therefore the polynomial h(y) = f(y)g(y) is of degree 2m-1,

h(y) = Σ_{i=0}^{2m-1} h_i y^(2m-1-i).

This polynomial multiplication can be written in matrix form as

C(f) g = h, (8.1)

where C(f) ∈ R^(2m×m), g ∈ R^m and h ∈ R^(2m) are given by

[the matrix C(f) and the vectors g and h are displayed as images in the original]

The deconvolution problem requires the calculation of g(y), given h(y) and f(y), and the least squares solution of (8.1), g = C(f)^† h, is the simplest estimate of the coefficients of g(y). This solution results in the residual

[equation displayed as an image in the original]

and therefore g(y) is a polynomial approximation of a rational function.
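To illustrate this least squares deconvolution, the following MATLAB sketch builds the convolution matrix C(f) and computes the least squares solution; the function name deconv_ls is illustrative, and the construction of C(f) mirrors the role of the cauchy helper used in Appendix 2.

function g = deconv_ls(f, h)
% Minimal sketch of least squares deconvolution: the columns of C are
% shifted copies of f, so that C*g = conv(f, g), and g is the least
% squares solution of C(f)g = h.
f = f(:); h = h(:);
n = length(h) - length(f) + 1;   % number of coefficients of g(y)
C = zeros(length(h), n);
for j = 1:n
    C(j:j+length(f)-1, j) = f;   % shifted copies of f form the columns
end
g = C \ h;                       % exact if h(y) = f(y)g(y), else a
end                              % polynomial approximation of h/f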
This method is used for each of the polynomial divisions in the polynomial root solver of figure 4 or section 2.1 of the appendix, and therefore the r_max deconvolutions for the computation of the polynomials h_i(y), i = 1, ..., r_max, are evaluated sequentially. This sequence of operations fails to consider the coupled nature of the convolutions; in particular, the polynomial q_1(y) appears in the first and second deconvolutions, the polynomial q_2(y) appears in the second and third deconvolutions, and more generally, the polynomial q_i(y) appears in the i-th and (i+1)-th deconvolutions. A structured matrix method that considers the coupled nature of both sets of deconvolutions in section 2.1, that is, the computation of the polynomials h_i(y), i = 1, ..., r_max, and w_i(y), i = 1, ..., r_max - 1, is described in the next section, and computational results show that this method yields better results.
Structured deconvolution

Embodiments of the present invention that use the method of structured total least norm (STLN) for the solution of the r deconvolution problems,

h_i(y) = f_(i-1)(y) / f_i(y), i = 1, ..., r, (8.2)

where the polynomial f_k(y) appears in the k-th and (k+1)-th deconvolutions, will now be presented.
The degrees of the polynomials are

m_i = deg f_i(y), i = 0, ..., r, and n_i = deg h_i(y) = m_(i-1) - m_i, i = 1, ..., r,

where

M = Σ_{i=1}^{r} (m_(i-1) + 1), M_1 = M + (m_r + 1) and N = Σ_{i=1}^{r} (n_i + 1).

Equation (8.2) can be written in matrix form as

[equation (8.3) displayed as an image in the original]

where the coefficient matrix, the solution vector and the right hand side vector are displayed as images in the original, and the coefficient matrix in (8.3) is of order M × N.
It is assumed that the coefficients of the polynomials are inexact, that is, they represent signals corrupted by noise, and therefore (8.3) does not possess an exact solution. It is therefore desirable to add a structured matrix to the coefficient matrix, and a structured vector to the right hand side, of this equation. In particular, let z_i be the vector of perturbations added to the vector of coefficients of the polynomial f_i(y), i = 0, ..., r, and let z be the vector formed from the vectors z_i,

[the vector z and the matrices Z_i formed from the perturbations are displayed as images in the original]
A matrix of structured perturbations is added to each of the Cauchy matrices C(f_i), i = 1, ..., r, and therefore the coefficient matrix in (8.3) is replaced by

B(z_1, ..., z_r) = C(f_1, ..., f_r) + E(z_1, ..., z_r),

[the matrix E(z_1, ..., z_r) is displayed as an image in the original]

where B(z_1, ..., z_r) ∈ R^(M×N) and E(z_i), i = 1, ..., r, are Cauchy matrices.
Consider now the vector on the right hand side of (8.3), the perturbed form of which is

[equation displayed as an image in the original]

where the matrix P is displayed as an image in the original. It follows that the corrected form of (8.3) is

[equation (8.4) displayed as an image in the original]
The residual due to an approximate solution of (8.4) is

r = r(z) = (f + Pz) - (C(f_1, ..., f_r) + E(z_1, ..., z_r)) h,

where f denotes the vector on the right hand side of (8.3), and therefore a first order Taylor expansion of r(z + δz) yields

r(z + δz) = (f + P(z + δz)) - (C(f_1, ..., f_r) + E(z_1 + δz_1, ..., z_r + δz_r))(h + δh)
          = r(z) + Pδz - (C(f_1, ..., f_r) + E(z_1, ..., z_r))δh - [first order term in δz, displayed as an image in the original], (8.6)

to first order. There exist matrices Y_i(h_i) such that

E_i(z_i) h_i = Y_i(h_i) z_i, i = 1, ..., r,

from which it follows that

E(z_1, ..., z_r) h = Y(h) z, (8.7)

where the matrix Y = Y(h), which is formed from the matrices Y_i(h_i), is displayed as an image in the original.
The substitution of (8.7) into (8.6) yields

r(z + δz) = r(z) - (C + E)δh - (Y - P)δz,

and thus the Newton-Raphson method requires the iterative solution of

[(C + E)  (Y - P)] [δh ; δz] = r,

which is an under-determined equation, where r = r(z) and [(C + E) (Y - P)] ∈ R^(M×(N+M_1)).

If h^(0) and z^(0) = 0 are the initial values of h and z, respectively, in the Newton-Raphson method, then the (j+1)-th iteration requires the minimisation of

[equation displayed as an image in the original]

subject to

[equation displayed as an image in the original]

where the initial value of h is calculated from (8.3),

[equation (8.8) displayed as an image in the original]
This is an LSE problem,

min_y ‖F y - s‖ subject to G y = t,

where the matrices F and G and the vectors s and t are displayed as images in the original, and t = r.
Algorithm 8.1 below shows how the QR decomposition can be used to solve the LSE problem.

Algorithm 8.1 : Deconvolution using the QR decomposition

Input : The r + 1 polynomials f_i(y), i = 0, ..., r

Output : The r polynomials h_i(y), i = 1, ..., r
Begin
1. Set z^(0) = 0 and calculate h^(0) from (8.8).

2. Repeat

(a) Compute the QR decomposition of G^T,

G^T = Q [R ; 0].

(b) Set w_1 = R^(-T) t.

(c) Partition FQ as

FQ = [F_1 F_2],

where the orders of F_1 and F_2 are displayed as an image in the original.

(d) Compute

w_2 = F_2^†(s - F_1 w_1).

(e) Compute the solution

y = Q [w_1 ; w_2].

(f) Set h := h + δh and z := z + δz.

(g) Update E and Y, and compute the residual r(z) from

[equation displayed as an image in the original]
Structured deconvolution and preprocessing

The structured matrix method for deconvolution considered above can be modified slightly by preprocessing the polynomials f_i(y), i = 0, ..., r, using methods similar to those described above (and also in Chapter 4 of the appendix), before the deconvolution is performed. In particular, the coefficients of these polynomials may vary widely in magnitude, and it is therefore desirable to use the parameter substitution (4.10) on the polynomials f_i(y), i = 0, ..., r. Let the coefficients of the polynomial f_i(y), i = 0, ..., r, be a_(i,j), j = 0, ..., m_i,

[equation displayed as an image in the original]

and, following the analysis in Section 4 above, or Chapter 4 of the appendix, the polynomials are normalised by the geometric means of their coefficients, which gives

[equation displayed as an image in the original]

whose coefficients are ā_(i,j), i = 0, ..., r, j = 0, ..., m_i, where the expression for ā_(i,j) is displayed as an image in the original.
The substitution (4.10) gives the polynomial

[equation (8.9) displayed as an image in the original]

where the optimal value θ_0 of θ is calculated by solving the minimisation problem

[equation displayed as an image in the original]

which is equivalent to the following minimisation problem:

Minimise t/s subject to

t ≥ |ā_(i,j)| θ^(m_i - j), i = 0, ..., r, j = 0, ..., m_i,
s ≤ |ā_(i,j)| θ^(m_i - j), i = 0, ..., r, j = 0, ..., m_i,
θ > 0.

The transformations

T = log t, S = log s, φ = log θ, ā'_(i,j) = log |ā_(i,j)|,

enable this constrained minimisation problem to be written as

Minimise T - S subject to

T - (m_i - j)φ ≥ ā'_(i,j), i = 0, ..., r, j = 0, ..., m_i,
-S + (m_i - j)φ ≥ -ā'_(i,j), i = 0, ..., r, j = 0, ..., m_i.

This minimisation problem can be written as

Minimise [1 -1 0] [T ; S ; φ] subject to A [T ; S ; φ] ≥ b, (8.10)

where A is of order (2 Σ_{i=0}^{r} (m_i + 1)) × 3.

The solution of (8.10) enables θ_0 to be calculated, and therefore it follows from (8.9) that the linear structure preserving matrix method in the previous section is implemented on the polynomials

[equation displayed as an image in the original]
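A minimal MATLAB sketch of setting up and solving (8.10) with linprog is given below; the cell array abar and the constraint assembly are illustrative and follow the formulation set out above, not the appendix code.

% Illustrative sketch: assemble the constraints of (8.10) and solve with
% linprog, which minimises f'*x subject to A*x <= b; the constraints
% A*x >= b of (8.10) are therefore negated. x = [T; S; phi].
A = []; b = [];
for i = 1:numel(abar)
    mi = numel(abar{i}) - 1;
    for j = 0:mi
        c = log(abs(abar{i}(j+1)));           % log of the (j+1)th coefficient
        A = [A;  1  0 -(mi-j)]; b = [b;  c];  %  T - (mi-j)*phi >=  c
        A = [A;  0 -1  (mi-j)]; b = [b; -c];  % -S + (mi-j)*phi >= -c
    end
end
f = [1; -1; 0];              % minimise T - S
x = linprog(f, -A, -b);      % negate to obtain the A*x <= b form
theta0 = exp(x(3));          % theta = exp(phi)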
Section 9
Referring to figure 4, it can be appreciated that the final stage in signal processing, which, in effect, amounts to recovering the signal with reduced noise, is applying a non-linear least squares refinement to each of the simple roots.
A first embodiment of step 412 of the flowchart of figure 4 comprises solving a non-linear least squares problem.
Consider the problem

min_y h(y) = ½ ‖r(y)‖², (9.1)

where r = r(y) ∈ R^n, y = (y_i) ∈ R^m, m < n, and each residual r_i = r_i(y) is non-linear. One skilled in the art will appreciate that

∇h(y) = J(y)^T r(y),

and therefore at a stationary point

J(y)^T r(y) = 0, (9.2)

where J = J(y) = [∂r_i/∂y_j] ∈ R^(n×m).
The second derivative of h(y) is

∇²h(y) = J(y)^T J(y) + Σ_{i=1}^{n} r_i(y) ∇²r_i(y).

If the Hessian matrices G_i(y), i = 1, ..., n, are defined as

G_i(y) = ∇²r_i(y) = [∂²r_i/∂y_j ∂y_k],

then

∇²h(y) = J(y)^T J(y) + Q(y), Q(y) = Σ_{i=1}^{n} r_i(y) G_i(y), (9.3)

where G_i(y) = G_i(y)^T. The formulae for the Jacobian matrix J(y) and the Hessian matrices G_i(y), i = 1, ..., n, enable Newton's method for the minimisation of h(y) to be developed.
Specifically, consider a quadratic model of h(y) about y = y_k,

h(y_k + p_k) = h(y_k) + p_k^T J_k^T r(y_k) + ½ p_k^T (J_k^T J_k + Q_k) p_k, (9.4)

and this quadratic function achieves its minimum value when

(J_k^T J_k + Q_k) p_k = -J_k^T r_k, (9.5)

where

J_k = J(y_k), Q_k = Q(y_k), r_k = r(y_k) and J_k^T J_k + Q_k ∈ R^(m×m).

The vector p_k that satisfies this equation is called the Newton direction, and leads to the Newton iteration

y_(k+1) = y_k + p_k = y_k - (J_k^T J_k + Q_k)^(-1) J_k^T r_k, (9.6)

which converges quadratically if J_k^T J_k + Q_k is positive definite and the initial estimate y_0 is near the solution. If these conditions are satisfied and the quadratic model (9.4) is accurate, the iteration (9.6) converges quickly. It cannot, however, be guaranteed that J_k^T J_k + Q_k is positive definite, and moreover, the determination of the entries of Q_k involves the calculation of ½mn(m+1) second derivatives, which is significant. If J_k^T J_k + Q_k is not positive definite, the quadratic model (9.4) may not have a minimum, and it may not have a stationary point. If J_k^T J_k + Q_k is singular, a stationary point exists only if J_k^T r_k lies in the column space of J_k^T J_k + Q_k.
The Gauss-Newton iteration is derived from the Newton iteration (9.6) by neglecting the matrix Q_k, that is, the second derivatives of r_k, and therefore this iteration is

y_(k+1) = y_k - (J_k^T J_k)^(-1) J_k^T r_k. (9.7)

The iteration (9.7) is better behaved than the iteration (9.6) because J_k^T J_k is, at least, positive semi-definite, but Q_k may or may not be positive definite. The matrix inverse in (9.7) exists if the rank of J_k is equal to m, that is, J_k has full column rank, and this will be assumed.

It follows from (9.3) that the approximation J_k^T J_k + Q_k ≈ J_k^T J_k assumes that

‖Q_k‖ = ‖Σ_{i=1}^{n} r_i(y_k) G_i(y_k)‖

is small, that is, the residuals are small and/or they are only weakly non-linear. In this circumstance, the iterations (9.6) and (9.7) behave similarly, and convergence of the Gauss-Newton method is almost quadratic. If, however, the residuals are large, then the convergence of the Gauss-Newton iteration may be substantially inferior to the convergence of the Newton iteration.
The vector p_k, which is given by

p_k = -(J_k^T J_k + Q_k)^(-1) J_k^T r_k for the Newton iteration, and
p_k = -(J_k^T J_k)^(-1) J_k^T r_k for the Gauss-Newton iteration,

defines the direction of the iteration at the k-th stage,

y_(k+1) = y_k + p_k.

The expression for p_k for the Gauss-Newton iteration is obtained from the equation

J_k^T J_k p_k = -J_k^T r_k,

which is the solution of the linear least squares problem

min_(p_k) ‖J_k p_k + r_k‖,
but an equivalent formula does not exist for the Newton iteration. The Gauss-Newton iteration solves linear problems with only one iteration, and it has fast local convergence on weakly nonlinear problems. The Gauss-Newton iteration is used for the refinement of the initial estimates of the roots of the polynomial equations (2.4).
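By way of illustration, a minimal MATLAB sketch of the Gauss-Newton iteration (9.7) is given below; the function name gauss_newton and the function handles resfun and jacfun for r(y) and J(y) are illustrative and do not form part of the appendix code.

function y = gauss_newton(resfun, jacfun, y0, tol, maxit)
% Minimal sketch of the Gauss-Newton iteration y_{k+1} = y_k + p_k, with
% p_k the solution of the linear least squares problem min ||J*p + r||.
y = y0;
for k = 1:maxit
    r = resfun(y);        % residual vector r(y)
    J = jacfun(y);        % Jacobian matrix J(y), assumed full column rank
    p = J \ (-r);         % linear least squares solution of J*p = -r
    y = y + p;
    if norm(p) < tol
        break;            % the step is negligible; a minimum is attained
    end
end
end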
Refinement of the roots of a polynomial equation

Embodiments of the present invention that use the method of non-linear least squares to refine the initial estimates of the roots of a polynomial equation will now be described, assuming that the multiplicities of the roots are known.
Consider the polynomial p(y) of degree m with coefficients p_i,

p(y) = Σ_{i=0}^{m} p_i y^(m-i) = y^m + p_1 y^(m-1) + ... + p_(m-1) y + p_m,

where p_i ∈ R and p_0 = 1, and ~ denotes the correspondence between the polynomial p = p(y) and the vector of its normalized coefficients. If the distinct roots of p(y) are α_i, i = 1, ..., r, and the root α_i has multiplicity l_i, then

Σ_{i=1}^{r} l_i = m, (9.9)

and

[equations (9.10) and (9.11), which express the coefficients of the normalized polynomial as functions g_j of the roots, are displayed as images in the original]

It is clear that each function g_j is real and that its arguments are real and/or occur in complex conjugate pairs. Equation (9.11) leads to the equation G_l(a) = p, where

[the coefficient operator G_l(a), which is defined in (9.12), is displayed as an image in the original]
The pejorative manifold of a polynomial can now be defined.

Definition 9.1 : An ordered array of positive integers l, l = (l_1, ..., l_r), is called a multiplicity structure of degree m if (9.9) is satisfied. For a given multiplicity structure l, the set of vectors

[set definition displayed as an image in the original]

is called a pejorative manifold of multiplicity structure l, where G_l, which is defined in (9.12), is called the coefficient operator with respect to the multiplicity structure l. It follows from the theory above that the distinct roots α_j, j = 1, ..., r, of a polynomial, given their multiplicities, are the solutions of

G_l(a) = p. (9.13)
This is a set of m equations in r unknowns, where r < m if p(y) contains a multiple root, and m = r if and only if all the roots of p(y) are simple. These equations are solved by the method of non-linear least squares, and therefore it is necessary to determine the vector a that solves the minimisation problem

min_a ½ Σ_{j=1}^{m} (g_j(a) - p_j)².

Comparison of this function with h(y) in (9.1) shows that

r_j(a) = g_j(a) - p_j, j = 1, ..., m,

and the elements of the Jacobian matrix of the function r(a) are

∂r_j/∂a_i = ∂g_j/∂a_i.
The stationary condition (9.2) becomes

J^T [G_l(a) - p] = 0, (9.14)

which shows that the vector G_l(a*) - p is orthogonal to the tangent plane of the manifold

Π = { u : u = G_l(a), a ∈ C^r }

at u = G_l(a*), where a = a* is a solution of (9.14). The Jacobian matrix J = J(a) is

[the matrix J(a) is displayed as an image in the original]

and it can be shown that J(a) has full rank if the roots α_i, i = 1, ..., r, are distinct.
The coefficients of the normalized polynomial (9.10),

p(y)/p_0 = (y - α_1)^(l_1) (y - α_2)^(l_2) ... (y - α_r)^(l_r), (9.15)

can be obtained by repeated convolution, and this enables the expressions for g_j(a), j = 1, ..., m, to be derived. Algorithm 9.1 shows pseudo-code for calculating the entries g_j(a), j = 1, ..., m, of G_l(a).
Algorithm 9.1 : Calculation of G_l(a)

Input : The integers r and m, the roots α_i, i = 1, ..., r, and the multiplicity l_i of α_i

Output : The entries g_j(a), j = 1, ..., m, of the vector G_l(a).

Begin
s = [1]
for i = 1, 2, ..., r
    for l = 1, 2, ..., l_i
        s = conv(s, [1, -α_i]) % s is of length (m+1) after all the convolutions
    end for
end for
for j = 1, 2, ..., m
    g_j(a) = s(j+1)
end for
End
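A runnable MATLAB version of Algorithm 9.1 may be sketched as follows; the function name coefficient_operator and the argument names alpha and l are illustrative.

function g = coefficient_operator(alpha, l)
% Minimal sketch of Algorithm 9.1: build the monic polynomial
% prod (y - alpha_i)^(l_i) by repeated convolution and return its
% coefficients g_1, ..., g_m (the leading coefficient 1 is dropped).
s = 1;
for i = 1:length(alpha)
    for k = 1:l(i)
        s = conv(s, [1, -alpha(i)]); % multiply by (y - alpha_i)
    end
end
g = s(2:end); % s = [1, g_1, ..., g_m]
end

For example, coefficient_operator([2 -1], [2 1]) returns [-3 0 4], the coefficients after the leading 1 of (y-2)^2 (y+1) = y^3 - 3y^2 + 4.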
The j-th column of J is given by the vector

[equation displayed as an image in the original]

and consider the polynomials q_j(y), j = 1, ..., r, of degree m-1, such that the coefficients of q_j(y) are formed from the entries of J_j. From (9.10),

[equation displayed as an image in the original]

This equation can be written as

q_j(y) = l_j (y - α_j)^(l_j - 1) Π_{i=1, i≠j}^{r} (y - α_i)^(l_i),

and this expression is used in the pseudo-code in Algorithm 9.2 for calculating the elements of J.
Algorithm 9.2 : Calculating J(a)

Input : The integers r and m, the roots α_i, i = 1, ..., r, and the multiplicity l_i of α_i

Output : The Jacobian matrix J(a)

Begin
u = [1]
for i = 1, 2, ..., r
    for l = 1, 2, ..., l_i - 1
        u = conv(u, [1, -α_i])
    end for
end for
for j = 1, 2, ..., r
    s = u
    for i = 1, 2, ..., r, i ≠ j
        s = conv(s, [1, -α_i])
    end for
    s = l_j s
    J(:, j) = s % s is equal to the jth column of J
end for
End
A still further algorithm is presented for refining the roots, in Algorithm 9.3, which is a combination of Algorithms 9.1 and 9.2 for the least squares solution a*_i, i = 1, ..., r, of (9.13).
Algorithm 9.3 : Computation of the roots a*_i, i = 1, ..., r

Input : The vector a_0 of the initial estimates of the least squares solution a* of (9.13), the multiplicity l_i of each distinct root α_i, i = 1, ..., r, the vector p of normalised coefficients, the integers r and m, and an error tolerance ε.

Output : The least squares solution a*.

Begin

1. Set k := 0.

2. (a) Calculate the vector G_l(a_k) using Algorithm 9.1, and the residual r_k = G_l(a_k) - p.
       % a_k is the kth iterate of the vector a.
   (b) Calculate the Jacobian matrix J = J(a_k) using Algorithm 9.2.
   (c) Calculate the Gauss-Newton step p_k = -(J^T J)^(-1) J^T r_k.
   (d) Calculate a_(k+1) = a_k + p_k.
   (e) Calculate G_l(a_(k+1)).
   (f) Calculate the error ‖G_l(a_(k+1)) - p‖.
   (g) If the error is smaller than ε
           GO TO Step 3 % local minimum attained
       else
           Set k := k + 1.
           Go to step 2(a).
       end if

3. Set a* := a_(k+1).

End
One skilled in the art appreciates that the above presents methods or embodiments of nonlinear least squares for the refinement of the solutions of the polynomial equations (2.4). The Jacobian matrix in the iterative formula is non-singular because the roots of these polynomial equations are simple. The Newton iteration and the Gauss-Newton iterations were considered, and it was shown that the Gauss-Newton iteration has superior numerical properties.
Appendices 2 to 14 contain MATLAB code, as is familiar to those skilled in the art, for implementing embodiments of the present invention. In particular, the embodiments demonstrate the ability to process the input signal, that is, inexact data, which is expressed in the form of polynomials, in particular, inexact polynomials, and to calculate the factors of those polynomials, from which the greatest common factors can be deduced and, thereby, the signal of interest, such as, for example, the video image, or transmitted data as indicated above. Still further, having determined the factors of the inexact polynomial, one skilled in the art appreciates that the recovered factors can be used to recover the originally formed or transmitted (uncorrupted) signal, as will be appreciated from the noise reductions noted below. The MATLAB code uses a function called samplepoly3, which is used to test the efficacy of embodiments of the present invention using different signals or inexact polynomials according to the parameter passed. Executing the MATLAB code produces numerous graphs, and the final output comprises two tables. The first table shows the results before a non-linear least squares algorithm is implemented and the second table shows the results after the least squares algorithm has been implemented. As one skilled in the art would expect, the results in the second table are better than the results in the first table.
One skilled in the art will appreciate that the fourth line of the function ex_rootsfinder.m, that is, the statement ec = 1e-8, sets the ratio of noise amplitude to signal amplitude. This value can be varied to assess the efficacy of embodiments of the present invention in the presence of noise. It can be appreciated, upon executing the code, that embodiments of the present invention still perform well even if the noise is three orders of magnitude greater than presently set. The noise to signal amplitude ratio is used to create corrupted signals, or inexact polynomials from exact polynomials. This is done in order to demonstrate the performance of embodiments of the present invention in the presence of noise and so that the noise reduction achieved by embodiments of the present invention can be assessed.
Returning to the signal or inexact polynomial under test, one skilled in the art will appreciate that samplepoly3(22) selects case 22 as the relevant polynomial, that is,

a = [  8.1031   2.0000
       3.5078   8.0000
      -0.6306   8.0000
      -5.8211   9.0000 ];

where the first column represents the roots of the polynomial and the second column represents the multiplicity of those roots. One skilled in the art will appreciate that the polynomial is, in fact, an exact polynomial. However, noise is added to the coefficients of the polynomial according to the set noise figure.
Executing the code produces the following output:
Enter the source of the polynomial
1 -- from the database: 2 -- a random polynomial: 1
Input the number of the polynomial, between 1 and 56: 22

exact root          multiplicity
8.10310000e+000     2
3.50780000e+000     8
-6.30600000e-001    8
-5.82110000e+000    9

Optimization terminated.
AGCD computation number
1
Optimization terminated.
Number of iterations required in LSE problem:
22

AGCD computation number
2
Optimization terminated.
Optimization terminated.
Number of iterations required in LSE problem:
8

AGCD computation number
3
Optimization terminated.
Optimization terminated.
Number of iterations required in LSE problem:
40

AGCD computation number
4
Optimization terminated.
Optimization terminated.
Number of iterations required in LSE problem:
12

AGCD computation number
5
Optimization terminated.
Optimization terminated.
Number of iterations required in LSE problem:
14

AGCD computation number
6
Optimization terminated.
Optimization terminated.
Number of iterations required in LSE problem:
17

AGCD computation number
7
Optimization terminated.
Optimization terminated.
Number of iterations required in LSE problem:
51

AGCD computation number
8
Optimization terminated.
Optimization terminated.
Number of iterations required in LSE problem:
51

Optimization terminated.

multiplicity    exact root          computed root       error
2               8.10310000e+000     8.10310050e+000     6.23145607e-008
8              -6.30600000e-001    -6.30600003e-001     4.79636083e-009
8               3.50780000e+000     3.50779998e+000     6.94397020e-009
9              -5.82110000e+000    -5.82109994e+000     1.01404827e-008

One skilled in the art can appreciate from the above that the noise is substantially removed from the signals or inexact polynomials and that the multiple roots are preserved, unlike the prior art in which multiple roots usually separate into simple roots in the presence of noise. Embodiments of the present invention take a corrupted signal and recover the original signal by removing the noise.
Suitably, embodiments of the present invention implement digital signal processing and, more particularly, digital filtering.
Executing the function ex_rootsfinder3.m using MATLAB produces the following output.
Initially, the user is requested to indicate which polynomial they want to use as a test, namely, one selected from the database provided in the function "samplepoly.m" or a randomly generated polynomial. In the example below, the first polynomial from the database was selected.

It can be appreciated that it has 10 roots, three of which are multiple roots. The exact polynomial is corrupted with noise to create an inexact polynomial or noisy signal, and then embodiments of the present invention process the noisy signal with a view to recovering the original signal or polynomial.
The final table in the output shows that the correct number of roots was identified, as were the multiplicities, and that the noise component, that is, the error, was reduced from the initial value of between 10^-9 and 10^-6 added to each component to the error values listed for embodiments of the present invention that use approximate polynomial factorisation, as can be appreciated from Appendix 2, lines 20 and 21. In essence, the noisy signal, represented by the inexact polynomial, has been filtered to remove or at least reduce the noise.
Example 1 given below is an example of the output of an embodiment of the present invention that uses approximate polynomial factorisation in filtering the signal or polynomial.
Example 1
Enter the source of the polynomial
1 -- from the database: 2 -- a random polynomial: 1
Input the number of the polynomial, between 1 and 56: 1

exact root          multiplicity
-1.66400000e+000    3
3.13720000e+000     1
2.55950000e+000     1
-4.16030000e+000    2
-1.36700000e+000    1
-9.69030000e+000    2

Optimization terminated.
AGCD computation number
1
Optimization terminated.
Number of iterations required in LSE problem:
51

AGCD computation number
2
Optimization terminated.
Optimization terminated.
Number of iterations required in LSE problem:
6

Optimization terminated.

multiplicity    exact root          computed root       error
1              -1.36700000e+000    -1.36699985e+000     1.08606175e-007
1               2.55950000e+000     2.55950006e+000     2.41102277e-008
1               3.13720000e+000     3.13719987e+000     4.05411217e-008
2              -9.69030000e+000    -9.69030092e+000     9.48362129e-008
2              -4.16030000e+000    -4.16029948e+000     1.24599065e-007
3              -1.66400000e+000    -1.66400010e+000     6.26660881e-008
End of Example 1
Figure 5 illustrates an embodiment of the present invention within the context of using the greatest common divisor to recover a signal 500. A transmitted signal, Tx (shown schematically in the form of the data it represents for simplicity and understanding), output by a transmitter is received by a receiver as a received signal, Rx. It can be appreciated that the received signal is noisy or corrupted relative to the transmitted signal. This is due to at least one of noise and the communication channel, or, more accurately, the transfer function of the communications channel. The received signal, Rx, has been divided into two portions 502 and 504 spanning a relatively short time period over which it is acceptable to consider the channel conditions, or the transfer function, to be constant.
Let the first portion of the received signal be represented by

r_1(x) = h(x) ⊗ t_1(x),

where h(x) is the transfer function of the communication channel, t_1(x) is the portion of the transmitted signal corresponding to the first portion 502 and ⊗ is the convolution operator.

Let the second portion of the received signal be represented by

r_2(x) = h(x) ⊗ t_2(x),

where t_2(x) is the portion of the transmitted signal corresponding to the second portion 504.
Mapping the signals to the frequency domain or z-domain gives

R_1(z) = H(z)T_1(z) and R_2(z) = H(z)T_2(z),

from which the greatest common factor can be determined. It can be appreciated that the greatest common factor is H(z). Once H(z) has been determined, both

T_1(z) = R_1(z)/H(z) and T_2(z) = R_2(z)/H(z)

can be determined. Calculating the inverse transforms would yield the transmitted signal. One skilled in the art will appreciate that the polynomials representing the received signal portions will be subject to noise and, therefore, be inexact polynomials.
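The following MATLAB sketch illustrates these relations with exact (noise-free) example data; the vectors h, t1 and t2 are illustrative and do not come from the original document.

% Illustrative sketch of R_i(z) = H(z)T_i(z): convolution in the signal
% domain corresponds to polynomial multiplication in the z-domain.
h  = [1 -0.5];          % channel impulse response H(z)
t1 = [1  2  1];         % first transmitted portion T1(z)
t2 = [1  0 -1];         % second transmitted portion T2(z)
r1 = conv(h, t1);       % received portion R1(z) = H(z)T1(z)
r2 = conv(h, t2);       % received portion R2(z) = H(z)T2(z)
% With H(z) known, for example computed as the greatest common factor
% of R1(z) and R2(z), the transmitted portions are recovered exactly:
t1_recovered = deconv(r1, h);
t2_recovered = deconv(r2, h);

In the noisy case, r1 and r2 would be inexact polynomials, and the division would instead be performed with the structured methods described above.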
Alternatively, or additionally, a curve could be fitted to data samples representing the received signal 500. Such a curve is then considered to be an inexact polynomial that could be processed as described above to recover the originally transmitted signal, either free from noise or with substantially reduced noise.
Referring to figure 6, there is shown an apparatus 600 according to an embodiment of the present invention. The apparatus 600 comprises an input interface 602 for receiving data 604 representing a first signal. The data 604 is stored in a memory 606. The memory 606 stores code 608 arranged, when executed by a processor 610 having a floating point arithmetic unit 612, to implement the above mathematics or signal processing and thereby produce second data 614 representing a second signal recovered or derived from the first signal. The second data 614 is output via an output interface 616.
Furthermore, many applications of embodiments of the present invention exist in science and engineering, including computer vision, such as, for example,

O. Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. The MIT Press, Cambridge, MA, 1993; and

S. Petitjean. Algebraic geometry and computer vision: Polynomial systems, real and complex roots. Journal of Mathematical Imaging and Vision, 10:191-220, 1999,

computer graphics, such as

J. T. Kajiya. Ray tracing parametric patches. Computer Graphics, 16:245-254, 1982,

geometric modelling, such as

T. Sederberg and G. Chang. Best linear common divisors for approximate degree reduction. Computer Aided Design, 25:163-168, 1993,

and control theory, such as

P. Stoica and T. Soderstrom. Common factor detection and estimation. Automatica, 33(5):985-989, 1997,

all of which require the computation of the greatest common divisor (GCD) of two polynomials. In all cases, the signal of interest is processed as a polynomial. Embodiments of the present invention could be used in all areas expressed in this paragraph and more.
Although embodiments of the present invention have been described within the context of determining greatest common factors of polynomials representing signals or data such as, for example, video data or audio data, embodiments are not limited thereto. Embodiments can be realised in which the polynomials represent 3-dimensional objects being designed in a CAD system or the like. The polynomial or signal of interest can represent the curve of the intersection between two bodies, which is usually a very high order polynomial, such as a polynomial of degree O(64). One skilled in the art will appreciate that floating point environments operate within a constrained environment, which is the precision with which numbers can be represented. Clearly such precision is limited by the word length used by the computer system. The limited word length operates, in effect, as quantisation noise, which causes an exact polynomial to become a noisy polynomial.
One skilled in the art will appreciate that polynomial equations arise frequently in geometric computations within the context of several problems including:
Calculating the points of intersection of curves and/or surfaces;
Ray-tracing curves and surfaces;
Inverse kinematics and parallel mechanisms;
Calculating a point on the bisector between two curves;
Calculating a point equidistant from three curves and
Solving geometric constraint systems.
One skilled in the art will be familiar with publications by D.A. Bini and A. Marco, "Computing curve intersections by means of simultaneous iterations", Numer. Algor., 43:151-175, 2006, and R.P. Markot and R.L. Magedson, "Solutions of tangential surface and curve intersections", Computer Aided Design, 21:421-427, 1989, which are incorporated herein by reference.
Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of them mean "including but not limited to", and they are not intended to (and do not) exclude other moieties, additives, components, integers or steps. Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
It will be appreciated that embodiments of the present invention can be realised in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, device or integrated circuits or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape or the like. It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs comprising instructions that, when executed, implement embodiments of the present invention. Accordingly, embodiments provide machine executable code for implementing a system, device or method as described herein or as claimed herein and machine readable storage storing such a code. Still further, such programs may be conveyed electronically via any medium such as a communication signal carried over a wired or wireless connection and embodiments suitably encompass the same.
Features, integers, characteristics, compounds, chemical moieties or groups described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.
The reader's attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.
Appendix 2 : ex_rootsfinder3.m
% This program computes the roots of a polynomial, using an algorithm
% developed by Gauss.
% The algorithm is designed to compute multiple roots of a polynomial,
% and the multiplicities of the roots are computed by the approximate
% factorisation of the polynomial and its derivative.
% Xin Lao and Joab Winkler, July 2010.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear all;
warning 'off'
% Set the noise level, between 1e-9 and 1e-6, using a random number.
rand('seed', 23);
ec = 10^(-6-round(3*rand));
%%%%%%%%%%%%%%%%%%%%%%%
% This method of setting random numbers should be used
% for Matlab 7 and later.
% stream = RandStream('mcg16807', 'Seed', 23);
% rand(stream, 1);
%%%%%%%%%%%%%%%%%%%%%%%
disp(' ');
disp(' ');
disp(' ');
disp('Enter the source of the polynomial');
disp(' ');
polysource = input('1 -- from the database: 2 -- a random polynomial: ');
case 1 % Generate a polynomial from the database
display ( ' ' ) ;
exnum=input (' Input the number of the polynomial, between 1 and 56:
' ) ;
display ( ' ' ) ;
a=samplepoly3 (exnum) ;
[rs,cs] = size (a) ;
no_root = rs; % the number of distinct roots of the polynomial case 2 % Generate a random polynomial
display ( ' ' ) ;
no_root=round ( l+6*rand) ; % The number of roots
a ( : , 1 ) =10 * ( 2 *rand (no_root , 1 ) -ones (no_root , 1 ) ) ; % The values of the roots
a ( : , 2 ) =l+round ( 10*rand (no_root, 1 ) ) ; % The multiplicities of roots end % switch
% Write the roots and the multiplicities to the screen.
disp(' ');
disp('   exact root      multiplicity');
disp('   ');
for i = 1:1:rs
    fprintf('%16.8e %10.0f\n', a(i,:));
end
disp('---');
disp(' ');
% Sort the roots in ascending order of their multiplicities. If two
% or more roots have the same multiplicity, arrange them in ascending
% value of the roots.
a = sortrows(a, [2 1]);
% Form the vector fx of the coefficients of the polynomial.
fx = poly(root_in2(a));
m1 = length(fx);
m = m1-1; % the degree of fx
% Add random noise to the exact polynomial fx.
rand('seed', 12); % a seed for the noise added to the polynomials
rf = 2*rand(1,m1)-ones(1,m1);
origfx_n = fx+fx.*rf*ec; % The noisy polynomial
% Differentiate the noisy polynomial and denote the derivative by gx.
gx_n = polyder(origfx_n);
% Normalise fx and gx by the geometric means of their coefficients.
fx_n = geomecoeff(origfx_n);
gx_n = geomecoeff(gx_n);
% Initialise some arrays that are required for the AGCD computations.
degree = zeros(0);
degree2 = zeros(0);
degree3 = zeros(0);
alpha_up = zeros(0);
theta_up = zeros(0);
% Initialise the figure numbers when the manual rank option is used.
fignum_manrank = 0;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Define the variables start and manual:
% start = 'yes' : Start the root solving procedure
% start = 'no' : Stop the root solving procedure
% manual = 'yes' : The rank is determined manually
% manual = 'no' : The rank is determined automatically
start = 'yes';
manual = 'no';
% Compare the strings 'start' and 'yes', and return logical 1 (true)
% if they are identical, and return logical 0 (false) otherwise.
while strcmp(start, 'yes')
    degree(1) = m; % the degree of fx_n
    % Determine degree(2), the degree of an AGCD of fx_n and its
    % derivative, manually or automatically.
    if strcmp(manual, 'yes') % manual determination
        % Update the figure numbers when manual = yes.
        fignum_manrank = fignum_manrank+1;
        [degree(2), stoploop] = manualRank(origfx_n, fignum_manrank);
    else % automatic determination
        [degree(2), stoploop] = autoRank(origfx_n);
    end
    % Write to the screen that the first AGCD computation (between
    % fx_n and its derivative) is being performed.
    disp(' ');
    disp('AGCD computation number');
    agcdcn = 1; % this is the first AGCD computation
    fprintf('%10.0f\n', agcdcn);
    % Calculate the AGCD of fx_n and its derivative, using the optimal
    % column calculated by the residual of an approximate equation. The
    % AGCD is stored in com_dw.
    [alpha_up(1), theta_up(1), com_fw, com_gw, com_dw, com_dw2] = ...
        SNTLNinCoefficentofAGCD(fx_n, gx_n, degree(2), 2);
    % The last argument (2) in the inputs can be changed to 1 if the
    % first principal angle is used for the determination of the
    % optimal column of the Sylvester matrix.
    % Initialise the cell poly_f, where poly_f{1} stores the corrected
    % form of the given polynomial in the y variable, and
    % poly_f{i}, i=2,3,... store the polynomials, in the y variable,
    % from the AGCD computations.
    % Note: Each polynomial poly_f{i} is stored as a vector, but the
    % lengths of these vectors differ.
    poly_f = zeros(0);
    % Note that com_fw is the same polynomial as poly_f{1},
    % but in the w variable.
    y = m:-1:0;
    poly_f{1} = com_fw./(theta_up(1).^y);
    % Initialise the cell hx that stores the polynomials hx{i}, where
    % hx{1} = com_fw/q_{1} and hx{i} = q_{i-1}/q_{i}, i=2,3,...
    % The polynomials q_{i}(x) store the AGCDs in the first stage
    % of Gauss' algorithm.
    % Also, initialise the polynomials wx, where wx_{i} = hx_{i}/hx_{i+1}.
    hx = zeros(0);
    wx = zeros(0);
    % Calculate the polynomial division com_fw/com_dw, where com_dw is
    % computed in SNTLNinCoefficentofAGCD. Both polynomials are
    % expressed in the variable w. Use a simple least squares solution.
    Qx = cauchy(com_dw, m-degree(2)); % the coefficient matrix
    [u1, s1, v1] = svd(Qx);
    for i = 1:1:min(size(s1))
        s1(i,i) = 1/s1(i,i);
    end
    hx{1} = v1*s1'*u1'*com_fw'; % the polynomial com_fw/com_dw
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Perform the other AGCD computations. A simple least squares
    % solution is used to compute the polynomials wx_{i} and hx_{i}.
    % Another method is used later on.
    ite = 1; % initialise the counter for these computations
    % Compare the strings stoploop and 'no', and return logical 1 (true)
    % if they are identical, and logical 0 (false) otherwise.
    while strcmp(stoploop, 'no')
        ite = ite+1; % increment the counter for AGCD computations
        disp(' ');
        disp('AGCD computation number');
        fprintf('%10.0f\n', ite);
        % Determine if the degree of the GCD of com_dw and its
        % derivative is to be computed manually or automatically.
        % degree(ite+1) is the degree of the AGCD.
        if strcmp(manual, 'yes') % manual determination
            % Update the figure numbers when manual = yes.
            fignum_manrank = fignum_manrank+1;
            [degree(ite+1), stoploop] = manualRank(com_dw, fignum_manrank);
        else % automatic determination
            [degree(ite+1), stoploop] = autoRank(com_dw);
        end
        % The degrees of the polynomials hx{i} must form a strictly
        % decreasing sequence. The following condition must therefore
        % be satisfied:
        %   degree(ite-1)-degree(ite) >= degree(ite)-degree(ite+1)
        % If this condition is not satisfied, return control to the
        % user. See Xin's code [page 2 of ex_rootsfinder3.m]
        % for more details.
        % Terminate the loop if com_dw and its derivative are coprime
        % because SNTLNinCoefficentofAGCD cannot be called.
        if degree(ite+1) == 0
            ite = ite-1; % Reset the number of AGCD computations
            break;
        end
        % Calculate the derivative of com_dw, and then normalise it and
        % its derivative by the geometric means of their coefficients.
        deri_dw = polyder(com_dw);
        com_dw = geomecoeff(com_dw);
        deri_dw = geomecoeff(deri_dw); % the normalised derivative
        % Calculate the AGCD of com_dw and deri_dw. Use the residual
        % (Method 2) to calculate the optimal column. Change the last
        % argument to 1 if the first principal angle is to be used.
        % com_dw is the AGCD of com_dw and deri_dw.
        [alpha_up(ite), theta_up(ite), com_fw, com_gw, com_dw, com_dw2] = ...
            SNTLNinCoefficentofAGCD(com_dw, deri_dw, degree(ite+1), 2);
        % com_fw and com_gw are the corrected forms of the polynomials
        % com_dw and deri_dw respectively. All of them are expressed
        % in the w variable.
        % Calculate the corrected polynomial com_fw when it is
        % expressed in the y variable.
        y = degree(ite):-1:0;
        poly_f{ite} = com_fw./(prod(theta_up).^y);
        % Perform the division hx{i} = q_{i-1}/q_{i}. The polynomials
        % q_{i} and q_{i-1} are expressed in the w variable. Use the
        % least squares method.
        Qx2 = cauchy(com_dw, degree(ite)-degree(ite+1));
        [u1, s1, v1] = svd(Qx2);
        for i = 1:1:min(size(s1))
            s1(i,i) = 1/s1(i,i);
        end
        hx{ite} = v1*s1'*u1'*com_fw'; % the polynomial h_{i}
        % Perform the polynomial division wx{i} = hx{i}/hx{i+1}. Note
        % that hx{i} and hx{i+1} have been calculated using different
        % values of theta. Conversion to the scale of hx{ite-1} is used.
        % Use the least squares solution.
        y2 = (length(hx{ite})-1):-1:0;
        Hx = cauchy(hx{ite}./theta_up(ite).^y2', length(hx{ite-1}) ...
            -length(hx{ite}));
        [u1, s1, v1] = svd(Hx);
        for i = 1:1:min(size(s1))
            s1(i,i) = 1/s1(i,i);
        end
        % The polynomial wx{i} in the scale of hx{ite-1}. Each polynomial
        % is therefore expressed in a different scale (value of theta).
        wx{ite-1} = v1*s1'*u1'*hx{ite-1};
    end % while loop strcmp
    % The calculation of the polynomials hx{i} and wx{i} must be
    % repeated in a slightly different form if com_dw and its derivative
    % are coprime. com_dw is calculated in SNTLNinCoefficentofAGCD.
    % Calculate com_dw in the y scale.
    y = degree(ite+1):-1:0;
    poly_f{ite+1} = com_dw./(prod(theta_up).^y);
    % The last division h_{i} = q_{i-1}/q_{i} is equal to q_{i-1}
    % because q_{i} is constant.
    hx{ite+1} = com_dw;
    Hx = cauchy(hx{ite+1}, length(hx{ite})-length(hx{ite+1}));
    [u1, s1, v1] = svd(Hx);
    for i = 1:1:min(size(s1))
        s1(i,i) = 1/s1(i,i);
    end
    % Calculate the polynomials wx for the last two AGCD computations.
    wx{ite} = v1*s1'*u1'*hx{ite};
    wx{ite+1} = hx{ite+1};
    % Determine the multiplicities of the roots.
    % The array degree holds the degrees of the polynomials q_{i}.
    degree(ite+2) = 0; % the last AGCD is constant (degree = 0)
    % The array degree2 holds the degrees of the polynomials hx{i},
    % where hx{1} = fx_n/q_{1} and h_{i} = q_{i-1}/q_{i}, i=2,3,...
    degree2 = degree(1:end-1)-degree(2:end);
    % The array degree3 holds the degrees of the polynomials
    % wx{i} = hx{i}/hx{i+1}.
    degree3 = degree2(1:end-1)-degree2(2:end);
    degree3(end+1) = degree2(end); % from wx{ite+1} = hx{ite+1}
    % The multiplicities of the roots are the non-zero entries of degree3.
    multi = find(degree3);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Compute the values of the roots.
    % The theta_up on the right is a vector of the optimal values of
    % theta. The vector theta_up on the left is obtained by appending
    % one to the end of the vector on the right. The 1 is appended
    % because scaling is not applied, that is, theta = 1, in the last
    % AGCD calculation.
    theta_up = [theta_up, 1];
    % The array b has two columns. b(i,1) stores the value of the ith
    % distinct root and b(i,2) stores its multiplicity. The number of
    % rows is equal to the number of distinct roots, which is equal
    % to the sum of the elements of degree3.
    b = zeros(sum(degree3), 2);
    i = 1; % initialise this counter for later
    % length(multi) is the number of distinct roots
    for k = 1:1:length(multi)
        g = multi(k); % Note: some roots may have the same multiplicity
        % Calculate the roots of wx{g} when wx is a polynomial in y.
        % Note that different polynomials wx{g} have different
        % values of theta associated with them.
        ro = roots(wx{g})'.*prod(theta_up(1:g));
% If there is more than one root with the same multiplicity, % sort the roots in ascending order,
if length(ro)>l
ro=sort (ro) ;
for j=l : 1 : length (ro)
b (i, 1 ) =ro ( j ) ;
b ( i , 2 ) =g ;
i=i+l ;
end
else % there is only one root ( length (ro ) =1 )
b(i, l)=ro;
b ( i , 2 ) =g ;
i=i+l ;
end % if length (ro)
end % k loop
% Improve the values of the roots using the method of non-linear % least squares, b is the matrix specified above and fx_n is the % given noisy polynomial, d is the vector of the improved roots. d=nonlinearLS (b, fx_n) ;
% Calculate the residual of the noisy polynomial at the improved root % estimates stored in d. Note that d only stores the distinct roots % and b(:,2) is the vector of multiplicities of the roots.
val_f=sum ( abs (polyval ( origfx_n, d(:,l))).*b(:,2))/norm ( origfx_n ) ;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Now perform the deconvolutions another way.
% (1) Use simple least squares deconvolution to calculate the
%     polynomials hx_{i}. This is the same as the method above.
% (2) Use structured matrix methods to calculate the polynomials
%     w_{i}.
% Initialise the polynomials h_{i} as the empty cell.
poly_h=zeros(0);
% poly_f is a cell that contains the corrected form of the given
% polynomial, and the AGCD polynomials, expressed in the y variable.
lf=length(poly_f); % the number of polynomials stored in the cell
% Calculate the polynomials h_{i} and store them in poly_h. Use
% the least squares solution.
for k=1:1:lf-1
    Q=cauchy(poly_f{k+1},length(poly_f{k})-length(poly_f{k+1}));
    [u1,s1,v1]=svd(Q);
    for i=1:1:min(size(s1))
        s1(i,i)=1/s1(i,i);
    end
    poly_h{k}=(v1*s1'*u1'*poly_f{k}')';
end
poly_h{lf}=poly_f{lf}; % the last polynomial h_{i}
% Normalise the polynomials poly_h by the geometric means of their
% coefficients.
for k=1:1:lf
    poly_h{k}=geomecoeff(poly_h{k});
end
% The polynomials w_{i} are computed by structured matrix methods.
% Apply the transformation y=(theta)(w) and use linear programming
% to calculate the optimal value of theta.
theta=optimal_multipoly(poly_h);
% Transform the polynomials h_{i} from y to w using theta.
for k=1:1:lf
    deg_h=length(poly_h{k})-1;
    w=deg_h:-1:0;
    poly_h{k}=poly_h{k}.*theta.^w;
end
% Compute the polynomials w_{i} using a linear structure preserving
% matrix method. poly_w and poly_h are in the w variable.
poly_w=deconvinLSM(poly_h);
% The array b2 has two columns. b2(i,1) stores the value of the ith
% distinct root and b2(i,2) stores its multiplicity. The number of
% rows is equal to the number of distinct roots, which is equal
% to the sum of the elements of degree3.
b2=zeros(sum(degree3),2);
i=1; % initialise this counter for later
% length(multi) is the number of distinct roots
for k=1:1:length(multi)
    g=multi(k); % Note: some roots may have the same multiplicity
    % Compute the roots of the polynomials wg and transform
    % to the y variable by multiplying by theta, where theta
    % is computed by optimal_multipoly.
    ro2=roots(poly_w{g}')*theta;
    % If there is more than one root with the same multiplicity,
    % sort the roots in ascending order.
    if length(ro2)>1
        ro2=sort(ro2);
        for j=1:1:length(ro2)
            b2(i,1)=ro2(j);
            b2(i,2)=g;
            i=i+1;
        end
    else % there is only one root with multiplicity g
        b2(i,1)=ro2;
        b2(i,2)=g;
        i=i+1;
    end % if
end % for
% Improve the values of the roots using the method of non-linear
% least squares. b2 is the matrix specified above and fx_n is the
% given noisy polynomial. d2 is the vector of the improved roots.
d2=nonlinearLS(b2,fx_n); % fx_n is the original noisy polynomial
% Calculate the residual of the noisy polynomial at the improved root
% estimates stored in d2. Note that d2 only stores the distinct roots
% and b2(:,2) is the vector of multiplicities of the roots.
val_f2=sum(abs(polyval(origfx_n,d2(:,1))).*b2(:,2))/norm(origfx_n);
% Calculate the residual of the noisy polynomial f(x) with
% respect to the theoretically exact roots.
val_f3=sum(abs(polyval(origfx_n,a(:,1))).*a(:,2))/norm(origfx_n);
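% Note (not part of the original listing): each residual above is the
% weighted sum of the values of the given noisy polynomial at the
% estimated distinct roots,
%     val = (sum over i of |f(x_i)|*m_i) / norm(f),
% where x_i is the ith distinct root, m_i its multiplicity and f the
% coefficient vector origfx_n. The smaller value indicates the set of
% roots that better reproduces the given polynomial, and this is the
% comparison made below between val_f and val_f2.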
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Two solutions have been computed. The residuals associated with
% these solutions are stored in the vectors val_f and val_f2.
% Choose the solution that yields the smallest residual.
if val_f < val_f2
    % Compare the multiplicities of the exact roots and the
    % computed roots. First check the number of distinct roots.
    if length(a(:,2))==length(b(:,2))
        % The number of distinct roots is the same. Compare the
        % multiplicities of the exact roots and the computed roots.
        if ~any(a(:,2)-b(:,2))
            % The multiplicities of the roots in a and b are the
            % same. Compute the relative error of the computed roots.
            c=abs((a(:,1)-d)./a(:,1));
            % Print the results.
            disp(' ');
            disp(' ');
            lines_1 = ' ';
            lines_2 = ' ';
            alllines = [lines_1,lines_2];
            disp(alllines);
            disp(alllines);
            disp(' ');
            titlestg_1 = 'multiplicity ';
            titlestg_2 = 'exact root ';
            titlestg_3 = 'computed root ';
            titlestg_4 = 'error';
            alltitle = [titlestg_1,titlestg_2,titlestg_3,titlestg_4];
            disp(alltitle);
            disp(alllines);
            disp(' ');
            for i=1:1:rs
                fprintf(' %7.0f %22.8e %18.8e %18.8e\n', ...
                    a(i,2),a(i,1),d(i),c(i));
            end
        else
            % The number of distinct computed roots is correct, but the
            % multiplicities of the computed and exact roots differ.
            % Loop over each distinct root.
            for k=1:1:length(b(:,2))
                if k==1
                    poly=struct('multiplicity',b(k,2),'root',d(k,1));
                else
                    poly(k).multiplicity=b(k,2);
                    poly(k).root=d(k,1);
                end
            end % k loop
            % Print the results.
            disp(' ');
            disp(' ');
            lines = ' ';
            disp(lines);
            disp(lines);
            disp(' ');
            display(' ');
            display(' computed root multiplicity');
            display(' ');
            for i=1:1:rs
                fprintf('% 16.8e %12.0f\n', ...
                    poly(i).root,poly(i).multiplicity);
            end
            display(' ');
            display(' ');
            disp(' Residual of given polynomial using the computed roots');
            disp(val_f);
            disp(' Residual of given polynomial using the exact roots');
            disp(val_f3);
        end % ~any
    else % the number of computed distinct roots is not equal
         % to the number of exact distinct roots
        % Loop over each computed distinct root.
        for k=1:1:length(b(:,2))
            if k==1
                poly=struct('multiplicity',b(k,2),'root',d(k,1));
            else
                poly(k).multiplicity=b(k,2);
                poly(k).root=d(k,1);
            end
        end % k loop
        % Print the results.
        disp(' ');
        disp(' ');
        lines = ' ';
        disp(lines);
        disp(lines);
        disp(' ');
        display(' ');
        display(' computed root multiplicity');
        display(' ');
        for i=1:1:rs
            fprintf('% 16.8e %12.0f\n', ...
                poly(i).root,poly(i).multiplicity);
        end
        display(' ');
        display(' ');
        disp(' Residual of given polynomial using the computed roots');
        disp(val_f);
        disp(' Residual of given polynomial using the exact roots');
        disp(val_f3);
    end % if length(a(:,2))==length(b(:,2))
else % come here if val_f >= val_f2
    % Compare the multiplicities of the exact roots and the
    % computed roots. First check the number of distinct roots.
    if length(a(:,2))==length(b2(:,2))
        % The number of distinct roots is the same. Compare the
        % multiplicities of the exact roots and the computed roots.
        if ~any(a(:,2)-b2(:,2)) % find the exact solution
            % The multiplicities of the roots are the same.
            % Compute the relative error of the computed roots.
            c2=abs((a(:,1)-d2)./a(:,1));
            % Print the results.
            disp(' ');
            disp(' ');
            lines_1 = ' ';
            lines_2 = ' ';
            alllines = [lines_1,lines_2];
            disp(alllines);
            disp(alllines);
            disp(' ');
            titlestg_1 = 'multiplicity ';
            titlestg_2 = 'exact root ';
            titlestg_3 = 'computed root ';
            titlestg_4 = 'error';
            alltitle = [titlestg_1,titlestg_2,titlestg_3,titlestg_4];
            disp(alltitle);
            disp(alllines);
            disp(' ');
            for i=1:1:rs
                fprintf(' %7.0f %22.8e %18.8e %18.8e\n', ...
                    a(i,2),a(i,1),d2(i),c2(i));
            end
        else
            % The number of distinct computed roots is correct, but the
            % multiplicities of the computed and exact roots differ.
            % Loop over each computed distinct root.
            for k=1:1:length(b2(:,2))
                if k==1
                    poly=struct('multiplicity',b2(k,2),'root',d2(k,1));
                else
                    poly(k).multiplicity=b2(k,2);
                    poly(k).root=d2(k,1);
                end
            end % k loop
            % Print the results.
            disp(' ');
            disp(' ');
            lines = ' ';
            disp(lines);
            disp(lines);
            disp(' ');
            display(' ');
            display(' computed root multiplicity');
            display(' ');
            for i=1:1:rs
                fprintf('% 16.8e %12.0f\n', ...
                    poly(i).root,poly(i).multiplicity);
            end
            display(' ');
            display(' ');
            disp(' Residual of given polynomial using the computed roots');
            disp(val_f2);
            disp(' Residual of given polynomial using the exact roots');
            disp(val_f3);
        end % if ~any
    else % the number of computed distinct roots is not equal
         % to the number of exact distinct roots
        % Loop over each computed distinct root.
        for k=1:1:length(b2(:,2))
            if k==1
                poly=struct('multiplicity',b2(k,2),'root',d2(k,1));
            else
                poly(k).multiplicity=b2(k,2);
                poly(k).root=d2(k,1);
            end
        end % k loop
        % Print the results.
        disp(' ');
        disp(' ');
        lines = ' ';
        disp(lines);
        disp(lines);
        disp(' ');
        display(' ');
        display(' computed root multiplicity');
        display(' ');
        for i=1:1:rs
            fprintf('% 16.8e %12.0f\n', ...
                poly(i).root,poly(i).multiplicity);
        end
        display(' ');
        display(' ');
        disp(' Residual of given polynomial using the computed roots');
        disp(val_f2);
        disp(' Residual of given polynomial using the exact roots');
        disp(val_f3);
    end % if length(a(:,2))==length(b2(:,2))
end % if val_f < val_f2
disp(' ');
start=input(' Do you want to try it again manually (yes/no) ');
disp(' ');
if strcmp(start,'yes')
    manual='yes';
    % Clear the following variables because they are
    % growing inside a loop.
    clear degree degree2 degree3 multi
    clear poly_f poly_h hx wx theta_up alpha_up
end % if
end
Appendix 3 : samplepoly3.m
function [a] = samplepoly3(k)
% This function contains a database of polynomials.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% a is a matrix with two columns, where the first column contains the
% value of each root and the second column defines its multiplicity.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
switch k
case 1
a =[
-1.6640e+000 3.0000e+000
3.1372e+000 1.0000e+000
2.5595e+000 1.0000e+000
-4.1603e+000 2.0000e+000
-1.3670e+000 1.0000e+000
-9.6903e+000 2.0000e+000];
case 2
a =[
7.9282e+000 OOOOe+000
-3.5539e-001 OOOOe+000
-9.7181e+000 OOOOe+000
2.4576e+000 OOOOe+000
-5.3781e+000 OOOOe+000];
case 3
a =[
1.8839e+000 6.0000e+000
-4.8242e-001 4.0000e+000
-2.6338e+000 2.0000e+000
3.1122e+000 2.0000e+000];
no_root=4;
case 4
a =[
2.6911e+000 3.0000e+000
-7.7785e+000 4.0000e+000
2.1469e+000 3.0000e+000
7.7952e+000 2.0000e+000
-1.8629e-001 2.0000e+000
-3.7298e+000 2.0000e+000];
case 5 % Insensitive to perturbation
a =[
-5.8213e+000 2.0000e+000
4.1856e+000 2.0000e+000
-5.2754e+000 2.0000e+000
-7.6121e+000 3.0000e+000
2.1461e+000 3.0000e+000
-9.9725e-001 2.0000e+000
-8.2549e-001 2.0000e+000
3.2389e+000 2.0000e+000
5.4057e+000 2.0000e+000];
case 6
a =[
-6.5180e-001 2.0000e+000
3.1339e+000 5.0000e+000
-4.1963e+000 5.0000e+000
5.0907e+000 5.0000e+000
1.1624e+000 3.0000e+000
-1.4441e+000 6.0000e+000];
case 7
a =[
-2.1309e+000 2.0000e+000
3.4286e+000 1.0000e+000
4.8252e+000 5.0000e+000
4.0105e-001 2.0000e+000
-3.0457e+000 3.0000e+000
-7.0001e+000 4.0000e+000
1.7218e+000 3.0000e+000];
case 8
a =[
-2.0640e+000 6.0000e+000
-8.5201e+000 3.0000e+000
3.6819e+000 4.0000e+000
-1.9522e+000 2.0000e+000];
case 9
a =[
-2.7873e+000 3.0000e+000
5.1302e+000 5.0000e+000
-1.7220e+000 5.0000e+000
-1.5310e-001 6.0000e+000
3.8949e+000 1.0000e+000
9.4547e+000 3.0000e+000];
case 10
a =[
6.6673e+000 OOOOe+000
-1.9274e+000 OOOOe+000
-2.1965e+000 OOOOe+000
-2.7910e+000 OOOOe+000
-7.1949e+000 OOOOe+000];
case 11
a =[
-9.3480e+000 2.0000e+000
1.2240e+000 3.0000e+000
7.6373e+000 3.0000e+000
3.3835e+000 6.0000e+000];
case 12
a =[
-3.6385e+000 2.0000e+000
-7.6157e+000 2.0000e+000
8.7966e+000 2.0000e+000
2.9110e+000 2.0000e+000
-4.1074e-001 2.0000e+000];
case 13
a =[
-7.257287870897766e-001 1.000000000000000e+000
1.796349792094862e+000 4.000000000000000e+000
-6.256558589203227e+000 5.000000000000000e+000
2.226602932759030e+000 3.000000000000000e+000];
case 14
a =[
8.1273e+000 2.0000e+000
2.5785e+000 6.0000e+000
-7.9693e+000 5.0000e+000
-2.1829e+000 1.1000e+001];
case 15
a =[
3.9253 6.0000
-8.1236 9.0000
0.5081 5.0000];
case 16
a =[
2.1573 6.0000
4.8251 5.0000
-7.9037 9.0000
-7.4422 8.0000];
case 17
a =[
7.0453 1.0000
0.1127 2.0000
2.7132 3.0000
9.0179 4.0000
-1.1207 5.0000
-8.7996 6.0000];
case 18 % works in the initial estimate but not in non-linear
a =[
-1.3708 1.0000
-3.2431 2.0000
4.4145 3.0000
-9.7269 4.0000
-2.5188 5.0000
8.4537 6.0000
0.9296 7.0000
-0.5223 8.0000];
case 19
a =[
3.1020e+000 2.0000e+000
-6.7478e+000 5.0000e+000
-7.6200e+000 2.0000e+000
-3.2719e-002 4.0000e+000
9.1949e+000 4.0000e+000
-3.1923e+000 5.0000e+000
1.7054e+000 6.0000e+000];
case 20
a=[
5.3687 10
6.6625 9
-8.6324 3
-6.0083 5];
case 21
a=[
6.7939 4.0000
0.6525 5.0000
1.0777 3.0000
3.6013 2.0000
-2.6562 3.0000
-5.2142 3.0000];
case 22
a =[
8.1031 2.0000
3.5078 8.0000
-0.6306 8.0000
-5.8211 9.0000];
case 23
a =[
-7.3132 1.0000
9.0183 2.0000
4.4470 3.0000
-1.9984 4.0000
6.6374 4.0000
-8.7907 6.0000];
case 24
a =[
-1.7223 3.0000
3.0024 4.0000
9.4967 4.0000
-8.4807 9.0000
1.7404 11.0000];
case 25
a =[
-1.1539 4.0000
4.0809 5.0000
-2.1059 6.0000
3.6683 7.0000
-9.6084 13.0000];
case 26
a =[
-6.7547e-001 OOOOe+000
5.7335e+000 OOOOe+000
2.1747e+000 OOOOe+000
-9.5568e+000 OOOOe+001
-6.5553e+000 1.1000e+001];
case 27
a =[
-4.2031e+000 OOOOe+000
-1.2866e+000 OOOOe+000
-3.5314e+000 OOOOe+000
7.2748e+000 OOOOe+000
8.8544e+000 1.1000e+001];
case 28
a =[
6.4292e+000 OOOOe+000
-5.2243e+000 OOOOe+000
1.4379e+000 OOOOe+000
7.3540e+000 OOOOe+000
-9.2810e-001 OOOOe+000];
case 29
a =[
-3.462362379516644e+000 2.000000000000000e+000
2.689149618469715e+000 2.000000000000000e+000
8.468940238686715e+000 2.000000000000000e+000
-2.521408392359227e+000 8.000000000000000e+000
-1.626199265767913e+000 9.000000000000000e+000
6.161621574387710e+000 1.100000000000000e+001];
case 30
a =[
-8.3983e+000 1.0000e+000
-4.4232e+000 1.0000e+000
-5.9623e+000 3.0000e+000
5.3234e+000 3.0000e+000
-9.7284e+000 4.0000e+000
9.7680e+000 4.0000e+000
-6.6012e-001 6.0000e+000];
case 31
a =[
6.3371e-001 5.0000e+000
4.2244e-001 2.0000e+000
5.4862e+000 4.0000e+000
-7.5947e+000 6.0000e+000
2.5090e+000 2.0000e+000
-3.0670e+000 2.0000e+000
-3.3076e+000 3.0000e+000
1.4923e+000 5.0000e+000];
case 32
a =[
-9.7177e+000 2.0000e+000
-5.7885e+000 2.0000e+000
-4.5993e+000 3.0000e+000
-6.8623e+000 4.0000e+000
1.9438e+000 4.0000e+000
5.6878e+000 5.0000e+000];
case 33
a =[
-9.1213e+000 1.0000e+000
6.0129e+000 2.0000e+000
-1.7959e+000 3.0000e+000
2.2209e+000 3.0000e+000
-2.6889e+000 4.0000e+000
-2.9605e+000 5.0000e+000
8.4650e+000 6.0000e+000];
case 34
a =[
-4.2031e+000 OOOOe+000
-1.2866e+000 OOOOe+000
-8.6549e+000 OOOOe+000
-2.3140e+000 OOOOe+000
8.8544e+000 OOOOe+000];
case 35
a =[
-3.5314e+000 2.0000e+000
7.2748e+000 2.0000e+000
-9.6667e+000 7.0000e+000
7.8414e+000 7.0000e+000];
case 36
a =[
-4.2031e+000 OOOOe+000
-1.2866e+000 OOOOe+000
-8.6549e+000 OOOOe+000
-2.3140e+000 1.1000e+001
8.8544e+000 1.1000e+001];
case 37 % Does not work for linear structure matrix method.
a =[
6.2126e+000 1.0000e+000
-8.2000e+000 4.0000e+000
1.5211e+000 4.0000e+000
9.7688e+000 4.0000e+000
-1.9231e+000 5.0000e+000
-3.5812e+000 6.0000e+000];
case 38
a =[
8.3467e+000 2.0000e+000
1.5548e+000 3.0000e+000
2.7865e+000 3.0000e+000
-6.7685e+000 5.0000e+000
4.3127e+000 5.0000e+000
-1.3340e+000 6.0000e+000
-7.7536e-001 6.0000e+000];
case 39
a =[
-9.4973e+000 5.0000e+000
4.5155e+000 7.0000e+000
-6.3180e+000 8.0000e+000
-1.5778e+000 9.0000e+000];
case 40
a =[
-2.2205e-001 1.0000e+000
-2.6513e+000 2.0000e+000
-2.0897e+000 9.0000e+000
2.4812e+000 1.0000e+001
3.5827e+000 1.0000e+001
-5.3681e+000 1.1000e+001];
case 41 % A good example for testing ex_autoRank
a =[
-3.3321 0000
9.1549 0000
-7.2690 0000
-1.4508 0000
-8.4041 0000
-3.0207 0000
5.8748 0000];
case 42 % A good example for testing ex_autoRank
a =[
1.7218 3.0000
-4.7571 5.0000
-9.1109 7.0000
-7.0001 8.0000];
case 43
a =[
-0.9221 2.0000
6.3884 4.0000
-5.9585 6.0000
-1.4418 6.0000
0.6378 6.0000
0.0768 7.0000
1.8672 7.0000
2.2562 7.0000
9.3211 7.0000];
case 44
a =[
-9.3279e+000 OOOOe+000
3.9978e+000 OOOOe+000
-3.6080e+000 OOOOe+000
2.7706e+000 OOOOe+000
-8.6239e+000 OOOOe+000];
case 45
a =[
-5.8308e+000 3.0000e+000
-4.5941e+000 5.0000e+000
7.0600e+000 6.0000e+000
7.4785e+000 7.0000e+000];
case 46
a =[
2.0396e+000 2.0000e+000
-4.7406e+000 3.0000e+000
-6.6870e+000 5.0000e+000
-3.7757e+000 7.0000e+000
5.7066e-001 8.0000e+000
3.0816e+000 9.0000e+000];
case 47
a =[
-5.8308e+000 3.0000e+000
-4.5941e+000 5.0000e+000
7.0600e+000 6.0000e+000
7.4785e+000 7.0000e+000];
case 48
a =[
1.5479e+000 OOOOe+000
-6.7298e+000 OOOOe+000
5.8932e+000 OOOOe+000
8.6523e+000 OOOOe+000
8.4219e+000 OOOOe+000];
case 49
a =[
8.9812e-001 1.0000e+000
4.7193e+000 7.0000e+000
5.8936e+000 9.0000e+000];
case 50
a =[
1.0708e+000 2.0000e+000
1.4168e+000 2.0000e+000
-1.4000e+000 5.0000e+000
3.0917e-001 5.0000e+000
-1.6387e-001 7.0000e+000
-3.3864e+000 8.0000e+000
9.9370e+000 9.0000e+000];
case 51
a =[
8.2039e+000 4.0000e+000
2.1727e+000 6.0000e+000
8.1820e+000 9.0000e+000];
case 52
a =[
-4.2796e+000 4.0000e+000
1.4137e+000 5.0000e+000
1.4366e+000 5.0000e+000
3.9827e+000 7.0000e+000
-1.1682e+000 9.0000e+000
5.9252e+000 9.0000e+000];
case 53
a =[
-7.6516e+000 OOOOe+000
-4.0665e+000 OOOOe+000
4.2243e+000 OOOOe+000
-5.5651e+000 OOOOe+000
-3.6244e+000 OOOOe+000];
case 54
a =[
-8.3306e+000 2.0000e+000
-7.3366e+000 5.0000e+000
-9.2405e-001 5.0000e+000
-6.5322e+000 6.0000e+000
-1.3522e+000 8.0000e+000
6.5063e+000 8.0000e+000];
case 55
a =[
-9.3279e+000 OOOOe+000
3.9978e+000 OOOOe+000
-3.6080e+000 OOOOe+000
2.7706e+000 OOOOe+000
-8.6239e+000 OOOOe+000];
case 56
a =[
5.6275e+000 OOOOe+000
3.8506e+000 OOOOe+000
4.1451e+000 OOOOe+000
1.1334e+000 OOOOe+000
-4.2405e+000 OOOOe+000];
end
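The following lines are an illustrative sketch, not part of the original listing; they show how the coefficients of a test polynomial can be built from the root/multiplicity pairs returned by samplepoly3, assuming the function is on the MATLAB path.

a = samplepoly3(1); % roots in a(:,1), multiplicities in a(:,2)
fx = 1;
for i=1:1:size(a,1)
    for j=1:1:a(i,2)
        fx = conv(fx,[1,-a(i,1)]); % multiply in one factor (x - root)
    end
end
% fx now holds the coefficients of the exact polynomial of degree
% sum(a(:,2)); noise can be added to fx for the experiments.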
Appendix 4 : geomecoeff.m
function g = geomecoeff(f)
% This function normalises the polynomial whose coefficients are stored
% in the vector f by the geometric mean of its coefficients. The
% normalised polynomial is stored in the vector g, which is of the same
% length as f.
productf=1;
for k=1:1:length(f)
    if f(k)~=0
        productf=abs(f(k))^(1/length(f))*productf;
    end
end
g=f/productf;
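A minimal usage sketch, not part of the original listing, assuming geomecoeff is on the path:

f = [100 200 400]; % coefficients of 100x^2+200x+400
g = geomecoeff(f); % g = f/(100*200*400)^(1/3) = f/200 = [0.5 1 2]
% The geometric mean of the coefficients of g is 1, so polynomials whose
% coefficients differ greatly in magnitude are placed on a common scale
% before the AGCD computations.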
Appendix 5 : manualRank.m
function [degree, stoploop] = manualRank(fx_n, fignum)
% This function calculates manually the rank of the Sylvester matrix
% of the polynomial fx_n and its derivative.
% fx_n     : the vector of coefficients of a polynomial
% fignum   : the figure number for the graphs
% degree   : the degree of an AGCD of fx_n and its derivative
% stoploop : used for continuing or stopping the algorithm
% Three methods are used to calculate the degree of an AGCD of
% fx_n and its derivative:
% Method 1: The first principal angle (angle between two subspaces)
% Method 2: The residual of an approximate linear algebraic equation
% Method 3: A constraint between the polynomial and its derivative
% Methods 1 and 2 are valid for two arbitrary polynomials, and are not
% restricted to a polynomial and its derivative. Method 3 is, however,
% only applicable to a polynomial and its derivative.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%
% METHODS 1 and 2 %
%%%%%%%%%%%%%%%%%%%
% Define the length of fx_n and the degree of the polynomial.
m1=length(fx_n);
m=m1-1;
% Calculate the derivative of fx_n and denote it gx_n. Normalise
% fx_n and gx_n by the geometric means of their coefficients.
gx_n=polyder(fx_n);
fx_n=geomecoeff(fx_n);
gx_n=geomecoeff(gx_n);
% Calculate the optimal values of theta and alpha. Denote
% these optimal values theta_opt and alpha_opt respectively.
[theta_opt,alpha_opt]=optimal_linprog(fx_n,gx_n);
% Define the polynomials fw_n and gw_n, which are the polynomials
% fx_n and gx_n, respectively, after transformation to the w variable.
fw_n=zeros(1,m1);
gw_n=zeros(1,m); % gw_n is of degree m-1
for k=1:1:m1
    fw_n(k)=fx_n(k)*theta_opt^(m1-k);
end
for k=1:1:m
    gw_n(k)=gx_n(k)*theta_opt^(m-k);
end
% Initialise some vectors that are required later.
minang = zeros(1,m-1);
col_a = zeros(1,m-1);
minres = zeros(1,m-1);
col_r = zeros(1,m-1);
% Form the loop for all the subresultant matrices for k=1...m-1.
for k=1:1:m-1
    % Form the kth subresultant matrix. Include alpha_opt.
    Sk=KthSylvester(fw_n,alpha_opt*gw_n,k);
    [rn,cn]=size(Sk);
    % Initialise two vectors that are required later.
    angle=zeros(1,cn);
    res=zeros(1,cn);
    % Start the loop for searching across all the columns of Sk.
    for g=1:1:cn
        % Remove the gth column from Sk and denote it by ck. The
        % matrix formed from the remaining columns of Sk is Tk.
        Tk=[Sk(:,1:g-1),Sk(:,g+1:end)];
        ck=Sk(:,g);
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % Calculate the rank of Sk by using the first principal angle.
        % This is Method 1.
        ck1=ck/norm(ck);
        [Nk1,Rk1]=qr(Tk,0);
        % The columns of Nk1 form an orthonormal basis for
        % the columns of Tk.
        [Ua,Sa,Va]=svd(Nk1);
        [ra,ca]=size(Nk1);
        % Calculate an orthonormal basis for the orthogonal
        % complement of Tk.
        Nk2=Ua(:,ca+1:ra);
        sigma1=svd(ck1'*Nk2); % the smallest singular value of ck1'*Nk2
        angle(g)=asin(sigma1);
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % Calculate the rank of Sk by using the normalised residual.
        % This is Method 2.
        [u1,s1,v1]=svd(Tk);
        for i=1:1:min(size(s1))
            s1(i,i)=1/s1(i,i);
        end
        xo=v1*s1'*u1'*ck;
        r=ck-Tk*xo;
        res(g)=norm(r)/norm(ck); % the normalised residual
    end % g loop for the columns of each subresultant matrix
    [minang(k),col_a(k)]=min(angle);
    [minres(k),col_r(k)]=min(res);
end % k loop for subresultant matrices
% This ends the calculation using Methods 1 and 2.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Calculate the degree of an AGCD using Method 3.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% METHOD 3
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Define a constant that is required for determining the rank of
% the Sylvester matrix of fx_n and its derivative.
lamder=(abs(fx_n(m1))/factorial(m))^(1/m)/prod(abs(fx_n).^(1/(m*m1)));
% This method requires that estimates of the common divisor
% polynomial be calculated from Methods 1 and 2.
% Initialise this array.
error = zeros(2,m-1);
for method=1:1:2
    % Define the vector col, which contains the optimal column for
    % each subresultant matrix.
    switch method
        case 1 % the first principal angle
            col=col_a;
        case 2 % the residual of an approximate equation
            col=col_r;
    end
    for k=1:1:m-1 % loop for all the subresultant matrices
        % Follow the procedure above. Form the kth Sylvester
        % subresultant matrix, and remove col(k), which is
        % the optimal column. Denote this optimal column ak,
        % and the remaining matrix Ak.
        Sk=KthSylvester(fw_n,alpha_opt*gw_n,k);
        Ak=[Sk(:,1:col(k)-1),Sk(:,col(k)+1:end)];
        ak=Sk(:,col(k));
        % Calculate the least squares solution xk of (Ak)(xk)=ak.
        [u1,s1,v1]=svd(Ak);
        for i=1:1:min(size(s1))
            s1(i,i)=1/s1(i,i);
        end
        xk=v1*s1'*u1'*ak;
        % Form the coefficients of the coprime polynomials.
        vecx=[xk(1:col(k)-1);-1;xk(col(k):end)];
        vk=vecx(1:m-k);
        uk=-vecx(m-k+1:end);
        % An estimate of a common divisor dk of degree k is obtained by
        % solving the approximate equations
        % fw_n approx.= (uk)(dk) and (alpha_opt)(gw_n) approx.= (vk)(dk)
        % Form the coefficient matrix.
        Qk1=cauchy(uk,k); % dk is of degree k
        Qk2=cauchy(vk,k);
        Bk=[Qk1;Qk2];
        % Form the right hand side vector and obtain the least
        % squares solution.
        bk=[fw_n,alpha_opt*gw_n]'; % the parameter alpha_opt is included
        [u1,s1,v1]=svd(Bk);
        for i=1:1:min(size(s1))
            s1(i,i)=1/s1(i,i);
        end
        dk=v1*s1'*u1'*bk;
        % Form the Cauchy matrices Uk and Vk. The constant lamder is
        % required in the matrix Vk.
        Lk=cauchy(dk,m-k-1);
        Vk=theta_opt/(alpha_opt*lamder)*Lk;
        veck=k:-1:1;
        Uk=cauchy(veck'.*dk(1:end-1),m-k);
        % Form the diagonal matrix R whose non-zero entries lie
        % on the leading diagonal.
        vecmk=m-k:-1:1;
        R=[diag(vecmk,0),zeros(m-k,1)];
        % The error measure for Method 3.
        eln = norm((Vk*vk)-(Lk*R+Uk)*uk);
        eld = norm(Vk*vk)+norm((Lk*R+Uk)*uk);
        error(method,k) = eln/eld;
    end % k loop for subresultant matrices
end % method loop for Methods 1 and 2 in the calculation for Method 3
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Plot the figures that enable the rank to be determined manually.
figure(fignum)
x=1:1:m-1;
subplot(2,2,1)
plot(x,log10(minang),'--ko','LineWidth',1, ...
    'MarkerEdgeColor','r', ...
    'MarkerFaceColor','r', ...
    'MarkerSize',6),
xlabel(' k ','FontSize',16)
ylabel(' log_{10} \it \phi_{ k }','FontSize',16)
subplot(2,2,2)
plot(x,log10(minres),'--ks','LineWidth',1, ...
    'MarkerEdgeColor','b', ...
    'MarkerFaceColor','b', ...
    'MarkerSize',6),
xlabel(' k ','FontSize',16)
ylabel(' log_{10} \it r_{ k }','FontSize',16)
subplot(2,2,3)
plot(x,log10(error(1,:)),'--kd','LineWidth',1, ...
    'MarkerEdgeColor','r', ...
    'MarkerFaceColor','r', ...
    'MarkerSize',6),
xlabel(' k ','FontSize',16)
ylabel(' log_{10} \it error_{ k }','FontSize',16)
subplot(2,2,4)
plot(x,log10(error(2,:)),'--kd','LineWidth',1, ...
    'MarkerEdgeColor','b', ...
    'MarkerFaceColor','b', ...
    'MarkerSize',6),
xlabel(' k ','FontSize',16)
ylabel(' log_{10} \it error_{ k }','FontSize',16)
% Read in the rank.
disp(' ')
degree=input(' Specify the value of k: ');
disp(' ')
% If the degree of the GCD of the polynomial and its derivative
% is equal to 0, the polynomial does not contain multiple roots.
% If the degree of the GCD of the polynomial and its derivative
% is equal to 1, terminate the loop because the derivative of the
% GCD is constant.
if degree==0 || degree==1
    stoploop='yes';
else
    stoploop='no';
end
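An illustrative sketch, not part of the original listing, of the rank-loss property that Methods 1 and 2 estimate; it uses KthSylvester from Appendix 11 and assumes exact coefficients. For f(x)=(x-1)^3(x+2), the GCD of f and f' is (x-1)^2, so the Sylvester matrix S(f,f') loses rank 2.

fx = conv(conv(conv([1 -1],[1 -1]),[1 -1]),[1 2]); % (x-1)^3(x+2)
gx = polyder(fx);
S = KthSylvester(fx,gx,1); % the full (7x7) Sylvester matrix, k=1
disp(rank(S)); % displays 5 for this well-scaled example: a rank loss of 2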
Appendix 6 : autoRank.m
function [degree, stoploop] = autoRank(fx_n)
% This function calculates automatically the rank of the Sylvester
% matrix of the polynomial fx_n and its derivative.
% degree   : the degree of an AGCD of fx_n and its derivative
% stoploop : used for continuing or stopping the algorithm
% Three methods are used to calculate the degree of an AGCD of
% fx_n and its derivative:
% Method 1: The first principal angle (angle between two subspaces)
% Method 2: The residual of an approximate linear algebraic equation
% Method 3: A constraint between the polynomial and its derivative
% Methods 1 and 2 are valid for two arbitrary polynomials, and are not
% restricted to a polynomial and its derivative. Method 3 is, however,
% only applicable to a polynomial and its derivative.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%
% METHODS 1 and 2
%%%%%%%%%%%%%%%%%
m1=length(fx_n);
m=m1-1; % the degree of fx_n
gx_n=polyder(fx_n); % the derivative of fx_n
% Normalise fx_n and gx_n by the geometric means of their coefficients.
fx_n=geomecoeff(fx_n);
gx_n=geomecoeff(gx_n);
% Calculate the optimal values of alpha and theta.
[theta_opt,alpha_opt]=optimal_linprog(fx_n,gx_n);
% Define the polynomials in the w variable.
fw_n=zeros(1,m1);
gw_n=zeros(1,m); % gw_n is of degree m-1 and length m
for k=1:1:m1
    fw_n(k)=fx_n(k)*theta_opt^(m1-k);
end
for k=1:1:m
    gw_n(k)=gx_n(k)*theta_opt^(m-k); % alpha_opt is not included
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Loop over all the subresultant matrices.
% Initialise some arrays.
minang = zeros(1,m-1);
col_a = zeros(1,m-1);
minres = zeros(1,m-1);
col_r = zeros(1,m-1);
for k=1:1:m-1 % gw_n is of degree (m-1)
    % Form the kth subresultant matrix. Include the parameter
    % alpha_opt with the polynomial gw_n.
    Sk=KthSylvester(fw_n,alpha_opt*gw_n,k);
    [rn,cn]=size(Sk);
    % Initialise these arrays, which are required later.
    angle=zeros(1,cn);
    res=zeros(1,cn);
    % Loop over all the columns of each subresultant matrix.
    for g=1:1:cn
        % Remove the gth column from Sk. Denote this column ck
        % and the rest of the matrix Tk.
        Tk=[Sk(:,1:g-1),Sk(:,g+1:end)];
        ck=Sk(:,g);
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % Calculate the rank using the first principal angle.
        % This is Method 1.
        ck1=ck/norm(ck);
        % The columns of Nk1 define an orthonormal basis for the space
        % spanned by the columns of Tk.
        [Nk1,Rk1]=qr(Tk,0);
        % Use the orthogonal complements of the vector ck1 and the space
        % spanned by the columns of Tk to calculate the first principal
        % angle.
        [Ua,Sa,Va]=svd(Nk1);
        [ra,ca]=size(Nk1);
        Nk2=Ua(:,ca+1:ra);
        sigma1=svd(ck1'*Nk2);
        angle(g)=asin(sigma1); % the first principal angle
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % Calculate the rank using the residual of an approximate
        % linear algebraic equation. This is Method 2. Use the least
        % squares solution xo of this approximate equation.
        [u1,s1,v1]=svd(Tk);
        for i=1:1:min(size(s1))
            s1(i,i)=1/s1(i,i);
        end
        xo=v1*s1'*u1'*ck;
        r=ck-Tk*xo; % the residual
        res(g)=norm(r)/norm(ck); % the normalised residual
    end % loop for the columns of the subresultant matrices
    [minang(k),col_a(k)] = min(angle);
    [minres(k),col_r(k)] = min(res);
end % loop for the subresultant matrices
% This ends the calculation using Methods 1 and 2.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Calculate the degree of an AGCD using Method 3.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% METHOD 3
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This constant is required for estimating the degree of an AGCD
% of fx_n and its derivative.
lamder=(abs(fx_n(m1))/factorial(m))^(1/m)/prod(abs(fx_n).^(1/(m*m1)));
% This method requires that estimates of the common divisor polynomial
% be calculated from Methods 1 and 2 above.
% Initialise this array.
error = zeros(2,m-1);
for method=1:1:2 % loop for Methods 1 and 2
    % Define the vector col, which contains the optimal column for
    % each subresultant matrix.
    switch method
        case 1 % the first principal angle
            col=col_a;
        case 2 % the residual of an approximate equation
            col=col_r;
    end
    for k=1:1:m-1 % loop for all the subresultant matrices
        % Follow the procedure above. Form the kth Sylvester
        % subresultant matrix, and remove col(k), which is
        % the optimal column. Denote this optimal column ak,
        % and the remaining matrix Ak.
        Sk=KthSylvester(fw_n,alpha_opt*gw_n,k);
        Ak=[Sk(:,1:col(k)-1),Sk(:,col(k)+1:end)];
        ak=Sk(:,col(k));
        % Calculate the least squares solution xk.
        [u1,s1,v1]=svd(Ak);
        for i=1:1:min(size(s1))
            s1(i,i)=1/s1(i,i);
        end
        xk=v1*s1'*u1'*ak;
        % Form the estimates of the quotient polynomials vk and uk.
        vecx=[xk(1:col(k)-1);-1;xk(col(k):end)];
        vk=vecx(1:m-k);
        uk=-vecx(m-k+1:end);
        % An estimate of a common divisor dk of degree k is obtained by
        % solving the approximate equations
        % fw_n approx.= (uk)(dk) and (alpha_opt)(gw_n) approx.= (vk)(dk)
        % Form the coefficient matrix.
        Qk1=cauchy(uk,k);
        Qk2=cauchy(vk,k);
        Bk=[Qk1;Qk2];
        % Form the right hand side vector and obtain the least
        % squares solution.
        bk=[fw_n,alpha_opt*gw_n]';
        [u1,s1,v1]=svd(Bk);
        for i=1:1:min(size(s1))
            s1(i,i)=1/s1(i,i);
        end
        dk=v1*s1'*u1'*bk; % the estimate of a common divisor
        % Form the Cauchy matrices Uk and Vk. The constant lamder is
        % required for the calculation of Vk.
        Lk=cauchy(dk,m-k-1);
        Vk=theta_opt/(alpha_opt*lamder)*Lk;
        veck=k:-1:1;
        Uk=cauchy(veck'.*dk(1:end-1),m-k);
        % Form the diagonal matrix R whose non-zero entries lie
        % on the leading diagonal.
        vecmk=m-k:-1:1;
        R=[diag(vecmk,0),zeros(m-k,1)];
        % The error measure for Method 3.
        eln = norm((Vk*vk)-(Lk*R+Uk)*uk);
        eld = norm(Vk*vk)+norm((Lk*R+Uk)*uk);
        error(method,k) = eln/eld;
    end % loop for the subresultant matrices
end % method loop for Methods 1 and 2 that are required for Method 3
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Use the results of Methods 1, 2 and 3 to determine the
% degree of an AGCD of fx_n and its derivative.
% Calculate the log of the first principal angles (Method 1).
lgang=log10(minang);
% If the degree m of the polynomial is greater than 2, calculate the
% gradients of the line segments and arrange them in descending order.
if m>2
    [temp1,index1]=sort(lgang(2:end)-lgang(1:end-1),'descend');
else % m=2, which implies there is only one point
    temp1=lgang;
    index1=1;
end
% Calculate the log of the residuals (Method 2).
lgres=log10(minres);
% As above, calculate the gradients of the line segments and arrange
% them in descending order. Also, calculate the global minimum.
if m>2
    [temp2,index2]=sort(lgres(2:end)-lgres(1:end-1),'descend');
    [temp,d_res]=min(lgres); % the global minimum
    d_first=index2(1);
    % Calculate the value of k for which the gradient is greater
    % than one.
    for k=2:1:m-2
        if temp2(k)>1 && d_first>index2(k)
            d_first=index2(k); % the first bigger gap (>1 in log scale)
        end
    end
else % m=2
    temp2=lgres;
    index2=1;
end % if m>2
% Arrange the errors from Method 3, using both the first principal angle
% and the residual, in ascending order.
lger_ang=log10(error(1,:));
[temp3,index3]=sort(lger_ang);
lger_res=log10(error(2,:));
[temp4,index4]=sort(lger_res);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Compute the tests to determine the rank. Some of the criteria
% have been determined empirically.
% First determine tests if f(x) and f'(x) are either coprime or
% have a common divisor of degree one.
% f(x) and f'(x) do not have common roots, and thus stop the loop.
if temp3(1)>-3 && temp4(1)>-3
    stoploop='yes';
    degree=0;
% f(x) and f'(x) do not have common roots, and thus stop the loop.
elseif m==2 && temp1>-4 && temp2>-4
    stoploop='yes';
    degree=0;
% f(x) and f'(x) have one common root, and stop the loop.
elseif m==2 && temp1<-4 && temp2<-4
    stoploop='yes';
    degree=1;
else
    stoploop='no'; % continue execution
    % Come here if f(x) and f'(x) have a common divisor of degree
    % greater than or equal to two.
    % The four graphs yield the same result.
    if index1(1)==index2(1) && index2(1)==index3(1) && index3(1)==index4(1)
        degree=index1(1);
    % The next four elseif statements deal with the situations when
    % three of the four graphs yield the same result.
    elseif index1(1)==index2(1) && index2(1)==index3(1)
        degree=index1(1);
    elseif index1(1)==index2(1) && index2(1)==index4(1)
        degree=index1(1);
    elseif index1(1)==index3(1) && index3(1)==index4(1)
        degree=index1(1);
    % The next elseif has an added condition in order to exclude
    % the situation for f(x)=(x-1)^{m}.
    elseif index2(1)==index3(1) && index3(1)==index4(1) && temp2(1)>1
        degree=index2(1);
    else
        % The two global minima from Method 3 are the same. Include a
        % test on the gradients on the results from Method 3.
        if (index3(1)==index4(1) && (temp3(2)-temp3(1))>1 && ...
                (temp4(2)-temp4(1))>1)
            degree=index3(1);
        % The global minima from Method 3 are at the last point. This
        % situation occurs for f(x)=(x-1)^{m}.
        elseif index3(1)==m-1 && index4(1)==m-1
            degree=m-1;
        % One of the global minima from Method 3 is at the last point,
        % and the difference, for Method 2, between the maximum value
        % (max(lgres)) and the minimum value (lgres(d_res)) is less than
        % 2. This occurs for f(x)=(x-1)^{m}.
        elseif (index3(1)==m-1 || index4(1)==m-1) && (max(lgres)-lgres(d_res))<2
            degree=m-1;
        % The difference, for both results in Method 3, between the last
        % point and the minimum is less than 0.5, and the difference, for
        % Method 2, between the last point and the minimum is less than
        % 0.5 OR the difference, for Method 2, between the maximum and
        % minimum is less than 2. This occurs for f(x)=(x-1)^{m}.
        elseif (lger_ang(m-1)-temp3(1))<0.5 && ...
                (lger_res(m-1)-temp4(1))<0.5 && ...
                ((lgres(m-1)-lgres(d_res))<0.5 || (max(lgres)-lgres(d_res))<2)
            degree=m-1;
        % The point at which the global minimum occurs and the point of
        % maximum gradient are, for Method 2, the same, and the points
        % of maximum gradient for Methods 1 and 2 are the same.
        elseif index2(1)==d_res && index1(1)==index2(1)
            degree=index1(1);
        % The points of global minima in Methods 1 and 2 are the same,
        % and the difference, for Method 3, between the angle (residual)
        % at the minimum point of Method 1 (2) is less than 1.
        elseif index1(1)==index2(1) && (lger_ang(index1(1))-temp3(1))<1 ...
                && (lger_res(index1(1))-temp4(1))<1
            degree=index1(1);
        % The point of global minimum using the first principal angle for
        % Method 3, and the point of global minimum of Method 2, are the
        % same, and the gradient at the minimum for Method 3 using the
        % first principal angle is greater than one OR the point of global
        % minimum using the residual for Method 3, and the point of
        % global minimum of Method 2, are the same, and the gradient at
        % the minimum for Method 3 using the residual is greater than one.
        elseif (index3(1)==d_res && (temp3(2)-temp3(1))>1) ...
                || (index4(1)==d_res && (temp4(2)-temp4(1))>1)
            degree=d_res;
        % The first big gap in Method 1 occurs for two points in both
        % tests in Method 3 for which the gradient is less than one.
        elseif (lgang(d_first+1)-lgang(d_first))>1 && ...
                (lger_ang(d_first)-temp3(1))<1 && ...
                (lger_res(d_first)-temp4(1))<1
            degree=d_first;
        else
            % Plot the graphs.
            figure(1)
            subplot(2,2,1)
            x=1:1:m-1;
            plot(x,log10(minang),'--ko','LineWidth',1, ...
                'MarkerEdgeColor','r', ...
                'MarkerFaceColor','r', ...
                'MarkerSize',6),
            xlabel(' k ','FontSize',16)
            ylabel(' log_{10} \it \phi_{ k }','FontSize',16)
            subplot(2,2,2)
            plot(x,log10(minres),'--ks','LineWidth',1, ...
                'MarkerEdgeColor','b', ...
                'MarkerFaceColor','b', ...
                'MarkerSize',6),
            xlabel(' k ','FontSize',16)
            ylabel(' log_{10} \it r_{ k }','FontSize',16)
            subplot(2,2,3)
            plot(x,log10(error(1,:)),'--kd','LineWidth',1, ...
                'MarkerEdgeColor','r', ...
                'MarkerFaceColor','r', ...
                'MarkerSize',6),
            xlabel(' k ','FontSize',16)
            ylabel(' log_{10} \it error_{ k }','FontSize',16)
            subplot(2,2,4)
            plot(x,log10(error(2,:)),'--kd','LineWidth',1, ...
                'MarkerEdgeColor','b', ...
                'MarkerFaceColor','b', ...
                'MarkerSize',6),
            xlabel(' k ','FontSize',16)
            ylabel(' log_{10} \it error_{ k }','FontSize',16)
            disp(' ');
            degree=input(' Read in the value of k: ');
        end % if
    end % if
end % if
if degree==1
    stoploop='yes';
end
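An illustrative sketch, not part of the original listing, of the gap test that underlies the automatic selection: the degree of an AGCD is indicated by the largest jump between consecutive points of the log residual curve. The values below are hypothetical.

lgres_demo = [-9.8 -10.3 -11.1 -1.9 -1.6]; % hypothetical log10 residuals
[temp2,index2] = sort(lgres_demo(2:end)-lgres_demo(1:end-1),'descend');
disp(index2(1)); % displays 3: the jump occurs between k=3 and k=4, so
                 % the estimated degree of the AGCD is 3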
Appendix 7 : SNTLNinCoefficentofAGCD.m
function [alpha_up,theta_up,com_fw,com_gw,com_dw,com_dw2] = ...
    SNTLNinCoefficentofAGCD(fx_n,gx_n,rankloss,method)
% This function computes an AGCD of fx_n and its derivative gx_n
% using the method of SNTLN.
% fx_n     : the vector of coefficients of fx_n
% gx_n     : the vector of coefficients of gx_n, the derivative of fx_n
% rankloss : the degree of an AGCD of fx_n and gx_n
% method   : this variable determines the method used to calculate the
%            column from which initial estimates of the quotient
%            polynomials and AGCD are computed:
%            method = 1 : first principal angle
%            method = 2 : residual of an approximate equation
% alpha_up : the final value of alpha after SNTLN has terminated
% theta_up : the final value of theta after SNTLN has terminated
% com_fw   : the corrected form of the polynomial fx_n,
%            expressed in the w variable
% com_gw   : the corrected form of the polynomial gx_n,
%            expressed in the w variable
% com_dw, com_dw2 : two estimates of an AGCD, calculated by two
%            different methods
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Define the degrees of the polynomials fx_n and gx_n.
m1=length(fx_n);
n1=length(gx_n);
m=length(fx_n)-1;
n=length(gx_n)-1;
% Calculate the optimal values of alpha and theta.
[theta_opt,alpha_opt]=optimal_linprog(fx_n,gx_n);
% Transform the polynomials fx_n and gx_n to polynomials in the
% independent variable w.
fw_n=zeros(1,m1);
gw_n=zeros(1,n1);
for k=1:1:m1
    fw_n(k)=fx_n(k)*theta_opt^(m1-k);
end
for k=1:1:n1
    gw_n(k)=gx_n(k)*theta_opt^(n1-k);
end
% Calculate, for every subresultant matrix, the column to move to
% the right hand side. Perform this calculation for Method 1 or 2.
% The index of the optimal column is stored in the vector col.
switch method
    case 1 % the first principal angle
        col=ColumnAngle(fw_n,gw_n,alpha_opt);
    case 2 % the residual of an approximate equation
        col=ColumnRes(fw_n,gw_n,alpha_opt);
end
% Calculate an AGCD of degree rankloss, using the optimal column in
% col calculated in the switch command above.
k = rankloss;
% Calculate the kth subresultant matrix.
Sk=KthSylvester(fw_n,alpha_opt*gw_n,k);
% Remove the column col(rankloss) from Sk and denote it by ck. The
% remaining part of the matrix Sk is denoted Ak. Obtain the least
% squares solution of (Ak)x=ck.
ck=Sk(:,col(k));
Ak=[Sk(:,1:col(k)-1),Sk(:,col(k)+1:end)];
[u1,s1,v1]=svd(Ak);
for i=1:1:min(size(s1))
    s1(i,i)=1/s1(i,i);
end
xk=v1*s1'*u1'*ck; % the least squares solution
% Obtain initial estimates of the coprime polynomials from xk.
vecx=[xk(1:col(k)-1);-1;xk(col(k):end)];
vk=vecx(1:n-k+1);
uk=-vecx(n-k+2:end);
% Calculate the AGCD dk, which is of degree k, by solving, in the least
% squares sense, the approximate equations
% fw_n approx. equal (uk)(dk) and (alpha_opt)(gw_n) approx. equal (vk)(dk)
% Form the coefficient matrix Bk and right hand side vector bk.
Ck1=cauchy(uk,k);
Ck2=cauchy(vk,k);
Bk=[Ck1;Ck2];
bk=[fw_n,alpha_opt*gw_n]';
[u1,s1,v1]=svd(Bk);
for i=1:1:min(size(s1))
    s1(i,i)=1/s1(i,i);
end
dk=v1*s1'*u1'*bk;
rk=bk-Bk*dk; % the residual of the solution dk
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Initialise the variables for the method of SNTLN.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Define lk, the AGCD in the independent variable y, by using the
% substitution w = y/theta.
veck=k:-1:0;
lk=dk./(theta_opt.^veck');
vecm=m:-1:0;
vecn=n:-1:0;
vecmk=m-k:-1:0;
vecnk=n-k:-1:0;
% Define the square diagonal matrices S and T.
S=diag(theta_opt.^vecm,0);
T=diag(theta_opt.^vecn,0);
% Define a2 and b2, the initial values of uk and vk, expressed in
% the independent variable y.
ok1=theta_opt.^vecmk;
ok2=theta_opt.^vecnk;
a2=uk./ok1';
b2=vk./ok2';
% Set the perturbation vector zk equal to zero.
zk=zeros(m+n-2*k+2,1);
% Define a matrix Yk, where Yk*zk=Ek*lk.
Y1=cauchy(dk,m-k)*diag(ok1,0);
Y2=cauchy(dk,n-k)*diag(ok2,0);
Y3=zeros(m1,n1-k);
Y4=zeros(n1,m1-k);
Yk=[Y1,Y3;Y4,Y2];
% Evaluate some partial derivatives.
partial_f=vecm.*fx_n.*theta_opt.^(vecm-1); % fw_n wrt theta
partial_g=vecn.*gx_n.*theta_opt.^(vecn-1); % gw_n wrt theta
partial_Ck1=cauchy(vecmk'.*uk./theta_opt,k); % the matrix Ck1 wrt theta
partial_Ck2=cauchy(vecnk'.*vk./theta_opt,k); % the matrix Ck2 wrt theta
partial_dk=veck'.*lk.*theta_opt.^(veck-1)'; % dk wrt theta
% Initialise some variables.
g=rk; % the initial value of the residual of Bk*dk=bk
beta=0;
theta=theta_opt;
pk=zeros(m1,1);
qk=zeros(n1,1);
tk=zeros(n1,1);
% Calculate the matrix C.
C_temp=[(-1)*S,zeros(m1,n1),zeros(m1,1), ...
    -partial_f'+partial_Ck1*dk+Ck1*partial_dk;
    zeros(n1,m1),(-alpha_opt-beta)*T,-gw_n'-tk, ...
    (-alpha_opt-beta)*partial_g'+partial_Ck2*dk+Ck2*partial_dk];
C=[Yk,C_temp];
% Initialise ek as bk, which is defined above and equal to the right
% hand side vector, the coefficient matrix E of the objective function
% and the vector f, where the objective function is ||Ey-f||.
ek=bk;
E=eye(2*m+2*n-2*k+6);
f=zeros(2*m+2*n-2*k+6,1);
% Initialise the counter for the number of iterations.
ite=0;
% Set the criterion for termination of the iterative
% solution of the LSE problem.
while norm(rk)/norm(ek)>=1e-16
    ite=ite+1;
    % Break the loop if more than 50 iterations are required. In
    % this case, the program jumps out of the while loop and
    % continues execution.
    if ite>50
        break;
    end
    % Use the QR decomposition to solve the LSE problem.
    y=LSE(E,f,C,g);
    % Update some variables.
    delta_zk=y(1:m+n-2*k+2);
    delta_pk=y(m+n-2*k+3:2*m+n-2*k+3);
    delta_qk=y(2*m+n-2*k+4:2*m+2*n-2*k+4);
    delta_beta=y(end-1);
    delta_theta=y(end);
    zk=zk+delta_zk;
    pk=pk+delta_pk;
    qk=qk+delta_qk;
    beta=beta+delta_beta;
    theta=theta+delta_theta;
    % Update the coefficients of the polynomials f and g in the
    % w variable.
    fw2=fx_n.*theta.^vecm;
    gw2=gx_n.*theta.^vecn;
    S=diag(theta.^vecm,0);
    T=diag(theta.^vecn,0);
    ok1=theta.^vecmk;
    ok2=theta.^vecnk;
    % Update the matrices Bk, Ek and Yk.
    Ck1=cauchy(a2.*ok1',k);
    Ck2=cauchy(b2.*ok2',k);
    Bk=[Ck1;Ck2];
    Ek1=cauchy(zk(1:m-k+1).*ok1',k);
    Ek2=cauchy(zk(m-k+2:m+n-2*k+2).*ok2',k);
    Ek=[Ek1;Ek2];
    Y1=cauchy(dk,m-k)*diag(ok1,0);
    Y2=cauchy(dk,n-k)*diag(ok2,0);
    Yk=[Y1,Y3;Y4,Y2];
    % Update the matrix C.
    sk=pk.*theta.^vecm';
    tk=qk.*theta.^vecn';
    dk=lk.*theta.^veck'; % the coefficients of the GCD
    partial_sk=vecm'.*pk.*theta.^(vecm-1)';
    partial_tk=vecn'.*qk.*theta.^(vecn-1)';
    partial_f=vecm.*fx_n.*theta.^(vecm-1);
    partial_g=vecn.*gx_n.*theta.^(vecn-1);
    partial_Ck1=cauchy(vecmk'.*a2.*ok1'./theta,k);
    partial_Ck2=cauchy(vecnk'.*b2.*ok2'./theta,k);
    partial_Ek1=cauchy(vecmk'.*zk(1:m-k+1).*ok1'/theta,k);
    partial_Ek2=cauchy(vecnk'.*zk(m-k+2:m+n-2*k+2).*ok2'/theta,k);
    partial_dk=veck'.*lk.*theta.^(veck-1)';
    C_temp=[(-1)*S,zeros(m1,n1),zeros(m1,1), ...
        -partial_f'-partial_sk+partial_Ck1*dk+partial_Ek1*dk+ ...
        Ck1*partial_dk+Ek1*partial_dk;
        zeros(n1,m1),(-alpha_opt-beta)*T,-gw2'-tk, ...
        (-alpha_opt-beta)*(partial_g'+partial_tk)+partial_Ck2*dk+ ...
        partial_Ek2*dk+Ck2*partial_dk+Ek2*partial_dk];
    C=[Yk,C_temp];
    % Compute the residual rk, and update g, f and ek.
    rk=[fw2'+sk;(alpha_opt+beta)*(gw2'+tk)]-(Bk+Ek)*dk;
    g=rk;
    f=-[zk;pk;qk;beta;theta-theta_opt];
    ek=[fw2'+sk;(alpha_opt+beta)*(gw2'+tk)];
end
% Calculate the corrected polynomials fw and gw that have a non-constant
% GCD.
com_fw=fw2+sk';
com_gw=gw2+tk';
% Calculate the final values of alpha and theta.
alpha_up=alpha_opt+beta;
theta_up=theta;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Calculate the GCD in two different ways.
% Method 1: Compute d(w)=lk.*theta.^veck', which has already been
% calculated above.
com_dw=dk';
% Method 2: Solve, in the least squares sense, (Bk+Ek)d(w)=ek.
[u1,s1,v1]=svd(Bk+Ek);
for i=1:1:min(size(s1))
    s1(i,i)=1/s1(i,i);
end
com_dw2=(v1*s1'*u1'*ek)';
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
disp(' ');
disp('Number of iterations required in LSE problem:');
fprintf('%10.0f\n',ite);
disp(' ');
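The routine LSE that is called above is not reproduced in these appendices. The following function is an assumed implementation, given as a sketch only: it solves the LSE problem, minimise ||E*y-f|| subject to C*y=g, by the QR (null space) method, which is consistent with the comments in the listings; the actual routine may differ.

function y = LSE(E,f,C,g)
% Sketch of an equality constrained least squares solver (assumption).
[m2,n2]=size(C);
[Q,R]=qr(C');          % C' = Q*R, so the constraint is R'*Q'*y = g
y1=R(1:m2,1:m2)'\g;    % lower triangular solve for the constrained part
Q1=Q(:,1:m2);
Q2=Q(:,m2+1:n2);       % orthonormal basis for the null space of C
y2=(E*Q2)\(f-E*Q1*y1); % least squares over the null space of C
y=Q1*y1+Q2*y2;         % combine the two components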
Appendix 8 : cauchy.m
function c = cauchy(f,n)
% This procedure forms a Cauchy matrix c whose entries are the
% elements of the vector f. If f is of length m+1, so that the
% entries of f are the coefficients of a polynomial of degree m,
% then c is of order (m+n+1) x (n+1).
% If g is a polynomial of degree n, and c(f) is a Cauchy matrix
% whose entries are the elements of f, then c(f)*g is a vector
% of length (m+n+1) whose elements are the coefficients of the
% polynomial f*g.
m=length(f)-1;
c=zeros(m+n+1,n+1);
for k=1:1:n+1
    for h=k:1:m+k
        c(h,k)=f(h-k+1);
    end
end
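A minimal usage sketch, not part of the original listing: the product of f(x)=x+1 and g(x)=x^2+2x+3 can be formed either with conv or with the matrix-vector product cauchy(f,2)*g', and both give the coefficients of x^3+3x^2+5x+3.

f = [1 1];
g = [1 2 3];
disp(conv(f,g));         % displays 1 3 5 3
disp((cauchy(f,2)*g')'); % displays 1 3 5 3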
Appendix 9 : nonlinearLS.m
function a = nonlinearLS(b,co)
% This procedure uses the method of non-linear least squares to refine
% the roots of a polynomial.
% b  : a matrix of two columns, where b(i,1) stores an estimate of
%      the ith root, and b(i,2) stores the multiplicity of the root
% co : the vector of coefficients of the polynomial
% a  : the vector of the updated roots
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Set the counter for the number of iterations, and the termination
% criterion.
ite=0;
er=1e-10;
% Set delta_r to a large value for initialisation of the iterative
% procedure.
delta_r=1;
degco = length(co)-1; % the degree of co
m_temp=size(b);
m=m_temp(1); % the number of distinct roots of co
J=zeros(degco,m); % initialise the Jacobian matrix for later
% Form the vector a of the estimates of the roots.
a=b(:,1);
% Obtain the coefficients of the polynomial s by convolving the estimated
% roots. If the roots in the vector a are good estimates of the roots of
% the polynomial co, the difference between the vectors s and co is small.
s=1;
for i=1:1:m
    for j=1:1:b(i,2)
        s=conv(s,[1,-a(i)]);
    end
end
% Normalise co and s by the geometric means of their coefficients
% and then compare them.
co_norm = geomecoeff(co);
s_norm = geomecoeff(s);
normdiff = norm(co_norm-s_norm)/(0.5*(norm(co_norm)+norm(s_norm)));
% If normdiff > er, the estimates of the roots in a must be refined.
if normdiff > er
    % Normalise s so that its leading coefficient is one. This is
    % required for compatibility with the convolution expansion above.
    g=s(2:end);
    c=co(2:end)/co(1);
    % Calculate the initial error vector of the coefficients. Note that
    % this vector compares two polynomials that have been normalised
    % to be monic.
    r=g-c;
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Start the loop for the iterative procedure.
    while delta_r>=er
        ite=ite+1; % increment the counter for the iterations
        % Break out of the loop if more than 50 iterations are needed.
        if ite>50
            break;
        end
        % Form the Jacobian matrix J.
        u=1;
        for i=1:1:m
            for j=1:1:b(i,2)-1
                u=conv(u,[1,-a(i)]);
            end
        end
        for i=1:1:m
            v=-b(i,2)*u;
            for j=1:1:m
                if j~=i
                    v=conv(v,[1,-a(j)]);
                end
            end
            J(:,i)=v'; % v' is equal to the ith column of J
        end
        % Form the least squares solution and then update the vector a.
        p=-inv(J'*J)*J'*r';
        a=a+p;
        % Compute the coefficients of the polynomial whose roots
        % are stored in the vector a.
        s=1;
        for i=1:1:m
            for j=1:1:b(i,2)
                s=conv(s,[1,-a(i)]);
            end
        end
        g=s(2:end);
        % Compute the residual. c contains the coefficients of the
        % given polynomial co.
        r2=g-c;
        % Compare the residuals r2 and r, and then update r.
        delta_r=norm(r2-r)/norm(r);
        r=r2;
    end % while loop for delta_r
else
    % come here if normdiff < er
end % if normdiff
Appendix 10 : deconvinLSM.m
function hx = deconvinLSM(fx)
% This function performs the deconvolutions
% h_{i}(x)=f_{i}(x)/f_{i+1}(x), i=1,...,d-1, and h_{d}(x)=f_{d}(x),
% where f_{i} are polynomials, using a linear structure preserving
% matrix method.
% fx : a cell that contains the polynomials f_{i}(x),
%      f(x)=[f_{1}(x),f_{2}(x),...,f_{d}(x)]
% hx : a cell that contains the polynomials h_{i}(x),
%      h(x)=[h_{1}(x),h_{2}(x),...,h_{d}(x)]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Calculate the total number of polynomials f_{i}.
d=length(fx);
% Define the degree mi of the polynomial f_{i}.
mi=zeros(1,d);
for k=1:1:d
    mi(k)=length(fx{k})-1;
end
% Calculate the degree ni of the polynomial h_{i}=f_{i}/f_{i+1}.
ni=zeros(1,d-1);
for k=1:1:d-1
    ni(k)=length(fx{k})-length(fx{k+1});
end
% Define the coefficient matrix C(f). The matrix is built up one row
% of blocks at a time. Each row is of the form
% [ zero_1 | cauchy matrix | zero_2 ], where the zero matrix zero_1 is
% empty for the first row, and the zero matrix zero_2 is empty for the
% last row.
Cf=zeros(0);
for k=1:1:d-1
    ck=cauchy(fx{k+1},ni(k));
    Oa=zeros(mi(k)+1,sum(ni(1:k-1)+1));
    Ob=zeros(mi(k)+1,sum(ni(k+1:end)+1));
    Cf=[Cf;Oa ck Ob]; % place the new row under the existing rows
end
% Initialise the vector z of perturbations added to the coefficients
% of the polynomials f_{i}.
zi=zeros(0);
z=zeros(sum(mi+1),1);
for k=1:1:d
    zi{k}=z(sum(mi(1:k-1)+1)+1:sum(mi(1:k)+1));
end
% Define the matrix Ez of structured perturbations added to C(f).
% Use the same method as above for the matrix Cf.
Ez=zeros(0);
for k=1:1:d-1
    zk=cauchy(zi{k+1},ni(k));
    Oa=zeros(mi(k)+1,sum(ni(1:k-1)+1));
    Ob=zeros(mi(k)+1,sum(ni(k+1:end)+1));
    Ez=[Ez;Oa zk Ob];
end
% The matrix P that is required for the corrected form of the
% right hand side vector.
P=[eye(sum(mi(1:d-1)+1)),zeros(sum(mi(1:d-1)+1),mi(d)+1)];
% Form the vector of the given polynomials f_{i}.
f=fx{1}';
for k=2:1:d-1
    f=[f;fx{k}'];
end
% Calculate the initial estimate of the solution h using least squares.
[u1,s1,v1]=svd(Cf);
for i=1:1:min(size(s1))
    s1(i,i)=1/s1(i,i);
end
h0=v1*s1'*u1'*f;
rk=f-Cf*h0; % the initial value of the residual
h=h0;
% Define the matrix Yh. Use the same method as above for the matrix Cf.
hi=zeros(0);
Yh=zeros(0);
for k=1:1:d-1
    hi{k}=h(sum(ni(1:k-1)+1)+1:sum(ni(1:k)+1));
    yk=cauchy(hi{k},mi(k+1));
    Oc=zeros(mi(k)+1,mi(1)+1);
    Od=zeros(mi(k)+1,sum(mi(2:k)+1));
    Oe=zeros(mi(k)+1,sum(mi(k+2:end)+1));
    Yh=[Yh;Oc Od yk Oe];
end
% Form (a) the coefficient matrix G and right hand side vector t of the
% equality constraint and (b) the coefficient matrix I and right hand
% side vector s of the function to be minimised, in the LSE problem.
G=[Cf+Ez Yh-P];
I=eye(sum(ni+1)+sum(mi+1));
h=zeros(size(h0));
s=-[h-h0;z];
t=rk;
% These two quantities are required for the solution of the LSE problem.
ek=f+P*z;
res=norm(rk)/norm(ek);
% Start the iterative solution of the LSE problem.
ite=0; % the counter for the number of iterations
while res>=1e-15
    ite=ite+1;
    if ite>50
        break; % exit the loop if more than 50 iterations are needed
    end
    % Solve the LSE problem by the QR decomposition.
    y=LSE(I,s,G,t);
    % Update h and z.
    delta_h=y(1:sum(ni+1));
    delta_z=y(sum(ni+1)+1:end);
    h=h+delta_h;
    z=z+delta_z;
    % Update the contents of the cell zi.
    for k=1:1:d
        zi{k}=z(sum(mi(1:k-1)+1)+1:sum(mi(1:k)+1));
    end
    % Update the matrix Ez.
    Ez=zeros(0);
    for k=1:1:d-1
        Oa=zeros(mi(k)+1,sum(ni(1:k-1)+1));
        Ob=zeros(mi(k)+1,sum(ni(k+1:end)+1));
        zk=cauchy(zi{k+1},ni(k));
        Ez=[Ez;Oa zk Ob];
    end
    % Update the matrix Yh.
    Yh=zeros(0);
    for k=1:1:d-1
        hi{k}=h(sum(ni(1:k-1)+1)+1:sum(ni(1:k)+1));
        yk=cauchy(hi{k},mi(k+1));
        Oc=zeros(mi(k)+1,mi(1)+1);
        Od=zeros(mi(k)+1,sum(mi(2:k)+1));
        Oe=zeros(mi(k)+1,sum(mi(k+2:end)+1));
        Yh=[Yh;Oc Od yk Oe];
    end
    % Recalculate the matrix G, and the vectors s and t.
    G=[Cf+Ez Yh-P];
    s=-[h-h0;z];
    t=rk;
    % Calculate the normalised residual.
    rk=f+P*z-(Cf+Ez)*h;
    ek=f+P*z;
    res=norm(rk)/norm(ek);
end % while
% Form the polynomials hx_{i}.
hx=zeros(0);
for k=1:1:d-1
    hx{k}=h(sum(ni(1:k-1)+1)+1:sum(ni(1:k)+1))';
end
hx{d}=fx{d};
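A minimal usage sketch, not part of the original listing, assuming cauchy and an LSE routine are on the path. For the chain f_1=(x-1)^2(x-2), f_2=(x-1)(x-2), f_3=(x-2), the deconvolutions give h_1=h_2=(x-1) and h_3=f_3.

fx = {conv(conv([1 -1],[1 -1]),[1 -2]), conv([1 -1],[1 -2]), [1 -2]};
hx = deconvinLSM(fx);
% hx{1} and hx{2} are approximately [1 -1], and hx{3} is [1 -2]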
Appendix 11 : KthSylvester.m
function s = KthSylvester(f,g,K)
% This function forms the Kth Sylvester subresultant matrix s of the
% polynomials whose coefficients are stored in the vectors f and g.
m=length(f)-1;
n=length(g)-1;
a=zeros(m+n-K+1,n-K+1);
b=zeros(m+n-K+1,m-K+1);
for k=1:1:n-K+1
    for h=k:1:m+k
        a(h,k)=f(h-k+1);
    end
end
for k=1:1:m-K+1
    for h=k:1:n+k
        b(h,k)=g(h-k+1);
    end
end
s=[a,b];
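A short usage sketch of KthSylvester (the polynomials below are illustrative and do not come from the text): for f = x^2 - 1 and g = x - 1, which share the root x = 1, the first subresultant matrix is singular.
% Illustrative example: f = x^2 - 1, g = x - 1.
f=[1 0 -1];
g=[1 -1];
S1=KthSylvester(f,g,1);
% S1 = [1 1 0; 0 -1 1; -1 0 -1]. rank(S1) = 2, so the rank loss of 1
% equals the degree of the GCD, x - 1.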
Appendix 12 : ColumnAngle
function col=ColumnAngle(fw_n,gw_n,alpha_opt)
% This function computes, for each subresultant matrix, the column c
% that is most nearly linearly dependent on the other columns. The
% first principal angle is used to determine the column c.
% fw_n      : the vector of the coefficients of the polynomial fw_n,
%             where the polynomial fw_n is of degree m
% gw_n      : the vector of the coefficients of the polynomial gw_n,
%             where the polynomial gw_n is of degree n
% alpha_opt : the optimal value of alpha
% col       : a vector of length min(m,n) that stores, for each
%             subresultant matrix, the index of the optimal column
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% alpha_opt may be omitted when alpha is already included in gw_n,
% as in the calls made by polyFactSNTLN and getAGCD_PolyFact.
if nargin < 3
    alpha_opt=1;
end
% Determine the degrees of the polynomials fw_n and gw_n.
m=length(fw_n)-1;
n=length(gw_n)-1;
% Initialise two vectors that are required later.
minangle=zeros(1,min(m,n));
col=zeros(1,min(m,n));
% Loop over the subresultant matrices.
for k=1:1:min(m,n)
    % Form the kth Sylvester subresultant matrix.
    Sk=KthSylvester(fw_n,alpha_opt*gw_n,k);
    [rn,cn]=size(Sk);
    angle=zeros(1,cn);
    % Loop over the columns of the kth subresultant matrix.
    for g=1:1:cn
        % Remove the gth column from Sk. Denote this column ck and
        % the remaining portion of Sk by Tk.
        Tk=[Sk(:,1:g-1),Sk(:,g+1:end)];
        ck=Sk(:,g);
        ck1=ck/norm(ck);
        % The columns of matrix Nk1 form an orthogonal basis for the
        % column space of Tk.
        [Nk1,Rk1]=qr(Tk,0);
        % The angle is calculated by considering the orthogonal
        % complements of the spaces spanned by ck1 and the
        % columns of Tk.
        [Ua,Sa,Va]=svd(Nk1);
        [ra,ca]=size(Nk1);
        Nk2=Ua(:,ca+1:ra);
        sigma1=svd(ck1'*Nk2);
        angle(g)=asin(sigma1);
    end
    [minangle(k),col(k)]=min(angle);
end
Appendix 13 : ColumnRes.m
function col=ColumnRes(fw_n,gw_n,alpha_opt)
% This function computes, for each subresultant matrix, the column c
% that is most nearly linearly dependent on the other columns. The
% residual of an approximate equation is used to determine the column c.
% fw_n      : the vector of the coefficients of the polynomial fw_n,
%             where the polynomial fw_n is of degree m
% gw_n      : the vector of the coefficients of the polynomial gw_n,
%             where the polynomial gw_n is of degree n
% alpha_opt : the optimal value of alpha
% col       : a vector of length min(m,n) that stores, for each
%             subresultant matrix, the index of the optimal column
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% alpha_opt may be omitted when alpha is already included in gw_n,
% as in the calls made by polyFactSNTLN and getAGCD_PolyFact.
if nargin < 3
    alpha_opt=1;
end
% Determine the degrees of fw_n and gw_n.
m=length(fw_n)-1;
n=length(gw_n)-1;
% Initialise two vectors that are required.
minres=zeros(1,min(m,n));
col=zeros(1,min(m,n));
% Loop over the subresultant matrices.
for k=1:1:min(m,n)
    % Calculate Sk, the kth subresultant matrix.
    Sk=KthSylvester(fw_n,alpha_opt*gw_n,k);
    [rn,cn]=size(Sk);
    res=zeros(1,cn);
    % Loop over the columns of each subresultant matrix.
    for g=1:1:cn
        % Remove the gth column from Sk. Denote this column ck and
        % the rest of Sk by Tk.
        Tk=[Sk(:,1:g-1),Sk(:,g+1:end)];
        ck=Sk(:,g);
        % Obtain the least squares solution of (Tk)(xo)=ck.
        [u1,s1,v1]=svd(Tk);
        for i=1:1:min(size(s1))
            s1(i,i)=1/s1(i,i);
        end
        xo=v1*s1'*u1'*ck;
        r=ck-Tk*xo;
        res(g)=norm(r);  % the residual of the solution
    end
    [minres(k),col(k)]=min(res);
end
Appendix 14 : LSE.m
function y=LSE(E,f,C,g)
% This program uses the QR decomposition to solve the LSE problem
%     min ||Ey-f|| subject to Cy=g.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[m1,p]=size(C);
[Q,R]=qr(C');  % the QR decomposition of C^{T}
R1=R(1:m1,:);
% Partition EQ as EQ=[E1 E2].
T=E*Q;
E1=T(:,1:m1);
E2=T(:,m1+1:p);
w=R1'\g;  % calculate w
[u1,s1,v1]=svd(E2);
for g=1:1:min(size(s1))
    s1(g,g)=1/s1(g,g);
end
invE2=v1*s1'*u1';
z=invE2*(f-E1*w);
y=Q*[w;z];  % the solution
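A hedged usage sketch of LSE (the data are illustrative): the function returns the point nearest to f on the plane defined by the equality constraint.
% Minimise ||y - f|| subject to sum(y) = 1 (illustrative data).
E=eye(3);
f=[1;2;3];
C=[1 1 1];
g=1;
y=LSE(E,f,C,g);
% y = f - (sum(f)-1)/3 = [-2/3; 1/3; 4/3], and C*y = 1.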
Appendix 15 : optimal_linprog.m
function [theta,alpha]=optimal_linprog(fx,gx)
% This procedure uses linear programming to calculate the
% optimal values of alpha and theta.
f=[1 -1 0 0];
m1=length(fx);
n1=length(gx);
Ta=zeros(m1,4);
Tb=zeros(n1,4);
Da=zeros(m1,4);
Db=zeros(n1,4);
for k=1:1:m1
    Ta(k,:)=[1,0,k-m1,0];
    Da(k,:)=[0,-1,m1-k,0];
end
for k=1:1:n1
    Tb(k,:)=[1,0,k-n1,-1];
    Db(k,:)=[0,-1,n1-k,1];
end
A=(-1)*[Ta;Tb;Da;Db];
b=[-log10(abs(fx)),-log10(abs(gx)),log10(abs(fx)),log10(abs(gx))]';
x=linprog(f,A,b);
theta=10^x(3);
alpha=10^x(4);
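An illustrative call (optimal_linprog requires the function linprog from the Optimization Toolbox; the coefficient vectors below are examples, not values from the text):
fx=[1 -3 2];      % an inexact polynomial
gx=polyder(fx);   % its derivative
[theta,alpha]=optimal_linprog(fx,gx);
% theta rescales the independent variable and alpha rescales g so that
% the coefficients of the two polynomials have comparable magnitudes.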
Appendix 16 : optimal_multipoly.m
function theta = optimal_multipoly(fx)
% This program uses linear programming to calculate the optimal
% value of theta when the deconvolution of several polynomials
% is performed simultaneously.
% fx    : a cell whose entries are the polynomials fx{i}, i=1,2,...
% theta : the optimal value of theta
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
f=[1 -1 0];
d=length(fx);  % the total number of polynomials
% Initialise some variables.
A1=zeros(0);
A2=zeros(0);
b1=zeros(0);
b2=zeros(0);
m=zeros(1,d);
for i=1:1:d
    m(i)=length(fx{i});
    for j=1:1:m(i)
        v1(j,:)=[1,0,j-m(i)];
        v2(j,:)=[0,-1,m(i)-j];
    end
    A1=[A1;v1];
    A2=[A2;v2];
    clear v1 v2
    b1=[b1;log10(abs(fx{i})')];
    b2=[b2;-log10(abs(fx{i})')];
end
A=(-1)*[A1;A2];
b=(-1)*[b1;b2];
x=linprog(f,A,b);
theta=10^x(3);
Appendix 17 : root_in2
function y = root_in2(a)
% This function reads in the matrix a, and returns the vector y, which
% contains the roots of the polynomial, repeated according to their
% multiplicities. The vector y is required in the MATLAB function poly.
% Example: If
%          [ -1.5  3
%      a =    1.4  2
%             2.7  4 ]
% then
%      y = [-1.5 -1.5 -1.5 1.4 1.4 2.7 2.7 2.7 2.7]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
n=size(a);
b=n(1);  % the number of distinct roots
m=1;
d=0;
for k=1:1:b  % loop for the distinct roots
    for g=1:1:a(k,2)  % loop for the multiplicity of each root
        y(m)=a(k,1);
        m=m+1;
    end
    d=d+a(k,2);  % the degree of the polynomial f(x)
end
Appendix 18: RootSolver - Method 2
function RootSolver(Ex,mue,Threshold,GCD1_method,bestCol,Type)
% This programme calculates the roots of a polynomial, where it is
% assumed that the theoretically exact form of the given inexact
% polynomial has multiple roots. The method of structured non-linear
% total least norm applied to the Sylvester resultant matrix is used.
% Madina Hasan and Joab Winkler, August 2010
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Ex          : The number of the polynomial in the file samplepolyr2.m
% mue         : The ratio (noise amplitude)/(signal amplitude), where
%               the noise is applied in the componentwise sense
% Threshold   : The stopping threshold in the least squares problem
%               with an equality constraint (the LSE problem)
% GCD1_method : This parameter is equal to 1 or 2, depending upon the
%               method used for the approximate greatest common divisor
%               (AGCD) computations:
%               1 : The Sylvester matrix is used for all the AGCD
%                   computations
%               2 : Approximate polynomial factorisation is used for
%                   the first AGCD computation, and the Sylvester
%                   matrix is used for all the other AGCD computations
% bestCol     : This parameter is equal to 1 or 2, depending upon the
%               method to be used for the calculation of the optimal
%               column for the computation of a structured low rank
%               approximation of the Sylvester resultant matrix:
%               1 : The optimal column is chosen using the first
%                   principal angle (angle between subspaces)
%               2 : The optimal column is chosen using the residual
%                   of an approximate linear algebraic equation
% Type        : This parameter is equal to 1 or 2, depending upon the
%               method used to calculate the multiplicities of the
%               roots:
%               1 : The multiplicities are computed
%               2 : The multiplicities are exact
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
warning off
% Define the theoretically exact form of the polynomial.
[a] = samplepolyr2(Ex);
% Rearrange the roots in descending multiplicity, and if two or
% more roots have the same multiplicity, arrange the roots in
% ascending order.
a=sortrows(a,[-2 1]);
rs=size(a(:,1));  % the number of distinct roots
% Write the roots and the multiplicities to the screen.
disp(' ');
disp('      exact root        multiplicity');
disp(' ');
for i=1:1:rs
    fprintf('% 16.8e %10.0f\n',a(i,:));
end
disp(' ');
disp(' ');
% Calculate the degree of the polynomial, and the degree of the GCD
% of it and its derivative.
degree_f = sum(a(:,2));
ErankL = degree_f-size(a,1);
% Form the polynomial fx from the matrix a.
fx=creatPoly(a);
m=length(fx)-1;  % the degree of fx
% Add noise to the given polynomial f and then differentiate.
rand('seed',2)
rf=2*rand(1,m+1)-ones(1,m+1);
f=fx+fx.*rf*mue;
givenPoly=f;
g=polyder(f);  % the derivative of f
n=m-1;  % the degree of g
% Normalise f and g by the geometric means of their coefficients.
f=geomecoeff(f);
g=geomecoeff(g);
% Calculate the initial values of alpha and theta for the method
% of SNTLN. These values are alpha0 and theta0.
[theta0,alpha0]=optimal_linprog(f,g);
alpha=alpha0;
theta=theta0;
% Transform the polynomials f and g from the variable y to the
% variable w.
m_vect=m:-1:0;
n_vect=n:-1:0;
fw=f.*(theta0.^m_vect);  % the vector of coefficients of f(w)
gw=g.*(theta0.^n_vect);  % the vector of coefficients of g(w)
% The polynomials fwr and gwr are required for the rank computation.
fwr=fw;
gwr=gw;
% Normalise fw and gw by the geometric means of their coefficients.
[fw,fscaler] = GMnorm_denoscaler(fw);
[gw,gscaler] = GMnorm_denoscaler(gw);  % alpha is not included
% fscaler is the reciprocal of the geometric mean of the coefficients
% of the entry vector fw.
% gscaler is the reciprocal of the geometric mean of the coefficients
% of the entry vector gw.
% On exit, fw is normalised by the geometric mean of its coefficients.
% On exit, gw is normalised by the geometric mean of its coefficients.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% These polynomials are used in the iterative scheme for the
% calculation of the AGCD.
f_bar=fscaler.*f;
g_bar=gscaler.*g;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Calculate the degree of the AGCD of fw and gw.
% ABSRC3_info provides the information required to estimate the rank of
% the Sylvester matrix of fwr and gwr. givenPoly is the given inexact
% polynomial in the y variable, and fwr and gwr are the inexact
% polynomial and its derivative in the w variable, after normalisation
% by the geometric means of their coefficients.
% The matrix C3_errorM contains the error measure using Method 3.
[minangles,min_residuals,q_column,q_col,C3_errorM]=...
    ABSRC3_info(givenPoly,fwr,alpha0*gwr,alpha0,theta0);
% Calculate the gradient of the entries in minangles and determine
% the index at which the gradient is a maximum. This is equal to the
% degree of the AGCD. The same calculation can be performed with the
% variable min_residuals.
Gradient = zeros(length(minangles),1);
for i=1:1:length(minangles)-1
    Gradient(i)=log10(minangles(i+1))-log10(minangles(i));
end
[Angvalue,Crankloss]=max(Gradient);
% The rank loss (the degree of the AGCD of f and g) is computed if
% Type=1, and it is defined exactly if Type=2.
if Type==1
    optRankL=Crankloss;
else
    optRankL=ErankL;
end
k=optRankL;  % the degree of the AGCD of f and g
q=q_col(optRankL);  % the optimal column using the residual criterion
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Print out the exact rank loss and the computed rank loss.
disp(' ')
disp(' ')
disp(' Degree of the GCD of the exact polynomial and its derivative:')
disp(ErankL)
disp(' Degree of the AGCD of the inexact polynomial and its derivative:')
disp(Crankloss)
disp(' ')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Plot the graphs for the calculation of the degree of an AGCD
% of f and g.
x=1:1:min(m,n);
figure(1)
subplot(2,2,1)  % Method 1 (first principal angle)
% The LineWidth value was lost in the source listing; 2 is assumed here
% and in the subplots below.
plot(x,log10(minangles),'-ko','LineWidth',2,...
    'MarkerEdgeColor','r',...
    'MarkerFaceColor','r',...
    'MarkerSize',6)
xlabel('\it k','FontSize',16)
ylabel('log_{10}\it \phi_{k}','FontSize',16)
subplot(2,2,2)  % Method 2 (residual of an approximate equation)
plot(x,log10(min_residuals),'-ko','LineWidth',2,...
    'MarkerEdgeColor','r',...
    'MarkerFaceColor','r',...
    'MarkerSize',6)
xlabel('\it k','FontSize',16)
ylabel('log_{10}\it r_{k}','FontSize',16)
% The next two subplots are for Method 3.
C3_error=C3_errorM(1,:);  % the error using the first principal angle
subplot(2,2,3)
plot(x,log10(C3_error),'-ko','LineWidth',2,...
    'MarkerEdgeColor','r',...
    'MarkerFaceColor','r',...
    'MarkerSize',6)
xlabel('\it k','FontSize',16)
ylabel('log_{10}\it error_{k}','FontSize',16)
% The next two lines place a blue star at the correct value of k.
% Remove the comments if the star is to be drawn.
% hold on
% plot(ErankL,log10(C3_error(ErankL)),'b*');
C3_error=C3_errorM(2,:);  % the error using the residual
subplot(2,2,4)
plot(x,log10(C3_error),'-ko','LineWidth',2,...
    'MarkerEdgeColor','r',...
    'MarkerFaceColor','r',...
    'MarkerSize',6)
xlabel('\it k','FontSize',16)
ylabel('log_{10}\it error_{k}','FontSize',16)
% The next two lines place a blue star at the correct value of k.
% Remove the comments if the star is to be drawn.
% hold on
% plot(ErankL,log10(C3_error(ErankL)),'b*');
% end of plot
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Compute the first AGCD, which is between f and its derivative.
% The degree of the AGCD and the index of the optimal column are
% optRankL and q, respectively.
% GCD1_method=1 : First AGCD is computed using the Sylvester matrix
% GCD1_method=2 : First AGCD is computed using approximate polynomial
%                 factorisation.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
if GCD1_method==1
    % Start the iterative scheme and the method of SNTLN to calculate
    % the AGCD of f and its derivative.
    % Form the kth Sylvester subresultant matrix, the matrix Ak and the
    % vector Ck, where k=optRankL.
    [Ak,Ck]=KthSylvester(fw,alpha0*gw,optRankL,q);
    % Solve (Ak)(xk)=Ck and calculate the residual r.
    x=Ak\Ck;
    x0=x;
    r=(Ck-Ak*x);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Initialise some variables for the method of SNTLN.
    % The structured perturbations of the coefficient matrix.
    z=zeros(m+n+2,1);
    % The perturbations of the right hand side vector.
    hk=zeros(size(Ck));
    % The derivative of hk with respect to theta.
    hk_dt=hk;
    % The derivative of hk with respect to alpha.
    dh_da=hk;
    % The error matrix.
    Ek=zeros(size(Ak));
    % The derivative of the error matrix with respect to alpha.
    Ek_da=Ek;
    % The derivative of the error matrix with respect to theta.
    Ek_dt=Ek;
    % Initialise the derivatives of Ak and Ck with respect to alpha.
    Ak_da=Creat_dAkda(n,k,q,alpha0,Ak);
    dck_da=Creat_dckda(n,k,q,alpha0,Ck);
    % Initialise the derivatives of f and g with respect to theta.
    df_dt=Creat_dt(fw,theta0,m);
    dg_dt=Creat_dt(alpha0*gw,theta0,n);
    % Form the derivatives of Ak and Ck with respect to theta.
    [Ak_dt,Ck_dt]=KthSylvester(df_dt,dg_dt,k,q);
    % Form the matrices Y and P from the vector x.
    Y=Creat_Y_k(m,n,k,q,x,alpha0,theta0);
    P=Creat_P(m,n,k,q,theta0);
    % Form the y vector.
    y=[z;x;alpha;theta];
    % Define the initial value of the threshold in the method of SNTLN
    % for stopping the iterative procedure. This variable is set to a
    % large value so that SNTLN will be executed once, but the value is
    % changed later.
    TH=3;
    % The counter for the number of iterations.
    ite_count=0;
    % The scale factor alpha multiplies the polynomial g, and thus if
    % the optimal column contains the coefficients of f, alpha is
    % equal to one.
    if q < (n-k+2)
        alphaq=1;
    else
        alphaq=alpha;
    end
    % Form the matrix C.
    H_z=Y-alphaq*P;
    H_x=Ak+Ek;
    H_a=(Ak_da+Ek_da)*x-(dck_da+dh_da);
    H_t=(Ak_dt+Ek_dt)*x-(Ck_dt+hk_dt);
    C=[H_z H_x H_a H_t];
    % Initialise the matrix E and right hand side vector s of the
    % function to be minimised.
    E=eye(2*m+2*n-2*k+5);
    s=-[z;x-x0;alpha-alpha0;theta-theta0];
    while (TH>Threshold)  % loop for iterative solution of LSE problem
        % Terminate the loop if the number of iterations
        % is greater than 100.
        if ite_count>100
            break
        end
        % Increment the counter for the number of iterations.
        ite_count=ite_count+1;
        % Solve the LSE problem and update the solution.
        y_lse=LSE(E,s,C,r);
        y=y+y_lse;
        % Calculate the parts of y that define z, x, alpha and theta.
        z=y(1:m+n+2);
        x=y(m+n+3:2*m+2*n-2*k+3);
        alpha=y(2*m+2*n-2*k+4);
        theta=y(2*m+2*n-2*k+5);
        % Update the vector s.
        s=-[z;x-x0;alpha-alpha0;theta-theta0];
        % Update all the variables before the next iteration.
        [C,r,TH,telda_f,telda_g]=...
            LSE_Updating(z,x,alpha,theta,f_bar,g_bar,k,q);
        % telda_f and telda_g are the corrected forms of the given
        % inexact polynomials f and g, respectively. The polynomials
        % telda_f and telda_g are not normalised by the geometric
        % means of their coefficients, and alpha is included in telda_g.
    end  % while
    % Draw graphs of the normalised singular values of some pairs
    % of polynomials.
    figure(2)
    % (1) The normalised singular values of S(f,g), the Sylvester
    % matrix of the given inexact polynomials.
    [xAxis,yAxis]=Draw_SV(f,g);
    plot(xAxis,yAxis,'-kd','MarkerEdgeColor','b',...
        'MarkerFaceColor','b','MarkerSize',6);
    hold on
    % (2) The normalised singular values of S(telda_f,telda_g), the
    % Sylvester matrix of the corrected polynomials.
    % rd is the rank of the Sylvester matrix of the theoretically exact
    % forms of f and g; its rank loss, ErankL, is the degree of the GCD
    % of those exact forms.
    rd=m+n-ErankL;
    [xAxis,yAxis,yValue]=...
        Draw_SV_I(geomecoeff(telda_f),alpha*geomecoeff(telda_g),rd);
    xlabel('i')
    ylabel('log_{10} \sigma_{\it i} / \sigma_{\it 1}','FontSize',14)
    plot(xAxis,yAxis,'-ko','MarkerEdgeColor','r',...
        'MarkerFaceColor','r','MarkerSize',6);
    % (3) Place a black square at the last non-zero singular value.
    hold on
    plot(rd,yValue,'ks','MarkerFaceColor','k','MarkerSize',8);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Compute telda_GCD, the GCD of the corrected polynomials using the
    % Sylvester matrix. telda_f and alpha*telda_g are the corrected
    % polynomials, which are normalised by the geometric means of their
    % coefficients. The input variable k is the degree of telda_GCD.
    telda_GCD=getAGCD_PolyFact(geomecoeff(telda_f),...
        alpha*geomecoeff(telda_g),k);
else  % come here if GCD1_method=2
    % Compute the AGCD of the given inexact polynomial f and its
    % derivative g, using approximate polynomial factorisation.
    % f and g are expressed in the variable y; fwr and gwr are the same
    % as f and g respectively, but they are expressed in the variable w.
    [telda_f,telda_GCD]=...
        polyFactSNTLN(f,g,fwr,gwr,theta0,alpha0,optRankL,bestCol);
end  % if GCD1_method=1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Thus far, the first AGCD computation has been performed. The
% function squareFree2 performs the other AGCD computations.
[ca,ca2]=squareFree2(optRankL,telda_f,telda_GCD,...
    theta,Threshold,bestCol,Type,a);
% ca are the initial root estimates using simple deconvolution.
% ca2 are the initial root estimates using a structure preserving
% matrix method for the deconvolution.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Print the results. Three sets of results are printed:
% Result set 1: The roots before refinement by non-linear least squares.
%               Deconvolution is performed using division by least squares.
% Result set 2: The roots after refinement by non-linear least squares.
%               Deconvolution is performed using division by least squares.
% Result set 3: The roots after refinement by non-linear least squares.
%               Deconvolution is performed using structured matrix methods.
% Result set 1
%%%%%%%%%%%%%%
disp(' ');
disp(' ');
lines_1 = '----------------------------------';
lines_2 = '----------------------------------';
alllines = [lines_1,lines_2];
disp(alllines);
disp(alllines);
disp(' ');
disp('RESULT SET 1:');
disp(' ');
disp(' (a) Roots before refinement by non-linear least squares');
disp(' (b) Deconvolution is performed using the least squares solution');
disp(' ');
titlestg_1 = 'multiplicity      ';
titlestg_2 = 'exact root        ';
titlestg_3 = 'computed root     ';
titlestg_4 = 'error';
alltitle = [titlestg_1,titlestg_2,titlestg_3,titlestg_4];
disp(alltitle);
disp(alllines);
disp(' ');
sizea=size(a);
rs=sizea(1);
% Calculate the relative error of each root.
d=abs(a(:,1)-ca')./abs(a(:,1));
for i=1:1:rs
    fprintf(' %7.0f %22.8e %18.8e %18.8e\n',...
        a(i,2),a(i,1),ca(i),d(i));
end
% Result set 2
% Refine the roots using non-linear least squares and calculate
% the relative error of each root.
z=refineRoots(f,ca',a(:,2));
error_NLLroots=abs(z-a(:,1))./abs(a(:,1));
disp(' ');
disp(' ');
lines_1 = '----------------------------------';
lines_2 = '----------------------------------';
alllines = [lines_1,lines_2];
disp(alllines);
disp(alllines);
disp(' ');
disp('RESULT SET 2:');
disp(' ');
disp(' (a) Roots after refinement by non-linear least squares');
disp(' (b) Deconvolution is performed using the least squares solution');
disp(' ');
titlestg_1 = 'multiplicity      ';
titlestg_2 = 'exact root        ';
titlestg_3 = 'computed root     ';
titlestg_4 = 'error';
alltitle = [titlestg_1,titlestg_2,titlestg_3,titlestg_4];
disp(alltitle);
disp(alllines);
disp(' ');
sizea=size(a);
rs=sizea(1);
for i=1:1:rs
    fprintf(' %7.0f %22.8e %18.8e %18.8e\n',...
        a(i,2),a(i,1),z(i),error_NLLroots(i));
end
% Result set 3
% This is the same as Result set 2 above, but deconvolution is performed
% using structured matrix methods. Thus ca2 is used instead of ca.
z2=refineRoots(f,ca2',a(:,2));
error_NLLroots2=abs(z2-a(:,1))./abs(a(:,1));
disp(' ');
disp(' ');
lines_1 = '----------------------------------';
lines_2 = '----------------------------------';
alllines = [lines_1,lines_2];
disp(alllines);
disp(alllines);
disp(' ');
disp('RESULT SET 3:');
disp(' ');
disp(' (a) Roots after refinement by non-linear least squares');
disp(' (b) Deconvolution is performed using structured matrix methods');
disp(' ');
titlestg_1 = 'multiplicity      ';
titlestg_2 = 'exact root        ';
titlestg_3 = 'computed root     ';
titlestg_4 = 'error';
alltitle = [titlestg_1,titlestg_2,titlestg_3,titlestg_4];
disp(alltitle);
disp(alllines);
disp(' ');
sizea=size(a);
rs=sizea(1);
for i=1:1:rs
    fprintf(' %7.0f %22.8e %18.8e %18.8e\n',...
        a(i,2),a(i,1),z2(i),error_NLLroots2(i));
end
disp(' ')
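An illustrative top-level call of RootSolver (the argument values are examples only):
% Roots of test polynomial 1 from samplepolyr2, with componentwise
% noise 1e-8, LSE stopping threshold 1e-10, the Sylvester matrix for
% the first AGCD computation, the residual criterion for the optimal
% column, and computed multiplicities.
RootSolver(1,1e-8,1e-10,1,2,1);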
Appendix 19: samplepolyr2
function [a] = samplepolyr2(N)
% This procedure defines the multiplicities and roots of polynomials.
% Each polynomial is defined by a matrix a with two columns, where the
% first column of a stores the root and the second column of a stores
% the multiplicity of the root.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
switch N
    case 1
        a =[
            8.482051221095027e+000   3.000000000000000e+000
            4.996692656288282e-002   3.000000000000000e+000
        ];
    case 2
        a =[
            -9.333727837377303e+000  6.000000000000000e+000
            -7.647877657393483e+000  6.000000000000000e+000
            -4.773934602373629e+000  3.000000000000000e+000
        ];
    case 3
        a =[
            3.234008738298687e+000   3.000000000000000e+000
            8.784263127419138e+000   1.000000000000000e+000
            -9.195092514700846e+000  5.000000000000000e+000
            -6.726830018999149e+000  3.000000000000000e+000
        ];
    case 4
        a =[
            -4.753364439368538e+000  3.000000000000000e+000
            7.650824691654272e+000   6.000000000000000e+000
            -7.938778561850812e+000  4.000000000000000e+000
        ];
    case 5
        a =[
            4.964272538928450e+000   3.000000000000000e+000
            -3.781296809024626e+000  4.000000000000000e+000
            -9.845894304656929e+000  5.000000000000000e+000
        ];
    case 6
        a =[
            -7.546273965465264e+000  4.000000000000000e+000
            6.270964494638449e+000   5.000000000000000e+000
            9.552087050332720e+000   4.000000000000000e+000
        ];
    case 7
        a =[
            9.091522155438575e+000   5.000000000000000e+000
            6.291289184491447e+000   5.000000000000000e+000
            2.465055668048352e+000   1.000000000000000e+001
            -3.434624992037798e+000  1.000000000000000e+000
        ];
    case 8
        a =[
            -9.574742463075101e+000  6.000000000000000e+000
            -4.203965196733453e+000  1.000000000000000e+001
            8.629576625462189e+000   5.000000000000000e+000
        ];
    case 9
        a =[
            -4.035120562244723e+000  6.000000000000000e+000
            -9.072974702036385e+000  9.000000000000000e+000
        ];
    case 10
        a =[
            -1.847616059176948e+000  6.000000000000000e+000
            6.399624455638811e+000   4.000000000000000e+000
            4.367178864117673e+000   2.000000000000000e+000
        ];
    case 11
        a =[
            5.368680652283363e+000   1.000000000000000e+000
            6.662528541336066e-001   1.100000000000000e+001
        ];
    case 12
        a =[
            1.905858966894636e+000   5.000000000000000e+000
            1.222151023175560e-001   4.000000000000000e+000
            5.527985105556258e+000   4.000000000000000e+000
        ];
    case 13
        a =[
            4.248296115790433e+000   6.000000000000000e+000
            -9.666505741195351e+000  4.000000000000000e+000
            6.018417641156965e+000   5.000000000000000e+000
        ];
    case 14
        a =[
            4.107585592918303e+000   4.000000000000000e+000
            -6.389034926683146e+000  2.000000000000000e+000
        ];
    case 15
        a =[
            5.668388608785575e+000   4.000000000000000e+000
            9.448001194244913e+000   5.000000000000000e+000
        ];
    case 16
        a =[
            -4.828354971624553e+000  7.000000000000000e+000
            7.957313696805983e+000   9.000000000000000e+000
            1.867237207722178e+000   6.000000000000000e+000
            7.680163693695796e-002   3.000000000000000e+000
        ];
    case 17
        a =[
            7.1943   6.0000
            3.5545  10.0000
            6.1168   2.0000
        ];
    case 18
        a =[
            6.4763    1.0000
            -7.4780  10.0000
            -3.9977   8.0000
        ];
    case 19
        a =[
            -2.0088  0000
            3.1063   0000
            1.3369   0000
            -9.9376  0000
        ];
    case 20
        a =[
            0.0448    2.0000
            -8.6866  10.0000
            -5.3992   2.0000
        ];
    case 21
        a =[
            3.6222   10.0000
            -6.7495  10.0000
            -1.2621   9.0000
            -3.8036   6.0000
        ];
    case 22
        a =[
            -0.5926  11.0000
            2.6760    8.0000
            0.6290    5.0000
            -9.7181   4.0000
        ];
    case 23
        a =[
            2.8015   9.0000
            5.7511   8.0000
        ];
    case 24
        a =[
            5.5393   0000
            7.2900   0000
            -3.3284  0000
        ];
    case 25
        a =[
            -3.7823  10.0000
            6.2527   11.0000
        ];
    case 26
        a =[
            9.5736   6.0000
            4.2539   5.0000
        ];
    case 27
        a =[
            -3.137078629492169e+000  1.000000000000000e+001
            -8.731543254808678e+000  3.000000000000000e+000
        ];
    case 28
        a =[
            6.184065303756388e-001   7.000000000000000e+000
            7.177726894420271e+000   4.000000000000000e+000
            -1.073083237811010e+000  2.000000000000000e+000
        ];
    case 29
        a =[
            -9.293458759455689   7.000000000000000
            4.838629828225183    2.000000000000000
            2.851522980654391    8.000000000000000
            5.546735858333641    4.000000000000000
        ];
    case 30
        a =[
            5.630453026185098e+000   5.000000000000000e+000
            6.973556663133948e+000   9.000000000000000e+000
            -9.824523942258949e-001  7.000000000000000e+000
            2.106505354673420e+000   7.000000000000000e+000
        ];
    case 31
        a =[
            2.000803571194785   6.000000000000000
            7.505621070743363   5.000000000000000
            6.973335983684910   7.000000000000000
        ];
    case 32
        a =[
            7.444710078431958  10.000000000000000
            -8.956156951421981  8.000000000000000
            -5.606373771893079  5.000000000000000
            -0.807159797772670  4.000000000000000
        ];
    case 33
        a =[
            3.235523935570026  10.000000000000000
            -7.648690307564873 10.000000000000000
        ];
    case 34
        a =[
            1  2
            2  1
            3  1
        ];
    case 35
        a =[
            9.189848527858061   9.000000000000000
            3.114813983131737   7.000000000000000
            -9.285766428516208  8.000000000000000
            6.982586117375542   8.000000000000000
        ];
    case 36
        a =[
            8.343873276596201   8.000000000000000
            -4.283219623592530  4.000000000000000
            5.144004582214425   6.000000000000000
        ];
    case 37
        a =[
            -2.082659748127540e-001  2.000000000000000e+000
            2.125632608691156e+000   7.000000000000000e+000
            4.353746205207930e+000   3.000000000000000e+000
        ];
    case 38
        a =[
            -5.022326209738591e+000  7.000000000000000e+000
            8.706647618293253e+000   8.000000000000000e+000
            -1.021961971306272e+000  5.000000000000000e+000
        ];
    case 39
        a =[
            -2.063862097976705e+000  7.000000000000000e+000
            -3.221053514208201e+000  1.000000000000000e+001
            8.453855136794594e+000   3.000000000000000e+000
            6.768052175123522e+000   2.000000000000000e+000
        ];
    case 40
        a =[
            8.497180251016367e+000   1.000000000000000e+001
            -4.638266590784683e+000  6.000000000000000e+000
            1.787946126930894e+000   4.000000000000000e+000
        ];
    case 41
        a =[
            -8.1802  3.0000
            3.4767   8.0000
            0.2976   2.0000
        ];
    case 42
        a =[
            0.5375   0000
            -1.6640  0000
            3.1372   0000
        ];
    case 43
        a =[
            -0.3104  2.0000
            -6.9631  4.0000
            5.6386   3.0000
        ];
    case 44
        a =[
            -6.5914   7.0000
            -4.8442   5.0000
            -2.0640  10.0000
            -8.5201   5.0000
        ];
    case 45
        a =[
            7.7072  2.0000
            7.9801  3.0000
            2.5188  3.0000
        ];
    case 46
        a =[
            -1.9984  2.0000
            6.6374   2.0000
            -7.3132  4.0000
            -8.7907  4.0000
        ];
    case 47
        a =[
            0.3396   0000
            -6.5790  0000
            8.7712   0000
        ];
    case 48
        a =[
            1.0162  0000
            7.4180  0000
        ];
    case 49
        a =[
            -4.9198  5.0000
            -1.3756  3.0000
            4.0506   9.0000
        ];
    case 50
        a =[
            -9.293458759455689  7.000000000000000
            4.838629828225183   2.000000000000000
            2.851522980654391   8.000000000000000
            5.546735858333641   4.000000000000000
        ];
    case 51
        a =[
            5.8857e+000   OOOOe+000
            -3.7757e+000  OOOOe+000
            5.7066e-001   OOOOe+000
            -6.6870e+000  OOOOe+000
        ];
    case 52
        a =[
            -3.6385e+000  OOOOe+000
            -7.6157e+000  OOOOe+000
            8.7966e+000   OOOOe+000
            2.9110e+000   OOOOe+000
            -4.1074e-001  OOOOe+000
        ];
    case 53
        a =[
            -3  4.OOOOe+000
            OOOOe+000
            2   OOOOe+000
            4   OOOOe+000
        ];
    case 54
        a =[
            1  4.0000e+000
            3  3.0000e+000
            5  2.0000e+000
        ];
    case 55
        a =[
            2  6.0000e+000
            7  3.0000e+000
            4  4.0000e+000
        ];
    case 56
        a =[
            -5  2.0000e+000
            6   3.0000e+000
            2   4.0000e+000
            2   6
            1   1
        ];
    case 57
        a =[
            5  1.0000e+000
            6  2.0000e+000
            2  6.0000e+000
            4  3
        ];
    case 58
        a =[
            -5  1.0000e+000
            6   2.0000e+000
            2   6.0000e+000
            -1  4
            -4  3
        ];
    case 59
        a =[
            -8  1.0000e+000
            7   2.0000e+000
            -2  5.0000e+000
            2   6
        ];
    case 60
        a =[
            9.293458759455689  1.000000000000000
            4.838629828225183  3.000000000000000
            5.150430786959096  2.000000000000000
        ];
    case 61
        a =[
            -2  5.000000000000000
            2   7.000000000000000
            1   6.000000000000000
            3   3
            9   1
        ];
    case 62
        a=[
            3.8966   1
            -3.6580  2
            9.0044   3
            -9.3111  4
            -1.2251  5
            -2.3688  6
            5.31303  7
        ];
    case 63
        a=[
            1.3429   10
            -7.8313   8
            -9.2777   7
            2.3618    6
            4.2393    3
        ];
    case 64
        a =[
            2.063862097976705e+000   1.000000000000000e+001
            -3.221053514208201e+000  7.000000000000000e+000
            8.453855136794594e+000   2.000000000000000e+000
            6.768052175123522e+000   3.000000000000000e+000
        ];
    case 65
        a =[
            -9.293458759455689  7.000000000000000
            4.838629828225183   2.000000000000000
            2.851522980654391   8.000000000000000
            5.546735858333641   4.000000000000000
        ];
    case 66
        a =[
            5.368680652283363e+000   1.000000000000000e+001
            6.662528541336066e-001   9.000000000000000e+000
            -8.632395671615029e+000  3.000000000000000e+000
            -6.008315356528229e+000  5.000000000000000e+000
        ];
    case 67
        a=[
            1  30
            2  18
            3  12
        ];
    case 68
        a=[
            1  10
            -2  8
            3   6
            4   1
        ];
    case 69
        a=[
            1  2
            4  1
        ];
    case 70
        a=[
            2.3  2
            5.6  1
        ];
    case 71
        a=[
            5   3
            10  2
        ];
    case 72
        a=[
            5   4
            10  3
        ];
    case 73
        a=[
            1   3
            -6  2
            2   1
        ];
    case 74
        a=[
            1  3
            2  2
            3  1
        ];
    case 75
        a=[
            2   5
            -6  2
            4   1
        ];
    case 76
        a=[
            1   5
            -2  4
            3   3
            4   1
        ];
    case 77
        a=[
            1  3
            4  2
        ];
    case 78
        a=[
            -2  3
            3   2
            4   1
        ];
    case 79
        a=[
            4.3517  2
            4.5673  1
        ];
    case 80
        a=[
            -2  4
            5   3
            3   2
            1   1
        ];
    case 81
        a=[
            1   3
            -2  2
            3   1
        ];
    case 82
        a=[
            2  5
            3  3
            6  2
            4  1
        ];
    case 83
        a=[
            2  5
            4  1
        ];
    case 84
        a=[
            6  7
            2  5
            4  1
        ];
    case 85
        a=[
            6  4
            2  3
            4  2
        ];
    case 86
        a=[
            5  3
            2  1
            8  1
        ];
    case 87
1
1
5 3
4 2
7.5
-2 1
            8  1
        ];
    case
        a=[
            10  12
            5   10
        ];
    case 90
        a =[
            1.1657   9.0000
            -7.0225  9.0000
            1.9774   8.0000
            -0.9921  7.0000
            -5.8866  7.0000
            7.9943   4.0000
        ];
    case 91
        a =[
            5.9792   3.0000
            7.8095   3.0000
            4.6868   2.0000
            -8.5423  2.0000
            -0.2954  1.0000
            -8.9734  1.0000
        ];
    case 92
        a=[
            1  15
            2  10
            3  10
            4   5
        ];
    case 93
        a =[
            0.9296   6.0000
            -2.5188  6.0000
            -9.7269  6.0000
            8.4537   4.0000
            -0.0693  3.0000
            -0.5223  3.0000
            -3.8206  2.0000
        ];
    case 94
        a =[
            -2.6562  5.0000
            0.6525   3.0000
            1.0777   3.0000
            1.5785   3.0000
            3.6013   3.0000
            -5.2142  3.0000
            7.3377   3.0000
            -1.8645  2.0000
            -7.7477  2.0000
        ];
    case 95
        a =[
            -5.8211  9.0000
            -0.6306  8.0000
            3.5078   8.0000
            8.1031   2.0000
        ];
    case 96
        a =[
            -8.7907  6.0000
            -1.9984  4.0000
            6.6374   4.0000
            4.4470   3.0000
            9.0183   2.0000
            -7.3132  1.0000
        ];
    case 97
        a =[
            1.7404   11.0000
            -8.4807   9.0000
            3.0024    4.0000
            9.4967    4.0000
            -1.7223   3.0000
        ];
    case 98
        a =[
            -9.6084  13.0000
            3.6683    7.0000
            -2.1059   6.0000
            4.0809    5.0000
            -1.1539   4.0000
        ];
    case 99
        a =[
            1.7054e+000   6.0000e+000
            -3.1923e+000  5.0000e+000
            -6.7478e+000  5.0000e+000
            -3.2719e-002  OOOOe+000
            9.1949e+000   OOOOe+000
            3.1020e+000   OOOOe+000
            -7.6200e+000  OOOOe+000
        ];
    case 100
        a =[
            -8.7907  9.0000
            -1.9984  4.0000
            6.6374   4.0000
            4.4470   3.0000
            9.0183   2.0000
            -7.3132  1.0000
        ];
    case 101
        a =[
            -5.8211  9.0000
            -0.6306  8.0000
            3.5078   8.0000
            8.1031   6.0000
            9.2281   2.0000
            2.6650   1.0000
        ];
    case 102
        a=[
            4.8252e+000   5.0000e+000
            -7.0001e+000  4.0000e+000
            1.7218e+000   3.0000e+000
            -3.0457e+000  3.0000e+000
            -2.1309e+000  2.0000e+000
            4.0105e-001   2.0000e+000
            3.4286e+000   1.0000e+000
        ];
    case 103
        a =[
            -4.2031e+000  OOOOe+000
            -1.2866e+000  OOOOe+000
            -3.5314e+000  OOOOe+000
            7.2748e+000   OOOOe+000
            8.8544e+000   lOOOe+001
        ];
    case 104
        a =[
            6.4292e+000   OOOOe+000
            -5.2243e+000  OOOOe+000
            1.4379e+000   OOOOe+000
            7.3540e+000   OOOOe+000
            -9.2810e-001  OOOOe+000
        ];
    case 105
        a=[
            -7.5947e+000  6.0000e+000
            6.3371e-001   5.0000e+000
            1.4923e+000   5.0000e+000
            5.4862e+000   4.0000e+000
            -3.3076e+000  3.0000e+000
            2244e-001     2.0000e+000
            5090e+000     2.0000e+000
            0670e+000     2.0000e+000
        ];
    case 106
        a =[
            7.0453   1.0000
            0.1127   2.0000
            2.7132   3.0000
            9.0179   4.0000
            -1.1207  5.0000
            -8.7996  6.0000
        ];
    case 107
        a =[
            -8.7907  4.0000
            -1.9984  4.0000
            6.6374   4.0000
            4.4470   4.0000
            9.0183   4.0000
            -7.3132  4.0000
        ];
    case 108
        a =[
            4.0413    0000
            4.6991    0000
            -2.4509   0000
            8.0398    0000
            -3.3134   0000
            1.9329    0000
            -1.4152  11.0000
        ];
    case 109
        a =[
            4.6991   0000
            -8.7907  0000
            -3.3134  0000
            -1.9984  0000
            6.6374   0000
        ];
    case 110
        a =[
            3.076240451915473   2.000000000000000
            4.982629262070383   2.000000000000000
            1.663714629097523   2.000000000000000
            -5.303461705041914  2.000000000000000
        ];
    case 111
        a =[
            3.076240451915473   2.000000000000000
            12.982629262070383  2.000000000000000
        ];
    case 112
        a =[
            3.076240451915473   3.000000000000000
            4.982629262070383   3.000000000000000
            1.663714629097523   3.000000000000000
            -5.303461705041914  3.000000000000000
            9.1949e+000         3.0000e+000
        ];
    case 113
        a =[
            3.076240451915473   5.000000000000000
            4.982629262070383   5.000000000000000
            1.663714629097523   5.000000000000000
            -5.303461705041914  5.000000000000000
        ];
end  % switch
Appendix 20 creatPoly
function [X] = creatPoly(x)
% This function forms the vector X of coefficients of the polynomial
% whose roots and multiplicities are stored in the matrix x. The
% element X(1) is the coefficient of the highest power, that is,
% the degree of the polynomial.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
b=x;
[r,c]=size(b);
X=1;
% Form the vector X from the roots in the first column of x and the
% corresponding multiplicity in the second column of x.
for i=1:r
    P=[1 -b(i,1)];
    for j=1:b(i,c)
        X=conv(X,P);
    end
end
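For example (the matrix of roots and multiplicities is illustrative):
X=creatPoly([2 3;-1 2]);
% X = [1 -4 1 10 -4 -8], the coefficients of
% (x-2)^3 (x+1)^2 = x^5 - 4x^4 + x^3 + 10x^2 - 4x - 8.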
Appendix 21 geomecoeff
function g = geomecoeff(f)
% This function normalises the elements of the vector f by the geometric
% mean of its coefficients, and stores this normalised vector in g.
productf=1;
for k=1:1:length(f)
    productf=abs(f(k))^(1/length(f))*productf;
end
g=f/productf;
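For example (illustrative data):
g=geomecoeff([1 10 100]);
% The geometric mean of the coefficients is (1*10*100)^(1/3) = 10,
% so g = [0.1 1 10], whose coefficients have geometric mean 1.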
Appendix 22 optimal_linprog
function [theta,alpha]=optimal_linprog(fx,gx)
% This function uses linear programming to calculate the optimal
% values of alpha and theta.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
f=[1 -1 0 0];
m1=length(fx);
n1=length(gx);
Ta=zeros(m1,4);
Tb=zeros(n1,4);
Da=zeros(m1,4);
Db=zeros(n1,4);
for k=1:1:m1
    Ta(k,:)=[1,0,k-m1,0];
    Da(k,:)=[0,-1,m1-k,0];
end
for k=1:1:n1
    Tb(k,:)=[1,0,k-n1,-1];
    Db(k,:)=[0,-1,n1-k,1];
end
A=(-1)*[Ta;Tb;Da;Db];
b=[-log10(abs(fx)),-log10(abs(gx)),log10(abs(fx)),log10(abs(gx))]';
x=linprog(f,A,b);
theta=10^x(3);
alpha=10^x(4);
Appendix 23 GMnorm_denoscaler
function [g,gm] = GMnorm_denoscaler(f)
% This function normalises the vector f by the geometric mean of its
% coefficients. The vector g on exit is the normalised form of f, and
% the scalar gm is the reciprocal of the geometric mean.
L=length(f);
gm=1;
for i=1:L
    gm=(abs(f(i)))^(1/L)*gm;
end
gm=1/gm;
g=gm*f;
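For example (illustrative data; compare with geomecoeff above):
[g,gm]=GMnorm_denoscaler([1 10 100]);
% gm = 1/10, the reciprocal of the geometric mean, and
% g = gm*[1 10 100] = [0.1 1 10], so f can be recovered from g/gm.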
Appendix 24 ABSRC3_info
function [minangles,min_residuals,q_col,q_res,error]=...
    ABSRC3_info(givenPoly,fw,agw,alpha0,theta0)
% This function returns the data required to calculate the degree of
% an AGCD of the given inexact polynomial and its derivative.
% givenPoly     : The given polynomial whose roots are to be computed,
%                 expressed in the variable y. The polynomial is of
%                 degree m.
% fw            : The same polynomial as givenPoly, but expressed in
%                 the variable w
% agw           : The derivative of the polynomial fw, multiplied by
%                 the scale factor alpha. agw is of degree n=m-1.
% alpha0        : The optimal value of alpha
% theta0        : The optimal value of theta
% minangles     : A vector of length min(m,n) such that minangles(k)
%                 stores the smallest angle between a column of the kth
%                 subresultant matrix and the space spanned by the
%                 remaining columns
% min_residuals : A vector of length min(m,n) such that
%                 min_residuals(k) stores the smallest residual between
%                 a column of the kth subresultant matrix and the space
%                 spanned by the remaining columns
% q_col         : A vector of length min(m,n) such that q_col(k) stores
%                 the column for which the angle between it and the
%                 space spanned by the other columns is a minimum
% q_res         : A vector of length min(m,n) such that q_res(k) stores
%                 the column for which the residual between it and the
%                 space spanned by the other columns is a minimum
% error         : A matrix of order 2 x n, where the first row stores
%                 the errors for Method 3 using the first principal
%                 angle, and the second row stores the errors for
%                 Method 3 using the residual.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
m=length(fw)-1;   % the degree of fw
n=length(agw)-1;  % the degree of agw
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Method 1: Perform the calculations using the angle criterion.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
minangles=zeros(1,min(m,n));
q_col=zeros(1,min(m,n));
for k=1:1:min(m,n)  % loop for all the subresultant matrices
    % Form the kth subresultant matrix Sk.
    Sk=KthSylvesterM(fw,agw,k);
    column_Sk=size(Sk,2);  % the number of columns of Sk
    angle_Vect=zeros(1,column_Sk);
    % Compute the angle between each column of Sk and the space spanned
    % by the other columns.
    for q=1:1:column_Sk  % loop for each column of Sk
        [Ak,ck]=KthSylvester(fw,agw,k,q);
        [r,c]=size(Ak);  % the number of rows and columns of Ak
        % Calculate the first principal angle between ck, the qth
        % column of Sk, and the other columns of Sk.
        % Step 1: Compute u1.
        u1=ck/norm(ck);
        % Step 2: The columns of N1 define an orthonormal basis for the
        % space spanned by the columns of Ak. N1 is a rectangular
        % matrix with orthogonal columns, and R is a square upper
        % triangular matrix.
        [N1,R]=qr(Ak,0);  % the 'thin' QR factorisation
        [P,Seg,Q]=svd(N1);
        % The columns of N2 define an orthogonal basis for the
        % complement of N1.
        N2=P(:,c+1:r);
        sigma_Vect=svd(u1'*N2);
        angle_Vect(q)=asin(sigma_Vect);
    end  % for loop
    % minangles(k) stores the smallest angle and q_col(k) stores
    % the column of Sk for which the minimum angle is achieved, for
    % the kth subresultant matrix.
    [minangles(k),q_col(k)]=min(angle_Vect);
end  % subresultant matrix loop
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Method 2: Perform the calculations using the residual criterion.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
min_residuals=zeros(1,min(m,n));
q_res=zeros(1,min(m,n));
for k=1:1:min(m,n)
    % Form the kth subresultant matrix Sk.
    Sk=KthSylvesterM(fw,agw,k);
    SK_column=size(Sk,2);  % the number of columns of Sk
    residuals=zeros(1,SK_column);
    % Compute the residual between each column of Sk and the space
    % spanned by the other columns.
    for col=1:1:SK_column
        [Ak,ck]=KthSylvester(fw,agw,k,col);
        xk=pinv(Ak)*ck;  % the least squares solution
        residuals(col)=norm(Ak*xk-ck);  % the non-normalised residual
    end  % loop for the columns, for the kth subresultant matrix
    % min_residuals(k) stores the smallest residual and q_res(k) stores
    % the column of Sk for which the minimum residual is achieved,
    % for the kth subresultant matrix.
    [min_residuals(k),q_res(k)]=min(residuals);
end  % subresultant matrix loop
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Method 3:
% This method is only applicable for a polynomial and its derivative.
% It requires the results from Methods 1 and 2 above.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Store the optimal columns calculated by Methods 1 and 2.
optColumnM=[q_col;q_res];
% This scalar is required later.
Lambda=(abs(givenPoly(m+1))/factorial(m))^(1/m)...
    /prod(abs(givenPoly).^(1/(m*(m+1))));
% Initialise the matrix that stores the errors using this method.
error = zeros(2,m-1);
for j=1:1:2  % loop for Methods 1 and 2
    % Define the vector that stores the optimal column.
    optColumn=optColumnM(j,:);
    for k=1:1:m-1
        % Form the kth Sylvester matrix, the column ck and matrix Ak,
        % and solve in the least squares sense (Ak)(xk)=ck.
        [Ak,ck]=KthSylvester(fw,agw,k,optColumn(k));
        xk=pinv(Ak)*ck;
        % Form the coprime polynomials vk and uk from the vector xk.
        x_vect=[xk(1:optColumn(k)-1);-1;xk(optColumn(k):end)];
        vk=x_vect(1:m-k);
        uk=-x_vect(m-k+1:end);
        % Solve, in the least squares sense, the approximate equations
        % f approx. equal (uk)(dk) and g approx. equal (vk)(dk).
        % Form the coefficient matrix Bk from uk and vk, and the right
        % hand side vector bk.
        Ck1=cauchy(uk,k);
        Ck2=cauchy(vk,k);
        Bk=[Ck1;Ck2];
        bk=[fw,agw]';
        % The least squares solution.
        dk=pinv(Bk)*bk;
        % Compute the error measure.
        m_k_vec=m-k:-1:1;
        k_vec=k:-1:1;
        Lk=cauchy(dk,m-k-1);
        Vk=theta0/(alpha0*Lambda)*Lk;
        Uk=cauchy(k_vec'.*dk(1:end-1),m-k);
        R=[diag(m_k_vec,0),zeros(m-k,1)];
        % The first row of the matrix error contains the error
        % using the optimal column from Method 1.
        % The second row of the matrix error contains the error
        % using the optimal column from Method 2.
        error_1 = norm(Vk*vk-(Lk*R+Uk)*uk);
        error_2 = norm(Vk*vk)+norm((Lk*R+Uk)*uk);
        error(j,k) = error_1/error_2;
    end  % k loop
end  % j loop
Appendix 25 KthSylvester
function [Ak,ck]=KthSylvester(f,g,K,q)
% This function forms the matrix s, the Kth Sylvester subresultant
% matrix of the polynomials f and g. It returns ck, which is the qth
% column of s, and Ak, which is the matrix formed from the other
% columns of s.
m=length(f)-1;
n=length(g)-1;
a=zeros(m+n-K+1,n-K+1);
b=zeros(m+n-K+1,m-K+1);
for k=1:1:n-K+1
    for h=k:1:m+k
        a(h,k)=f(h-k+1);
    end
end
for k=1:1:m-K+1
    for h=k:1:n+k
        b(h,k)=g(h-k+1);
    end
end
s=[a,b];
% Form the vector ck, which is equal to the qth column of s, and
% the matrix Ak from the other columns of s.
ck=s(:,q);
Ak=[s(:,1:q-1),s(:,q+1:end)];
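Note that this four-argument KthSylvester coexists with the three-argument version of Appendix 11; in practice the two must be kept as separate files. A hedged sketch of how ck and Ak relate to the full subresultant matrix (the inputs are illustrative):
f=[1 0 -1];
g=[1 -1];
[Ak,ck]=KthSylvester(f,g,1,2);  % remove column q = 2
% Reinserting ck as column 2 reassembles the full matrix s formed
% by the version in Appendix 11:
s=[Ak(:,1),ck,Ak(:,2:end)];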
Appendix 26 Creat_dAkda
function Ak_da = Creat_dAkda(n,k,q,alpha,Ak)
% This function calculates the derivative Ak_da of the matrix Ak
% with respect to alpha.
if q < (n-k+2)
    Ak_da=zeros(size(Ak));
else
    Ak_da=zeros(size(Ak));
    Ak_da(1:end,n-k+2:end)=Ak(1:end,n-k+2:end)/alpha;
end
Appendix 27 Creat_dckda
function dc_da = Creat_dckda(n,k,q,alpha,ck)
% This function calculates the derivative dc_da of the vector ck
% with respect to alpha.
if q < (n-k+2)
    dc_da=zeros(size(ck));
else
    dc_da=ck/alpha;
end
Appendix 28 Creat_dt
function [f] = Creat_dt(f,theta,m)
% This function calculates the derivative of the polynomial f
% with respect to theta.
f=f/theta;
for i=m+1:-1:1
    f(i)=(m-(i-1))*f(i);
end
Appendix 29 Creat_Y_k
function Y_d = Creat_Y_k(m,n,k,q_col,x,alpha,theta)
% This function creates the matrix Y_k from the vector x.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Create a vector Art_x. The zero is placed in the element q_col of
% x because this is the optimal column.
Art_x=[x(1:q_col-1);0;x(q_col:end)];
% The coefficients of f occupy the first n-k+1 columns of Sk, and the
% coefficients of g occupy the last m-k+1 columns of Sk.
x_f=Art_x(1:n-k+1);
x_g=alpha*Art_x(n-k+2:end);
Theta_to_m=theta.^(m:-1:0);
Theta_to_n=theta.^(n:-1:0);
% Each portion of Y is obtained by multiplying a Cauchy matrix by
% a diagonal matrix that contains powers of theta.
Y1=cauchy(x_f,m)*diag(Theta_to_m,0);
Y2=cauchy(x_g,n)*diag(Theta_to_n,0);
Y_d=[Y1 Y2];
Appendix 30 Creat_P
function P = Creat_P(m,n,k,q_col,theta)
% This function creates the matrix P.
m_vect=m:-1:0;
n_vect=n:-1:0;
Theta_to_m=theta.^m_vect;
Theta_to_n=theta.^n_vect;
if q_col > (n-k+1)
    G=diag(Theta_to_n);
    P=[zeros(q_col-n+k-2,m+1),zeros(q_col-n+k-2,n+1);...
        zeros(n+1,m+1),G;zeros(m+n-2*k-q_col+2,m+1),...
        zeros(m+n-2*k-q_col+2,n+1)];
else
    G=diag(Theta_to_m);
    P=[zeros(q_col-1,m+1),zeros(q_col-1,n+1);...
        G,zeros(m+1,n+1);...
        zeros(n-k+1-q_col,m+1),zeros(n-k+1-q_col,n+1)];
end
Appendix 31 LSE
function x = LSE(A,b,B,d)
% This function solves the LSE problem
%     minimise ||Ax-b||_{2} subject to Bx=d.
% Calculate the QR decomposition of the transpose of B.
[p,n]=size(B);
[Q,R]=qr(B');
R1=R(1:p,1:p);
% Solve R(1:p,1:p)'y=d for y.
y=R1'\d;
A=A*Q;
A1=A(:,1:p);
A2=A(:,p+1:n);
z=pinv(A2)*(b-A1*y);
x=Q*[y;z];
Appendix 32 LSE_Updating
function [C,r,TH,telda_f,telda_g] = ...
    LSE_Updating(z,x,alpha,theta,f_bar,g_bar,k,q)
% This function updates all the variables after each iteration of
% the LSE problem.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
m=length(f_bar)-1;
n=length(g_bar)-1;
m_vect=m:-1:0;
n_vect=n:-1:0;
% Define the structured perturbations of the polynomial f and
% its derivative g.
zf=z(1:m+1);
zg=alpha*z(m+2:end);
% Update the polynomials f and g. Note that alpha is included in gw,
% but it is not included in the structured perturbation vector zg.
fw=f_bar.*(theta.^m_vect);
gw=alpha.*g_bar.*(theta.^n_vect);
% Update the vector z of structured perturbations.
zft=zf'.*(theta.^m_vect);
zgt=zg'.*(theta.^n_vect);
% Calculate the corrected forms of the polynomials f and g. Note
% that, from above, alpha is included in the formula for telda_g.
telda_f=fw+zft;
telda_g=gw+zgt;
% Update the matrix Ak and vector Ck because alpha and theta have
% been changed.
[Ak,Ck]=KthSylvester(fw,gw,k,q);
% Update the partial derivatives and variables.
% Calculate the derivatives of Ak and Ck with respect to alpha.
Ak_da=Creat_dAkda(n,k,q,alpha,Ak);
dck_da=Creat_dckda(n,k,q,alpha,Ck);
% Calculate the derivatives of f and g with respect to theta.
[d_f_dt]=Creat_dt(fw,theta,m);
[d_g_dt]=Creat_dt(gw,theta,n);
% Calculate the derivatives of Ak and Ck with respect to theta.
[Ak_dt,Ck_dt]=KthSylvester(d_f_dt,d_g_dt,k,q);
% Calculate the matrix Ek and vector hk, which are the corrections
% to be added to Ak and Ck respectively.
[Ek,hk]=KthSylvester(zft,zgt,k,q);
% Calculate the derivatives of Ek and hk with respect to alpha.
Ek_da=Creat_dAkda(n,k,q,alpha,Ek);
dh_da=Creat_dckda(n,k,q,alpha,hk);
% Calculate the derivatives of zft and zgt with respect to theta, and
% use them to calculate the derivatives of Ek and hk with respect to
% theta.
[zf_dt]=Creat_dt(zft,theta,m);
[zg_dt]=Creat_dt(zgt,theta,n);
[Ek_dt,hk_dt]=KthSylvester(zf_dt,zg_dt,k,q);
% Calculate the normalised residual of the constraint equation in
% the LSE problem.
r=(Ck+hk)-((Ak+Ek)*x);
normr=norm(r);
TH=(normr)/norm(Ck+hk);
% Update the matrices Y and P.
Y=Creat_Y_k(m,n,k,q,x,alpha,theta);
P=Creat_P(m,n,k,q,theta);
% Update the matrices H_z, H_x, H_a and H_t, and use them to
% update the matrix C.
if q < (n-k+2)
    alphaq=1;
else
    alphaq=alpha;
end
H_z=Y-alphaq*P;
H_x=Ak+Ek;
H_a=(Ak_da+Ek_da)*x-(dck_da+dh_da);
H_t=(Ak_dt+Ek_dt)*x-(Ck_dt+hk_dt);
C=[H_z H_x H_a H_t];
Appendix 33 Draw_SV_I
function [xAxis,yAxis,yValue]=Draw_SV_I(f,g,rd)
% This function prepares the data for the x and y axes
% for plotting the normalised singular values of the
% Sylvester matrix S of the polynomials f and g, and the
% last non-zero singular value yValue of S. rd is the index
% (x-coordinate) of yValue.
% Form the Sylvester matrix and calculate its normalised
% singular values.
S=creatSelvester2(f,g);
sp=svd(S);
segNorm=log10(sp/sp(1));
xAxis=[1:1:length(sp)];
yAxis=segNorm;
% Define yValue, the last non-zero singular value of S.
yValue=segNorm(rd);
Appendix 35 getAGCD_PolyFact
function telda_GCD=getAGCD_PolyFact(telda_f,telda_g,k)
% This function returns telda_GCD, the GCD of the polynomials telda_f
% and telda_g. The degree of the polynomial telda_GCD is k. The
% Sylvester matrix of telda_f and telda_g, and k, are used to calculate
% estimates of the coprime polynomials of telda_f and telda_g. A least
% squares problem is then solved to compute telda_GCD.
n=length(telda_g)-1;  % the degree of telda_g
% Calculate the vector that stores the optimal column of the Sylvester
% matrix and its subresultant matrices to move to the right hand side,
% for each subresultant matrix, k=1,...,n.
q_col=ColumnRes(telda_f,telda_g);
% Form the kth Sylvester matrix, and the matrix Ak and vector ck.
[Ak,ck]=KthSylvester(telda_f,telda_g,k,q_col(k));
% Calculate the least squares solution of the approximate equation
% (Ak)(xk)=ck.
xk=Ak\ck;
% Form the coprime polynomials from xk.
x_vect=[xk(1:q_col(k)-1);-1;xk(q_col(k):end)];
vk=x_vect(1:n-k+1);
uk=-x_vect(n-k+2:end);
% Solve the equations f=(uk)(dk) and g=(vk)(dk), where dk is the GCD of
% f and g. Form the coefficient matrix Bk and right hand side vector bk.
Ck1=cauchy(uk,k);
Ck2=cauchy(vk,k);
Bk=[Ck1;Ck2];
bk=[telda_f,telda_g]';
% Calculate the least squares solution of (Bk)(dk)=bk.
dk=Bk\bk;
telda_GCD=dk';
Appendix 36 polyFactSNTLN
function [fh,GCD]=...
polyFactSNTLN (fx, gx, fw, gw, thetaO, alphaO, optRankL, method)
% This function uses the method of approximate polynomial factorisation % to compute an AGCD of the inexact polynomials fw and gw.
% fx : An inexact polynomial in the variable y
% gx : The derivative of fx
% fw,gw : The polynomials fx and gx respectively, expressed in the
% variable w
% thetaO : The optimal value of theta
% alphaO : The optimal value of alpha
% optRankL : The degree of the GCD of fw and gw
% method : This is equal to 1 or 2, depending upon the method used to
% calculate the best column of the subresultant matrices:
% 1 : The first principal angle is used
% 2 : The residual of an approximate equation is used
% fh : The corrected form of the input polynomial fw, such that
% this corrected polynomial and the corrected form of
% gw have a non-constant GCD
% GCD % The AGCD of the corrected polynomial and its derivative
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Define the degree of the GCD of fx and gx.
k=optRankL ;
m=length ( fx) -1 ; % the degree of fx
n=length (gx) -1 ; % the degree of gx
m_vect=m : -1 : 0 ;
n_vect=n : -1 : 0 ;
% Define some variables that are used later.
m_k_vect=m-k : -1 : 0 ;
n_k_vect=n-k : -1 : 0 ;
k_vect=k : -1 : 0 ;
Theta_to_m_k=thetaO . Am_k_vect ;
Theta_to_n_k=thetaO . An_k_vect ;
Theta_to_k=thetaO . Ak_vect ;
% Calculate the vector col, where col(k) stores the optimal column of the % kth subresultant matrix of the Sylvester matrix of fw and alphaO*gw. switch method
case 1 % use the first principal angle
col=ColumnAngle ( fw, alphaO*gw) ;
case 2 % use the residual of an approximate equation
col=ColumnRes ( fw, alphaO*gw) ; end
% Initialise some variables for the LSE problem.
E=eye ( 2 *m+2 *n-2 *k+6 ) ;
f=zeros (size (E, 1) , 1) ;
zk=zeros (m+n-2 *k+2 , 1 ) ;
pk=zeros (m+1 , 1 ) ;
qk=zeros (n+1 , 1 ) ;
tk=zeros (n+1 , 1 ) ;
theta=thetaO;
beta=0 ;
% Approximate polynomial factorisation requires an estimate of the % coprime polynoials, which are then refines. These estimates are
% calculated from the kth subresultant matrix.
[Ak, ck] =KthSylvester (fw, alphaO*gw, k, col (k) ) ;
% Solve the equation (Ak) (xk)=ck in order to calculate estimates of the % coprime polynomials uk and vk, and dk, the GCD of fw and gw.
xk=pinv (Ak) *ck;
x_vect= [xk(l:col(k)-l) ;-l;xk(col(k) : end) ] ;
vk=x_vect ( 1 : n-k+1 ) ;
uk=-x_vect (n-k+2 : end) ;
% Compute dk from the approximate equations fw=(uk) (dk) and gw=(vk) (dk) . Ckl=cauchy (uk, k) ;
Ck2=cauchy (vk, k) ;
Bk= [Ckl ; Ck2 ] ; % the coefficient matrix
bk= [ fw, alphaO*gw] ' ; % the right hand side vector
dk=pinv (Bk) *bk; % the least squares solution
rk=bk-Bk*dk ; % the residual of the approximate solution
% Form the coefficient vectors of the coprime polynomials and AGCD % in the y variable by eliminating theta.
f_c=uk . /Theta_to_m_k ' ;
g_c=vk . /Theta_to_n_k ' ;
d_c=dk . /Theta_to_k ' ;
% Form the matrix Yk .
Yl=cauchy (dk, m-k) *diag (Theta_to_m_k, 0 ) ;
Y2=cauchy (dk, n-k) *diag (Theta_to_n_k, 0 ) ;
Y3=zeros (m+1 , n+l-k ) ;
Y4=zeros (n+1, m+l-k ) ;
Yk=[Yl,Y3;Y4,Y2] ;
% Calculate some partial derivatives.
% The partial derivative of fw with respect to theta.
partial_f=m_vect . *fx . *thetaO . A (m_vect-l ) ;
% The partial derivative of gw with respect to theta.
partial_g=n_vect . *gx . *thetaO . A (n_vect-l ) ;
% The partial drrivative of Ckl with respect to theta.
partial_Ckl=cauchy (m_k_vect ' . *uk . /thetaO, k ) ;
% The partial derivative of Ck2 with respect to theta.
partial_Ck2=cauchy (n_k_vect ' . *vk . /thetaO, k ) ;
% The partial derivative of the GCD with respect to theta.
partial_dk=k_vect . *d_c ' . *thetaO . A (k_vect-l ) ;
% Define the vector g on the right hand side of the equality constraint % Cy=g in the LSE problem.
g=rk ; % Form the matrices S and T, and then form C.
S=diag ( thetaO . Am_vect , 0 ) ;
T=diag ( thetaO . An_vect , 0 ) ;
C_temp= [ (-1 ) *S, zeros (m+1 , n+1 ) , zeros (ra+1, 1) , ...
-partial_f ' +partial_Ckl *dk+Ckl *partial_dk ' ; ...
zeros (n+1 , m+1 ) , (-alphaO-beta) *T, -gw'-tk,...
(-alphaO-beta) *partial_g ' +partial_Ck2 *dk+Ck2 *partial_dk ' ] ; C= [Yk, C_temp] ;
% This vector is required for the solution of the LSE problem.
ek=bk ;
% Start the iterative solution of the LSE problem.
ite=0; % initialise the counter for the number of iterations
while norm (rk ) /norm ( ek ) >=le-16 % the stopping criterion
ite=ite+l ;
if ite>50
break ;
end
% Solve the LSE problem: min | |Ey-f | | subject to Cy=g.
y=LSE(E,f,C,g) ;
% Calculate the increments of z, p, q, beta and theta.
delta_zk=y ( 1 :m+n-2*k+2 ) ;
delta_pk=y (m+n-2*k+3 : 2*m+n-2*k+3 ) ;
delta_qk=y (2*m+n-2*k+4 : 2 *m+2 *n-2 *k+4 ) ;
delta_beta=y ( end-1 ) ;
delta_theta=y (end) ;
% Use the increments above to update the values of the variables. zk=zk+delta_zk ;
pk=pk+delta_pk ;
qk=qk+delta_qk ;
beta=beta+delta_beta;
theta=theta+delta_theta;
% Calculate the new values of fw, gw and dk, the GCD of fw and gw. % Note they are expressed in the w variable,
fw_UD=fx . *theta . Am_vect ;
gw_UD=gx . *theta . An_vect ;
dk=d_c . *theta . Ak_vect ' ;
% Update the matrices S and T.
S=diag (theta . Am_vect , 0 ) ;
T=diag (theta . An_vect , 0 ) ;
Theta_to_m_k=theta . Am_k_vect ;
Theta_to_n_k=theta . An_k_vect ;
% Update the matrices Ckl and Ck2, and the matrix Bk .
Ckl=cauchy (f_c . *Theta_to_m_k ' , k) ;
Ck2=cauchy (g_c . *Theta_to_n_k ' , k) ;
Bk= [Ckl;Ck2] ;
% Update the matrices Ekl and Ek2, and the matrix Ek . Ekl=cauchy ( zk ( 1 :m-k+l ) . *Theta_to_m_k ' , k) ;
Ek2=cauchy ( zk (m-k+2 :m+n-2*k+2 ) . *Theta_to_n_k ' , k) ;
Ek= [Ekl;Ek2] ;
% Update the matrices Yl and Y2, and the matrix Yk .
Yl=cauchy (dk, m-k) *diag (Theta_to_m_k, 0 ) ;
Y2=cauchy (dk, n-k) *diag (Theta_to_n_k, 0 ) ;
Yk=[Yl,Y3;Y4,Y2] ;
sk=pk.*theta.^m_vect';
tk=qk.*theta.^n_vect';
% Update the partial derivatives.
% The partial derivative of sk with respect to theta.
partial_sk=m_vect'.*pk.*theta.^(m_vect-1)';
% The partial derivative of tk with respect to theta.
partial_tk=n_vect'.*qk.*theta.^(n_vect-1)';
% The partial derivative of fw with respect to theta.
partial_f=m_vect.*fx.*theta.^(m_vect-1);
% The partial derivative of gw with respect to theta.
partial_g=n_vect.*gx.*theta.^(n_vect-1);
% The partial derivative of Ckl with respect to theta.
partial_Ckl=cauchy (m_k_vect ' . *f_c . *Theta_to_m_k ' . /theta, k) ;
% The partial derivative of Ck2 with respect to theta.
partial_Ck2=cauchy (n_k_vect ' . *g_c . *Theta_to_n_k ' . /theta, k) ;
% The partial derivative of Ekl with respect to theta.
partial_Ekl=cauchy(m_k_vect'.*zk(1:m-k+1).*Theta_to_m_k'/theta,k);
% The partial derivative of Ek2 with respect to theta.
partial_Ek2=cauchy (n_k_vect ' . *zk (m-k+2 :m+n-2*k+2 ) ...
. * (Theta_to_n_k ' /theta) , k) ;
% The partial derivative of dk with respect to theta.
partial_dk=k_vect.*d_c'.*theta.^(k_vect-1);
% Update the matrix C.
C_temp= [ (-1 ) *S, zeros (m+1 , n+1 ) , zeros (m+1 , 1 ),...
-partial_f ' -partial_sk+partial_Ckl*dk+partial_Ekl*dk+ ...
(Ckl+Ekl) *partial_dk' ;
zeros (n+1 , m+1 ) , (-alphaO-beta) *T, -gw_UD ' -tk, ...
(-alphaO-beta) * (partial_g ' +partial_tk) + ...
partial_Ck2*dk+partial_Ek2*dk+ (Ck2+Ek2) *partial_dk' ] ;
C= [Yk, C_temp] ;
% Update the vector in the function ||Ey-f|| to be minimised.
f=- [ zk ; pk ; qk ; beta; theta-thetaO] ;
% Calculate the residual of the approximate equations
% fw=(uk) (dk) and gw=(vk)(dk).
rk= [fw_UD' +sk; (alphaO+beta) * (gw_UD' +tk) ] - (Bk+Ek) *dk;
% These two variables are required for the next iterative loop.
g=rk;
ek= [fw_UD' +sk; (alphaO+beta) * (gw_UD' +tk) ] ;
end % while loop for iterative solution of LSE problem
% Define the GCD of the corrected forms of fw and gw.
GCD=dk ' ;
% Define fh, the corrected form of the input polynomial fw.
fh=fw_UD+sk';

Appendix 37 squareFree2
function [ca,ca2]=squareFree2(optRankL,telda_f,telda_GCD, ...
    theta,Threshold,bestCol,Type,a)
% optRankL  : The degree of the GCD of telda_f and its derivative.
%             This is the computed degree or the exact degree,
%             depending on the value of the input variable Type
% telda_f   : The corrected polynomial in the w variable whose roots
%             are to be computed
% telda_GCD : The GCD of telda_f and its derivative in the w variable
% theta     : The optimal value of theta from the first AGCD
%             computation
% Threshold : The stopping threshold in the least squares problem with
%             an equality constraint (the LSE problem)
% bestCol   : This parameter is equal to 1 or 2, depending upon the
%             method to be used for the calculation of the optimal
%             column for the computation of a structured low rank
%             approximation of the Sylvester resultant matrix:
%             1 : The optimal column is chosen using the first
%                 principal angle (angle between subspaces)
%             2 : The optimal column is chosen using the residual of
%                 an approximate linear algebraic equation
% Type      : This parameter is equal to 1 or 2, depending upon the
%             method used to calculate the multiplicities of the roots:
%             1 : The multiplicities are computed
%             2 : The multiplicities are defined and therefore exact
% a         : A two column matrix that defines the exact form of the
%             polynomial whose roots are computed. The first column of
%             a stores the roots in the y variable, and the second
%             column of a stores the multiplicities of the roots.
% ca        : The initial estimates of the roots of the polynomial
%             telda_f, using the method of least squares to perform
%             the deconvolution
% ca2       : The initial estimates of the roots of the polynomial
%             telda_f, using a structure preserving matrix method to
%             perform the deconvolution
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Initialise some vectors.
h_error2=zeros ( ) ;
w_error=zeros ( ) ;
mult=zeros ( ) ;
% Initialise the counter for the AGCD computations. The first AGCD
% computation (of q_0 and its derivative) has already been performed.
derCount=1;
% Normalise telda_f by the geometric mean of its coefficients.
queO=geomecoeff (telda_f ' ) ;
m=length (queO ) -1 ; % the degree of telda_f
% These vectors are needed later.
m_vect=m : -1 : 0 ;
k_vect=optRankL : -1 : 0 ;
% Store the polynomial queO in the independent variable y by
% dividing queO, which is in the w variable, by theta.
M(1).f=queO./(theta.^m_vect');
% Perform the first deconvolution hl=q_0/q_l in the independent
% variable w. The polynomial q_1=telda_GCD is the GCD of telda_f and
% its derivative. Form the Cauchy matrix Quel from telda_GCD.
Quel=cauchy (telda_GCD, m-optRankL) ;
hl=Quel \queO ; % the least squares estimate of h_l in the variable w
% Calculate the error in this solution.
h_error2 ( 1 ) =norm ( que0-Quel *hl ) /norm (queO ) ;
% Store this polynomial h_l in WM2 ( 1 ) . f .
WM2 (1) . f=hl;
% Thus far, the first AGCD computation has been performed, and
% hence the polynomials q_0, q_1 and hl have been defined. The
% other AGCD computations can now be performed.
polyDeg=m; % the degree of the given polynomial q_0
% The entry RLA(i) of the array RLA (Rank Loss Array) stores the degree
% of the polynomial that results from the ith AGCD computation,
% RLA(i)=deg q_i=deg (GCD (q_i , q_{ i-1 } ) ) , i=l , 2 , ...
RLA ( derCount ) =optRankL ;
% Prepare the input data for the next AGCD computation.
GCDIn=telda_GCD;
% The variable degree_h(i) stores the degree of the polynomial h_i,
% where h_i=q_(i-1)/q_i, i=1,2,...
degree_h(derCount)=length(hl)-1; % the degree of hl
hj_l=hl;
% Initialise thetaN to one. This variable is required for monitoring
% the value of theta for each AGCD computation.
thetaN=l;
% Set THETAS to the value of theta from the first AGCD computation
% and place it as the first element in the array THETA_vect.
THETAS=theta;
THETA_vect ( derCount ) =THETAS ;
% Store GCDIn=telda_GCD as a polynomial in y. This is the polynomial
% q_1 that is the GCD of q_0 and its derivative.
M(2).f=GCDIn./(theta.^k_vect);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The next few lines operate on the exact form of the polynomial,
% which is stored in the matrix a.
gcdln_exact_matrix=a;
% Calculate the difference between the degree of q_0, and the degree
% of the GCD of q_0 and its derivative. This is equal to the degree
% of h_1.
numRoots=polyDeg-RLA (derCount ) ;
NRj_l=numRoots ;
% If the remainder of the division polyDeg/NRj_l is equal to zero,
% the roots of q_0 may have the same multiplicities. If the
% remainder is not equal to zero, then the roots cannot have the
% same multiplicities.
divRem=mod(polyDeg,NRj_l);
if divRem==0
SAME=1; % roots may have the same multiplicities
else
SAME=0; % roots cannot have the same multiplicities
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Initialise the variable for the sum of the multiplicities of the
% computed roots. It is used to check that the sum of these
% multiplicities is equal to the degree of the given polynomial.
Accomulator=0 ;
% Continue the AGCD computations if the degree of the AGCD is
% greater than zero,
while RLA (derCount ) >0
% Increment the counter for the AGCD computations.
derCount=derCount+l ;
% Calculate the multiplicities of the roots of the exact polynomial
% and its derivative. Note that this computation is only valid if
% the degree of the AGCD is specified exactly.
gcdln_exact_matrix=getEGCDM(gcdln_exact_matrix);
% Update the AGCD from the previous calculations. The input of this
% AGCD computation is the output of the previous AGCD computation.
GCD_j_l=GCDIn;
AGCDj_l=GCDIn ;
% Define NR, the number of the distinct roots of the AGCD. This
% is also equal to the degree of h(derCount-1) because the
% roots of the polynomials h_i are distinct.
NR=polyDeg-RLA(derCount-l) ;
% The variable spAccomulator assumes all the roots have the same
% multiplicities. Thus spAccomulator is equal to the variable
% Accomulator if all the roots have the same multiplicities.
spAccomulator=NR*derCount ;
% Predict the degree of the next AGCD computation using the
% exact data. CheckZE denotes 'CHECK Zeros of Exact data'.
checkZE=RLA (derCount-1 ) -size (gcdln_exact_matrix, 1 ) ;
% Check if the number of distinct roots of the current AGCD is
% equal to the number of distinct roots of the previous AGCD.
% This is required for the threshold that is used to determine if % the AGCD computations are to be stopped or continued,
if NR~=NRj_l
    SAME=0;
end
% Check if this is the last AGCD computation and update the
% variable checkZ ('check Zeros').
[STOP2]=StopCheckHazard(GCD_j_l,RLA(derCount-1),m,SAME,spAccomulator);
if Type==2 % the degree of the AGCD is defined exactly
checkZ=checkZE ;
else
checkZ=STOP2 ; % the degree of the AGCD is computed
end
% Update the degree of the polynomial h_i .
NRj_l=NR;
% Continue the AGCD computations if checkZ is greater than zero. If
% it is, define the degree of the polynomial q_i, where the degree
% of this polynomial and its derivative is to be computed.
if checkZ>0
polyDeg=length (GCDIn) -1 ;
[xAxis,yAxis,yValue,AGCDj_l,GCDIn,RLA(derCount),thetaN]= ...
    nextGCD2(GCDIn,RLA(derCount-1),size(gcdln_exact_matrix,1), ...
    derCount,m,Accomulator,Threshold,bestCol,Type);
% If deg(h_i)=l, then the degrees of all the remaining
% polynomials h_i must be less than or equal to one. This
% property allows the entry in the array RLA to be overwritten. if degree_h (derCount-1 ) ==1
RLA (derCount) =length (GCDIn) -1 ;
end
% Update the variables for theta.
THETAS=THETAS*thetaN;
THETA_vect ( derCount ) =THETAS ;
% Evaluate the AGCD in the y variable.
drl=RLA(derCount):-1:0;
M(derCount+1).f=GCDIn./(THETAS.^drl);
% Update the GCD for the next AGCD computation.
GCD_j=GCDIn;
% Calculate the polynomial hj using least squares. Also,
% calculate the error.
hRepX=RLA(derCount-l) -RLA (derCount ) ;
Que_j=cauchy (GCD_j , hRepX) ;
hj=Que_j\AGCDj_l ' ;
degree_h (derCount ) =length (hj ) -1 ; % the degree of hj
h_error2 (derCount ) =norm (AGCDj_l ' -Que_j *hj ) /norm (AGCDj_l ' ) ;
% Calculate the polynomial w_j_l= h_j_l/h_j.
% repX is the degree of wj .
repX=degree_h (derCount-1 ) -degree_h (derCount ) ;
% Form the Cauchy matrix with coefficients from h_j .
H_c=cauchy (hj , repX) ;
% Note that h_j_l & h_j come from different computations,
% and therefore different w scales. Update h_j_l to be in the
% same scale as the current h_j .
drl=degree_h(derCount-1):-1:0;
thetaNvect=thetaN.^drl;
hj_l=hj_l.*thetaNvect';
w_j=H_c\hj_l; % the polynomial w_j_l
% Calculate the error in w_j_l.
w_error (derCount-1 ) =norm (hj_l-H_c*w_j ) /norm (hj_l ) ;
% If w_j is of length 1, then it is the constant polynomial,
% and there are no roots with multiplicity derCount-1.
if length (w_j ) ==1
mult (derCount-1 ) =0 ;
else
% There is at least one root with multiplicity derCount-1.
% Compute the roots and transform them to the y variable.
rootw(derCount-1).value=roots(w_j)*THETAS;
% Update the multiplicity vector and Accomulator, which stores
% the degrees of the roots that have been computed.
mult (derCount-1 ) =derCount-l ;
Accomulator=Accomulator+ ...
(derCount-1 ) * length (rootw (derCount-1 ) .value) ;
end
hj_l=hj; % Prepare the data for the next AGCD computation
else % come here if checkZ=0
% The AGCD computations have finished and thus the
% last two polynomials w_i and w_i+l must be computed.
% Set RLA (derCount ) to zero because the AGCD computations
% have finished. The next AGCD is a constant.
RLA (derCount )=0;
M(derCount+l) . f=l;
% The last polynomial h_j is equal to AGCDj_l, and thus
% the error is zero.
hj=AGCDj_l ' ;
degree_h (derCount ) =length (hj ) -1 ;
h_error2 (derCount ) =0 ;
% Calculate w_j_l=h_j_l /h_j . Note that h_j_l and h_j are
% in different w scales.
repX=degree_h (derCount-1 ) -degree_h (derCount ) ;
H_c=cauchy (hj , repX) ;
w_j=H_c\hj_l ;
% If the length of w_j is one, then w_j is the constant
% polynomial and thus there are no roots with this
% multiplicity,
if length (w_j ) ==1
mult (derCount-1 ) =0 ;
else
% Compute the roots of w_j in the y scale, update the
% multiplicity vector and Accomulator.
t=roots (w_j ) *THETAS;
rootw (derCount-1 ) .value=t;
mult (derCount-1 ) =derCount-l ;
Accomulator=Accomulator+ ...
( derCount-1 ) * length (rootw ( derCount-1 ) .value) ;
end
% Define the last polynomial w_i, and repeat the
% calculations above.
w_H=hj ;
if length (w_H) ==1
mult (derCount ) =0 ;
else
t=roots (w_H) *THETAS;
rootw (derCount ) .value=t;
mult (derCount ) =derCount ;
Accomulator=Accomulator+ (derCount ) *length (t ) ;
end % if length (w_H) =1
% Break out of the loop because the AGCD computations
% have finished.
break
end % if checkZ > 0
end % while RLA (derCount ) >0
% Combine all the roots into one vector ca.
i=l;
P=l;
for t=1:1:derCount
% Only perform these computations if there is a root
% of multiplicity derCount
if length (rootw (t ). value ) >0
% Store the number of roots with multiplicity t, and
% sort them in increasing order.
MultVect= [ t , length (rootw ( t ) .value ) ] ;
multroots=sortrows ( (rootw ( t ) .value ) ) ;
roots_vect(i:1:length(rootw(t).value)+i-1)=fliplr(multroots);
i=i+length(rootw(t).value);
MultMat(P, :)=MultVect;
P=P+1;
end
end
% Flip the order of the roots so that they are in descending
% multiplicity order. ca is the vector of computed roots when
% the polynomial division is performed by least squares and
% refinement by non-linear least squares is not used.
ca=fliplr (roots_vect ) ;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Perform both sets of polynomial division using structured
% matrix methods.
% Compute the polynomials h_i using structured matrix methods
% to perform the deconvolution. The output is the cell M.h
% that stores the polynomials h_i.
M=Deconvolution2(M);
j=size (M, 2 ) -1 ; % j is the number of polynomials h_i
for i=1:1:j
WM(i) . f=M(i) .h;
end
% Use structured deconvolution to compute the polynomials w. The
% cell WM on input contains the polynomials h, and the cell WM
% on output contains the polynomials w_i.
WM=Deconvolution2 (WM) ;
j=size(WM,2)-1; % j is the number of polynomials w_i
for i=1:1:j
    RootsImp(i).v=roots(WM(i).h);
end
% The last polynomial w_i is equal to the last polynomial h_i.
RootsImp(j+1).v=roots(WM(j+1).f);
% Put all the roots in the vector ca2.
i=1;
for t=1:1:derCount
if length(RootsImp(t).v)>0
    multrootsImp=sortrows((RootsImp(t).v));
    rootsImp_vect(i:1:length(RootsImp(t).v)+i-1)= ...
        multrootsImp(end:-1:1);
    i=i+length(RootsImp(t).v);
end
end
ca2=fliplr (rootsImp_vect ) ;
Appendix 38 refineRoots
function roots=refineRoots ( a, roots , mult )
% This function uses the method of non-linear least squares to
% refine the roots of the polynomial a.
% Input variables:
% a : A vector of coefficients of a polynomial whose roots are
% to be computed. This is the given inexact polynomial
% roots : A vector of the initial estimates of the roots of the
%         polynomial in a. The vector roots stores the distinct
%         roots, and thus all roots, be they simple or multiple,
%         have one entry in the vector roots.
% mult : A vector that stores the multiplicities of the roots
% in the vector roots.
% Output variable:
% roots : Improved estimates of the roots of the polynomial
% whose coefficients are stored in the vector a
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
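% A minimal hypothetical usage sketch (not in the original listing),
% assuming PM and creatJacobian from the later appendices are on the
% path: refine perturbed estimates of the roots of
% (y-2)^2 (y-3) = y^3 - 7y^2 + 16y - 12.
%   a=[1 -7 16 -12];
%   roots0=[2.001;2.999]; mult=[2;1];
%   roots=refineRoots(a,roots0,mult);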
% Normalise a so that the polynomial is monic.
a=a ( 2 : end) /a ( 1 ) ;
% Set the initial value of the tolerance and the counter for the
% number of iterations.
tol=1;
ite=0;
% Continue the refinement of the roots while the tolerance is greater
% than 1e-14 and fewer than 50 iterations have been performed.
while tol>10^(-14) && ite<50
ite=ite+l; % increment the iteration counter
% Call the procedure PM in order to calculate the polynomial defined
% by the vectors roots and mult, that is, the polynomial formed
% from the computed roots and multiplicities. Then subtract the
% polynomial in the vector a, thereby yielding an error polynomial.
r=PM(roots,mult)-a;
rj=r ;
% Form the Jacobian matrix from the given roots and multiplicities.
J=creatJacobian([roots,mult]);
% Calculate an improved estimate of the roots, and the error
% with respect to the polynomial defined by a.
roots=roots-pinv (J) *r ' ;
r=PM (roots , mult ) -a;
% Calculate the tolerance.
tol=norm (r-rj ) /norm (rj ) ;
end

Appendix 39 KthSylvesterM
function s = KthSylvesterM(f,g,K)
% This function forms the matrix s, which is equal to the Kth Sylvester
% subresultant matrix of the polynomials f and g.
m=length ( f ) -1 ;
n=length (g) -1 ;
a=zeros (m+n-K+1 , n-K+1 ) ;
b=zeros (m+n-K+1 , m-K+1 ) ;
for k=1:1:n-K+1
    for h=k:1:m+k
        a(h,k)=f(h-k+1);
    end
end
for k=1:1:m-K+1
    for h=k:1:n+k
        b(h,k)=g(h-k+1);
    end
end
s=[a,b]; % assemble the subresultant matrix from the two blocks
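% Illustrative check (not in the original listing; it assumes the
% assignment s=[a,b] above): the first subresultant of f=y^2-1 and
% g=y-1 is rank deficient, and its rank loss equals deg GCD(f,g)=1.
%   f=[1 0 -1]; g=[1 -1];
%   S1=KthSylvesterM(f,g,1); % 3 x 3
%   rank(S1)                 % returns 2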
Appendix 40 cauchy
function Q = cauchy (f,n)
% This function forms the Cauchy matrix Q from the entries of the vector
% f. If f is of length m+1, then Q is of order (m+n+1) x (n+1).
% If the entries of f are the coefficients of a polynomial of degree m,
% and the vector u stores the coefficients of a polynomial of degree n,
% the product Q(f)*u is a vector of length (m+n+1) that stores the
% coefficients of the polynomial product f*u.
m=length ( f ) -1 ;
Q=zeros (m+n+1 , n+1 ) ;
for k=1:1:n+1
    for h=k:1:m+k
        Q(h,k)=f(h-k+1);
    end
end
end
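% Illustrative check (not in the original listing): multiplying the
% Cauchy matrix of f by the coefficient vector of u reproduces the
% convolution conv(f,u).
%   f=[1 2 1];        % (y+1)^2, degree m=2
%   u=[1 -3];         % y-3, degree n=1
%   p=cauchy(f,1)*u'; % equals conv(f,u)', i.e. [1;-1;-5;-3]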
Appendix 41 creatSelvester2
function S=creatSelvester2 ( f , g)
% This function takes each polynomial as a vector of coefficients.
F=f ;
G=g;
[sr, sc] =size (F) ;
m=sc-l ;
[sr, sc] =size (G) ;
n=sc-l ;
k=1;
S=zeros(m+n-k+1,m+n-2*k+2);
for i=1:n
    S(:,i)=[zeros(i-1,1);F';zeros(m+n-k+1-(m+1)-(i-1),1)];
end
for i=n+1:n+m
    S(:,i)=[zeros(i-n-1,1);G';zeros(m+n-k+1-(n+1)-(i-n-1),1)];
end
Appendix 42 ColumnRes
function q_res=ColumnRes ( fw, agw)
% This function forms the Sylvester matrix S(fw,agw) of the polynomials
% fw and agw. It returns the vector q_res, where q_res(k) is the index
% of the optimal column to move to the right hand side of the kth
% subresultant matrix of S(fw,agw).
% m and n are the degrees of fw and agw respectively.
m=length ( fw) -1 ;
n=length ( agw) -1 ;
min_residuals=zeros ( 1 , min (m, n ) ) ;
q_res=zeros ( 1 , min (m, n ) ) ;
for k=1:1:min(m,n) % loop for the subresultant matrices
% Define the number of columns of the kth subresultant matrix.
SK_column=m+n- (2*k) +2;
residuals=zeros (1, SK_column ) ;
% Loop over the columns of each subresultant matrix,
for col=1:1:SK_column
% Form the matrix Ak and the column ck, where col is the
% index of the column ck .
[Ak, ck] =KthSylvester (fw, agw, k, col) ;
% Solve the equation (Ak)(xk)=ck in the least squares sense,
% and calculate the residual of the solution.
xk=pinv (Ak) *ck;
residuals (col) =norm (Ak*xk-ck ) ;
end % col loop
[min_residuals (k) , q_res (k) ] =min (residuals) ;
end % k loop
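% Illustrative sketch (not in the original listing) of the first
% principal angle computation used in ColumnAngle below, using only
% core MATLAB: the angle between a vector ck and the column space
% of a matrix Ak.
%   Ak=[1 0;0 1;0 0]; ck=[0;0;1]; % ck is orthogonal to range(Ak)
%   u1=ck/norm(ck);
%   [N1,R]=qr(Ak,0);              % orthonormal basis for range(Ak)
%   [P,Sg,Q]=svd(N1);
%   N2=P(:,3);                    % basis for the orthogonal complement
%   asin(svd(u1'*N2))             % pi/2, i.e. 90 degrees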
Appendix 43 ColumnAngle
function q_col=ColumnAngle ( fw, agw)
% This function forms the Sylvester matrix S(fw,agw) of the polynomials
% fw and agw. It returns the vector q_col, where q_col(k) is the index of
% the optimal column to move to the right hand side of the kth
% subresultant matrix of S(fw,agw).
m=length ( fw) -1 ; % the degree of fw
n=length ( agw) -1 ; % the degree of gw
minangles=zeros ( 1 , min (m, n ) ) ;
q_col=zeros ( 1 , min (m, n ) ) ;
for k=1:1:min(m,n) % loop for the subresultant matrices
    % Calculate the number of columns of the kth subresultant matrix.
    column_Sk=m+n-(2*k)+2;
angle_Vect=zeros ( 1 , column_Sk ) ;
% Loop over the columns of each subresultant matrix,
for q=1:1:column_Sk
    % Form the matrix Ak and the column ck, where q is the
    % index of the column ck.
[Ak, ck] =KthSylvester (fw, agw, k, q) ;
[r, c] =size (Ak) ;
% Calculate the angle between the column ck and the columns of Ak.
% Form the unit vector from ck.
ul=ck/norm ( ck ) ;
% The columns of Nl form an orthonormal basis for the column
% space of Ak.
[Nl,R]=qr (Ak, 0) ;
% The angle is calculated from the orthogonal complements
% of the spaces spanned by ul and the columns of Ak.
[P, Seg,Q]=svd(Nl) ;
N2=P ( : , c+1 :r) ;
sigma_Vect=svd (ul ' *N2 ) ;
angle_Vect (q) =asin ( sigma_Vect ) ;
end % q loop
[minangles (k) , q_col (k) ] =min (angle_Vect) ;
end % k loop

Appendix 44 getEGCDM
function gcdln_exact_matrix = getEGCDM (gcdln_exact_matrix)
% This function reads in a two column matrix, where the first column
% stores the roots and the second column stores their multiplicities.
% On exit, gcdln_exact_matrix is also a two column matrix with the
% same structure as gcdln_exact_matrix on entry, with the following
% properties:
% (1) The first column is unaltered.
% (2) Every entry of the second column of the matrix gcdln_exact_matrix % on entry is reduced by one. If this causes the multplicity of the
% root to become zero, then that row is deleted from
% gcdln_exact_matrix .
% Define the matrix a_less to be of the same dimensions as
% gcdln_exact_matrix . Place zeros in the first column and
% ones in the second column.
a_less= [ zeros ( size (gcdln_exact_matrix, 1 ) , 1 ) ...
ones (size (gcdln_exact_matrix, 1 ) , 1 ) ] ;
% Subtract the matrix a_less from gcdln_exact_matrix. All the
% multiplicities of the roots are reduced by one, which occurs when
% the GCD of a polynomial and its derivative is considered.
% The roots, that is, the entries of the first column, are unchanged.
gcdln_exact_matrix=gcdln_exact_matrix-a_less;
% Check if any roots are simple.
[multZero, minlndex] =min (gcdln_exact_matrix ( : , 2 ) ) ;
% If there are simple roots, remove them from gcdln_exact_matrix .
% Note that the definition of the function min implies that all
% simple roots will be deleted from gcdln_exact_matrix .
if multZero==0
discardIndex=minIndex;
gcdln_exact_matrix=gcdln_exact_matrix(1:discardIndex-1,:);
end
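% Illustrative check (not in the original listing), assuming the rows
% are ordered by decreasing multiplicity: for a=[2 3;5 1], that is,
% (y-2)^3 (y-5), getEGCDM(a) returns [2 2], the root data of the GCD
% (y-2)^2 of the polynomial and its derivative; the simple root 5
% disappears.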
Appendix 45 StopCheckHazard
function STOP=StopCheckHazard ( f , preRank, M, same, accomulator )
% This procedure determines if the current AGCD computation is
% the last AGCD computation.
% f : The AGCD polynomial from the previous computation
% preRank : The rank loss from the previous AGCD computation
% M : The degree of the given inexact polynomial whose
% roots are to be computed
% same : This can take the value 0 or 1 :
% 0 : The roots of the given inexact polynomial cannot
% have the same multiplicities
% 1 : The roots of the given inexact polynomial may
% have the same multiplicities
% accomulator : An integer for checking that the sum of the
%               multiplicities of the roots is equal to the
% degree of the given polynomial.
% STOP : This can take the value 0 or 1 :
% 0 : This is the last AGCD computation
% 1 : This is not the last AGCD computation
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% If preRank=l, the degree of the previous AGCD is one, and
% thus this AGCD computation is not performed,
if preRank==l
STOP=0;
else % the current AGCD computation is performed
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Define the degrees of f and its derivative.
m=length ( f ) -1 ;
n=m-l ;
% Define g as the derivative of f, and normalise f and g by the
% geometric means of their coefficients.
g=polyder ( f ) ;
f=geomecoeff (f ) ;
g=geomecoeff (g) ;
% Calculate the optimal values of alpha and theta.
[thetaO, alphaO] =optimal_linprog ( f , g) ;
% Transform the polynomials f and g to the w variable.
fw=f.*thetaO.^(m:-1:0); % the vector of coefficients of f(w)
gw=g.*thetaO.^(n:-1:0); % the vector of coefficients of g(w)
agw=alphaO*gw; % define agw = (alpha) x (gw)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Calculate the degree of the AGCD of fw and agw using two methods.
% Method 1: The first principal angle
% Method 2: The residual of an approximate equation
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Method 1: The first principal angle
minAngles=zeros ( 1 , min (m, n ) ) ;
for k=1:1:min(m,n) % loop for all the subresultant matrices
% Form the kth subresultant matrix Sk.
Sk=KthSylvesterM(fw, agw, k) ;
column_Sk=size ( Sk, 2 ) ; % the number of columns of Sk
angle_Vect=zeros ( 1 , column_Sk ) ;
% Compute the angle between each column of Sk and the space
% spanned by the other columns.
for q=1:1:column_Sk % loop for each column of Sk
[Ak, ck] =KthSylvester (fw, agw, k, q) ;
[r, c] =size (Ak) ; % the number of rows and columns of Ak
% Calculate the first principal angle between ck, the qth
% column of Sk, and the other columns of Sk.
% Step 1: Compute ul.
ul=ck/norm(ck);
% Step 2: The columns of Nl define an orthonormal basis
% for the space spanned by the columns of Ak.
% Nl is a rectangular matrix with orthonormal columns, and
% R is a square upper triangular matrix.
[Nl, R] =qr (Ak, 0 ) ; % the 'thin' QR factorisation
[P, Seg,Q]=svd(Nl) ;
% The columns of N2 define an orthonormal basis for the
% complement of Nl.
N2=P ( : , c+1 :r) ;
sigma_Vect=svd (ul ' *N2 ) ;
angle_Vect (q) =asin ( sigma_Vect ) ;
end % q loop
% minAngles(k) stores the smallest angle for the kth
% subresultant matrix.
minAngles(k) = min ( angle_Vect ) ;
end % k loop for subresultant matrices
% Method 2: The residual of an approximate linear algebraic equation
minResiduals=zeros(1,min(m,n));
for k=1:1:min(m,n)
    % Form the kth subresultant matrix Sk.
Sk=KthSylvesterM(fw, agw, k) ;
SK_column=size ( Sk, 2 ) ; % the number of columns of Sk
residuals=zeros (1, SK_column ) ;
% Compute the residual between each column of Sk and the space
% spanned by the other columns.
for col=1:1:SK_column
[Ak, ck] =KthSylvester (fw, agw, k, col) ;
xk=pinv (Ak ) *ck ; % the least squares solution
residuals(col)=norm(Ak*xk-ck); % the non-normalised residual
end % loop for the columns, for the kth subresultant matrix
% minResiduals(k) stores the smallest residual for the kth
% subresultant matrix.
minResiduals (k) =min (residuals) ;
end % subresultant matrix loop
% Calculate the minimum angle from Method 1.
Mangles=log10(minAngles);
miniAngles=min(Mangles);
% Use the results of Method 1 to determine if this is the last
% AGCD computation. Note that the results from Method 2 are
% not used because the thresholds are harder to determine.
% The threshold of -3 is determined empirically,
if miniAngles >= -3
STOP=0; % this is the last AGCD computation
else % more AGCD computations may need to be performed
if same==1 % the roots may have the same multiplicities
    % Check if the sum of the multiplicities of the roots
    % computed thus far is equal to the degree of the given
    % polynomial.
    if accomulator==M
        STOP=0; % this is the last AGCD computation
    else
        STOP=1; % more AGCD computations must be performed
    end
else % if same=1
    STOP=1; % more AGCD computations must be performed
end % if same=1
end % if miniAngles
end % preRank=1

Appendix 46 nextGCD2
function [xAxis,yAxis,yValue,telda_f,GCD,optRankL,thetaN]= ...
    nextGCD2(GCDw_comp,optRankL,No_roots,derCount, ...
    DegM,A,Threshold,bestCol,Type)
% This function returns the degree of the AGCD of telda_f and its
% derivative.
% GCDw_comp : The polynomial, such that the degree of the AGCD of this
%             polynomial and its derivative is returned. It is
% expressed in the w variable.
% optRankL  : The rank loss of the Sylvester matrix from the
%             previous AGCD computation
% No_roots  : The number of distinct roots of the exact form
%             of the polynomial p, where the GCD of p and its
% derivative is to be computed
% derCount : The number of the AGCD computation
% DegM : The degree of the given polynomial whose roots
% are to be computed
% A : An integer that checks the sum of the multiplicities
% of the roots computed thus far.
% Threshold : The stopping threshold in the least squares problem
%             with an equality constraint (the LSE problem)
% bestCol : This parameter is equal to 1 or 2, depending on the
% method to be used for the calculation of the optimal
% column for the computation of a structured low rank
% approximation of the Sylvester resultant matrix
% 1 : The optimal column is chosen using the first
% principal angle (angle between subspace)
% 2 : The optimal column is chosen using the residual of
% an approximate linear algebraic equation
% Type : This parameter is equal to 1 or 2, depending on the
% method used to calculate the multiplicities of the
% roots:
% 1 : The multiplicities are computed
% 2 : The multiplicities are exact
% Calculate the exact rank loss
ErankL=optRankL-No_roots ;
% Calculate the degrees of GCDw_comp and its derivative.
m=length (GCDw_comp) -1 ;
n=m-l ;
% Differentiate the polynomial GCDw_comp.
der_GCDw_comp=polyder(GCDw_comp);
% Copy the polynomials from the previous AGCD computation into
% other polynomials.
f=GCDw_comp;
g=der_GCDw_comp ;
% Calculate the optimal values of theta and alpha.
[thetaO, alphaO] =optimal_linprog (GCDw_comp, der_GCDw_comp) ;
% Transform the polynomials GCDw_comp and der_GCDw_comp to the w variable.
% (1) Calculate the vector of coefficients of f (w) .
GCDw_comp=GCDw_comp.*thetaO.^(m:-1:0);
% (2) Calculate the vector of coefficients of g(w).
der_GCDw_comp=der_GCDw_comp.*thetaO.^(n:-1:0);
% Note that f and GCDw_comp are the same polynomial, but in different
% w scales. Calculate the degree of the AGCD of GCDw_comp and its
% derivative.
[computedRank ] =findRank2 ( f , GCDw_comp, ErankL, DegM, A, derCount-1 ) ;
if Type==1
optRankL=computedRank ;
else
optRankL=ErankL ;
end
% Refine GCDw_comp using SNTLN applied to the Sylvester matrix of it
% and its derivative.
% On exit, telda_f and telda_g are the corrected forms of the
% polynomials f and g. They are normalised by the geometric means
% of their coefficients, and alpha is included in telda_g.
[telda_f , telda_g, thetaN] =SNTLNrefine ( f , g, GCDw_comp, ...
alphaO*der_GCDw_comp, alphaO, thetaO, optRankL, Threshold, bestCol ) ;
% Plot the singular values of the polynomials after their refinement.
[xAxis,yAxis,yValue,GCD]=Draw_SV_der_I(telda_f,telda_g,ErankL,optRankL);
figure
plot(xAxis,yAxis,'-ko','LineWidth',1, ...
    'MarkerEdgeColor','r', ...
    'MarkerFaceColor','r', ...
    'MarkerSize',6)
hold on
plot (m+n-ErankL, yValue, ' ks ' , ' MarkerFaceColor ' , ' k ' , ' MarkerSize ' , 8 ) ;
xlabel('\it i ' , ' FontSize ' , 16 )
ylabel('log_{10} \sigma_{\iti}/\sigma_{\it1}','FontSize',16)
Appendix 47 Deconvolution2
function [M] =Deconvolution2 (M)
% This function performs d deconvolutions of d+1 polynomials
% using a linear structure preserving matrix method. These d
% deconvolutions of the d+1 polynomials f_{i}(y) are
% h_{i} (y)=f_{i-l} (y) /f_{i} (y) , i=l,...,d.
% M is a structure that has the following fields:
% 'f' contains the polynomials
% 'df' contains the degrees of each polynomial
% 'h' contains the results from the deconvolution
% 'n' contains the degrees of the polynomials 'h'
% 'z' contains the structured perturbations applied to the polynomials 'f'
% 'dz' contains the degrees of the polynomials z
% Since the length of 'f' is the longest field, the size of M will be
% equal to the length of f, that is, the number of polynomials.
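% A minimal hypothetical usage sketch (not in the original listing),
% assuming LSE and ChauchyM from these appendices are on the path:
% deconvolve the chain f_0=(y-1)^3, f_1=(y-1)^2, f_2=(y-1); each
% quotient h_i=f_{i-1}/f_i should be returned as [1 -1].
%   M(1).f=[1 -3 3 -1]; M(2).f=[1 -2 1]; M(3).f=[1 -1];
%   M=Deconvolution2(M); % M(1).h and M(2).h hold the quotients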
% Define some constants.
PolyNum=size (M, 2 ) ; % number of polynomials f_{i}, that is, d+1
deconvNum=PolyNum-l ; % number of deconvolutions, that is, d
% Initialise some variables.
dfSum=0; % the sum of the degrees of the polynomials 'f'
hdSum=0; % the sum of the degrees of the polynomials 'h'
for i=1:1:PolyNum
    % Calculate the degree of the polynomial f_{i}, and then add it to
    % dfSum to obtain the sum of the degrees of the polynomials f.
    M(i).df=length(M(i).f)-1;
    dfSum=dfSum+M(i).df;
    if i==PolyNum-1
        % The variable Mm is equivalent to the variable 'M'
        % in the report on deconvolution. See Section 3.3.
        Mm=dfSum+i;
    end
% The next condition is required for the polynomials h_{i}
if i>1
% Calculate the degree of the polynomial h_{i} and then
% sum these degrees.
M(i-l) .n=M(i-l) .df-M(i) .df;
hdSum=hdSum+M(i-l) . n;
end
end
% Mml and N are equivalent to the variables 'M1' and 'N' in the report
% on deconvolution. See Section 3.3.
Mml=dfSum+PolyNum;
N=hdSum+deconvNum;
%Initialise the vector z.
z= [ ] ;
for i=1:1:PolyNum
    M(i).zO=zeros(M(i).df+1,1);
    z=[z;M(i).zO];
end
% Construct the coefficient matrices C and E, where C stores the
% coefficients of the polynomials f_{i}, and E stores the structured
% perturbations stored in the vector z.
C=ChauchyM(M,Mm,N, 1) ;
E=ChauchyM(M,Mm,N, 2) ;
% Create the right hand side vector b.
row=1;
for i=1:1:PolyNum-1
b ( row : 1 : ( row+M ( i ) . df ) , 1 ) =M ( i ) . f ' ;
row=row+M(i) .df+1;
end
% Calculate the initial value of the vector that stores the
% polynomials h_{i} by solving a least squares problem. Compute
% the residual t of the solution.
h=pinv(C) *b;
hO=h;
t=b-C*h;
% Place the polynomials h_{i} in the structure M.
row=1;
for i=1:1:deconvNum
M ( i ) . h=h ( row : 1 : row+M ( i ) . n ) ;
row=row+M(i) .n+1;
end
% Create the big matrix Y, which is formed from d Cauchy matrices.
% The variables r and c represent the number of rows and columns
% respectively that are needed for each Cauchy matrix.
Y=zeros (Mm, Mml ) ;
row=1;
col=M(1).df+2;
for i=2 : 1 : PolyNum
c(i-l)=M(i) .df ;
r(i-l)=c(i-l)+M(i-l) .n+1;
Y(row:1:row+r(i-1)-1,col:1:col+(c(i-1)+1)-1)=cauchy(M(i-1).h,M(i).df);
row=row+r(i-1);
col=col+c(i-1)+1;
end
% Creating the matrix P.
P= [eye (Mm) , zeros (Mm, M (PolyNum) .df+1) ] ;
% Define the matrices G and F, and vectors s and y, required for the
% solution of the LSE problem:
% min ||Fy-s|| subject to Gy=t.
G= [C+E Y-P] ;
F=eye (N+Mml ) ;
s=- [h-hO; z] ;
y= [ h ; z ] ;
% Initialise the threshold and number of iterations for the
% solution of the LSE problem.
TH=3 ;
ite=0;
while TH>10^-12
ite=ite+l; % increment the counter for the number of iterations
% Break out of the loop if more than 50 iterations are required.
if ite>50
break
end
% Solve the LSE problem: min ||Fy-s|| subject to Gy=t.
y_lse=LSE(F,s,G,t);
% Update the solution vector.
y=y+y_lse ;
h=y(1:1:N);
z=y(N+1:1:end);
% Use the vectors h and z to update the structure M.
% (1) Update M.zO.
row=1;
for i=1:1:PolyNum
    M(i).zO=z(row:1:row+M(i).df);
row=row+M(i) . df+1;
end
% (2) Update M.h.
row=1;
for i=1:1:deconvNum
    M(i).h=h(row:1:row+M(i).n);
    row=row+M(i).n+1;
end
% Update the matrix Y.
Y=zeros (Mm, Mml ) ;
row=1;
col=M(1).df+2;
for i=2 : 1 : PolyNum
c(i-l)=M(i) .df ;
r(i-1)=c(i-1)+M(i-1).n+1;
Y (row: 1 : row+r (i-l)-l,col:l:col+(c(i-l)+l)-l)=...
cauchy (M(i-l) .h,M(i) .df ) ;
row=row+r(i-1);
col=col+c(i-1)+1;
end
% Update the matrices C, E and G, and the vectors s and t.
C=ChauchyM(M,Mm,N,1);
E=ChauchyM(M,Mm,N, 2) ;
G= [C+E Y-P] ;
s=- [h-hO; z] ;
t=(b+P*z)-(C+E)*h;
normt=norm(t);
% Update the threshold TH.
TH=normt/norm(b+P*z) ;
end % while loop

Appendix 48 creatJacobian
function Jac=creatJacobian (a)
% This function creates the Jacobian matrix of the polynomial
% whose roots and multiplicities are stored in the matrix a.
% The first column of a stores the values of the roots, and the
% second column of a stores the multiplicities of the roots.
% Define the roots and multiplicities of the polynomial
% defined by the vector a.
root=a ( : , 1 ) ;
mult=a ( : , 2 ) ;
% Set the size of the Jacobian matrix.
Jac=zeros ( sum (mult ) , length (root ) ) ;
% Define the entries of the Jacobian matrix,
for c=1:1:length(root)
s=-mult (c) *creatPoly ( [root (c) , mult (c) -1 ] ) ;
a2=[a(l:c-l, :) ; a ( c+1 : end, : ) ] ;
Jac ( : , c ) =conv ( s , creatPoly ( a2 ) ) ;
end
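% Illustrative check (not in the original listing), assuming creatPoly
% from an earlier appendix builds a polynomial from a roots and
% multiplicities matrix: for a=[2 2;3 1], that is, (y-2)^2 (y-3) with
% monic coefficient vector [-7 16 -12] (leading 1 omitted), the first
% column of creatJacobian(a) is [-2;10;-12], the derivative of that
% coefficient vector with respect to the root 2.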
Appendix 49 findRank2
function [spot] =findRank2 (f , GCDw_comp, ErankLoss, DegM, A, D)
% f         : The polynomial, such that the GCD of this polynomial
%             and its derivative is calculated. This polynomial
%             is expressed in the previous w variable.
% GCDw_comp : The polynomial, such that the GCD of this polynomial
%             and its derivative is calculated. This polynomial is
%             expressed in the current w variable.
% ErankLoss : The exact rank loss associated with this AGCD
%             computation
% DegM      : The degree of the given polynomial whose roots are to
%             be computed
% A         : An integer that checks the sum of the multiplicities
%             of the roots computed thus far
% D         : The counter of the previous AGCD computation
% Note: f and GCDw_comp represent the same polynomial, but in
% different w variables.
% Define the degrees of GCDw_comp and its derivative.
m=length (GCDw_comp) -1 ;
n=m-l ;
% Start the test for the determination of the degree of the AGCD.
% Collect the results from Methods 1, 2 and 3.
[minAngle, minResidual , errorM] =ABSRC3_info2 ( f ) ;
Gradient=zeros ( length (minAngle ) , 1 ) ;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Method 1: The first principal angle
% This method requires the gradient of the graph of the angles.
% This requires more than one point,
if length (minAngle ) > 1
for i=1:1:length(minAngle)-1
    Gradient(i)=log10(minAngle(i+1))-log10(minAngle(i));
end
end % if
if length (minAngle ) ==1
testl=1;
else
[Angvalue, testl ] =max (Gradient ) ;
end
% Method 2: The residual of an approximate linear algebraic equation
% This method also requires the gradient of a graph,
if length (minResidual ) > 1
for i=1:1:length(minResidual)-1
    Gradient(i)=log10(minResidual(i+1))-log10(minResidual(i));
end
end % if
if length(minResidual)==1
    test2=1;
else
[Angvalue, test2 ] =max (Gradient ) ;
end
% Method 3: This method is only valid for a polynomial and its derivative
% Obtain the results from Method 1.
C3_angle=errorM ( 1 , : ) ;
% Find the minimum value and sort the results in ascending order
[AngleValue, test3 ] =min (C3_angle) ;
[secodtest, test33 ] =sort (C3_angle) ;
% Obtain the results from Method 2, and as above, find the minimum % value.
C3_residual=errorM ( 2 , : ) ;
[AngleValue, test4 ] =min (C3_residual ) ;
% The results from the four tests have been gathered and they can
% now be used to determine the degree of the AGCD.
% If the four tests agree and the degree of the AGCD is not equal to
% one, do not perform any more tests.
if (testl==test2) && (test3==test4 ) && (testl==test3 )
spot=testl; % this is the degree of the AGCD
% Some polynomials for Method 3 yield a low value at k=1 (the
% first point on the left of the graph) . This point requires
% special consideration if it is determined that spot=l, that is,
% the computed degree of the AGCD is equal to one.
if spot==l
% Check that there is more than one point on the graph.
if length(test33)>1
    % rem is the degree of the polynomial not yet accounted for.
    rem=DegM-(A+D);
if rem < (2*D+3)
% spot=1 is a spurious minimum and thus consider the
% second minimum.
spot=test33 (2) ;
end
end
end % spot=l
else
% Come here if the four tests do not yield the same result for
% the degree of the AGCD. In this circumstance, only consider
% Method 3, using the result of Method 1.
C3_angle=errorM ( 1 , : ) ;
[AnglesInOrder,Index]=sort(log10(C3_angle));
spot=Index ( 1 ) ;
% Perform the same test as above in order to determine if
% spot=l is a spurious minimum.
% Note: The next six lines eliminate the first spurious minimum,
% but it cannot be guaranteed that the second minimum, that is,
% Index(2), is a genuine minimum. The estimate of the degree of
% the AGCD may therefore still be incorrect.
if spot==1
    rem=DegM-(A+D);
if rem < (2*D+3)
spot=Index ( 2 ) ;
end
end
% If C3_angle has a sharp minimum, consider this point. If, however,
% there are two minima whose y-values differ by less than 0.9,
% choose the minimum on the right.
Diff=zeros ( length (C3_angle ) , 1 ) ;
for i=1:1:length(C3_angle)-1
    Diff(i)=AnglesInOrder(i+1)-AnglesInOrder(1);
if abs(Diff (i) )<=0.9
if Index ( 1 ) <Index ( i+1 )
spot=Index ( i+1 ) ;
end
end
end % for
end % if (testl==test2 ) && (test3==test4) && (testl==test3 )
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Plot the results of the rank estimation tests.
figure
x=1:1:min(m,n);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The results of Method 1.
subplot (2,2,1)
plot(x,log10(minAngle),'-ko','LineWidth',1, ...
    'MarkerEdgeColor','r', ...
    'MarkerFaceColor','r', ...
    'MarkerSize',6)
xlabel('\it k ' , ' FontSize ' , 16 )
ylabel ( ' log_{10}\it \phi_{k} ' , 'FontSize' ,16)
% Remove the comment in the next two lines if a black star is to be
% placed at the theoretically exact rankloss.
% hold on
% plot(ErankLoss,log10(minAngle(ErankLoss)),'b*');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The result of Method 2.
subplot (2,2,2)
plot(x,log10(minResidual),'-ko','LineWidth',1, ...
    'MarkerEdgeColor','r', ...
    'MarkerFaceColor','r', ...
    'MarkerSize',6)
xlabel('\it k ', ' FontSize ', 16 )
ylabel ( ' log_{10}\it r_{k} ' , 'FontSize' ,16)
% Remove the comment in the next two lines if a black star is to be
% placed at the theoretically exact rankloss.
% hold on
% plot(ErankLoss,log10(minResidual(ErankLoss)),'b*');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The result of Method 3, using Method 1.
C3_angle=errorM(1,:);
subplot(2,2,3)
plot(x,log10(C3_angle),'-ko','LineWidth',1, ...
' MarkerEdgeColor ' , ' r ' , ...
' MarkerFaceColor ' , ' r ' , ...
'MarkerSize ' , 6)
xlabel('\it k ' , ' FontSize ' , 16 )
ylabel ( 'log_{ 10 } \it error_{ k }', ' FontSize ', 16 )
% Remove the comment in the next two lines if a black star is to be
% placed at the theoretically exact rankloss.
% hold on
% plot(ErankLoss,log10(C3_angle(ErankLoss)),'b*');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The result of Method 3, using Method 2.
C3_angle=errorM ( 2 , : ) ;
subplot (2, 2, 4)
% plot(x,log10(C3_angle),'ro',x,log10(C3_angle),'k--')
% hold on
plot(x,log10(C3_angle),'-ko','LineWidth',1, ...
' MarkerEdgeColor ' , ' r ' , ...
' MarkerFaceColor ' , ' r ' , ...
'MarkerSize' , 6)
xlabel('\it k ', ' FontSize ', 16 )
ylabel (' log_{ 10 } \it error_{ k }', ' FontSize ', 16 )
% Remove the comment in the next two lines if a black star is to be
% placed at the theoretically exact rankloss.
% hold on
% plot(ErankLoss,log10(C3_angle(ErankLoss)),'b*');
Appendix 50 SNTLNrefine
function [telda_f , telda_g, theta] =SNTLNrefine ( f , g, fw, gw, ...
alphaO, thetaO, rankloss , Threshold, bestCol )
% This procedure uses the method of SNTLN to refine the polynomial
% and its derivative, such that they have a non-constant GCD
% f         : The polynomial, expressed in the variable w from the
%             previous AGCD computation, whose roots are to be computed
% g         : The derivative of f
% fw        : The polynomial f, expressed in the current variable w
% gw        : The polynomial g, expressed in the current variable w
% alphaO    : The optimal value of alpha
% thetaO    : The optimal value of theta
% rankloss  : The degree of the AGCD of fw and gw, which is equal
%             to the rank loss of their Sylvester matrix
% Threshold : The stopping criterion in the iterative solution
%             of the LSE problem
% bestCol   : This parameter is equal to 1 or 2, depending on the
%             method to be used for the calculation of the optimal
%             column for the computation of a structured low rank
%             approximation of the Sylvester resultant matrix
%             1 : The optimal column is chosen using the first
%                 principal angle (angle between subspaces)
%             2 : The optimal column is chosen using the residual of
%                 an approximate linear algebraic equation
% telda_f   : The corrected form of the polynomial f
% telda_g   : The corrected form of the polynomial g
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
k=rankloss ;
m=length(fw)-1; % the degree of fw
n=m-1; % the degree of the derivative of fw
% Normalise fw and gw by the geometric means of their coefficients,
[fw, fscaler] =GMnorm_denoscaler (fw) ;
[gw, gscaler ] =GMnorm_denoscaler (gw) ;
% fscaler is the reciprocal of the geometric mean of the coefficients
% of the entry vector fw.
% gscaler is the reciprocal of the geometric mean of the coefficients
% of the entry vector gw.
% On exit, fw is normalised by the geometric mean of its coefficients.
% On exit, gw is normalised by the geometric mean of its coefficients.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% These polynomials are used in the iterative scheme.
f_bar=fscaler.*f;
g_bar=gscaler.*g;
% Calculate the optimal column of the Sylvester matrix and
% its subresultant matrices.
switch bestCol
case 1 % the first principal angle is used
q_col=ColumnAngle ( fw, alphaO*gw) ;
case 2 % the residual of a linear algebraic equation
q_col=ColumnRes ( fw, alphaO*gw) ;
end
q=q_col (rankloss ) ;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Start the iterative scheme and the method of SNTLN
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Form the kth subresultant matrix, and the matrix Ak and vector Ck,
% where k is the rankloss, and q is the associated optimal column.
[Ak,Ck]=KthSylvester(fw,alphaO*gw,k,q);
% Obtain the least squares solution of (Ak)(x)=Ck, and calculate
% the residual.
x=Ak\Ck;
xO=x ;
r= (Ck-Ak*x) ;
% Initialise some variables for the method of SNTLN
% The structured perturbations of the coefficient matrix.
z=zeros (m+n+2 , 1 ) ;
% The perturbations of the right hand side vector.
hk=zeros (size (Ck) ) ;
% The derivative of hk with respect to theta.
hk_dt=hk;
% The derivative of hk with respect to alpha.
dh_da=hk ;
% The error matrix.
Ek=zeros (size (Ak) ) ;
% The derivative of the error matrix with respect to alpha.
Ek_da=Ek ;
% The derivative of the error matrix with respect to theta.
Ek_dt=Ek;
% Initialise the derivatives of Ak and ck with respect to alpha.
Ak_da=Creat_dAkda(n,k,q,alphaO,Ak);
dck_da=Creat_dckda(n,k,q,alphaO,Ck);
% Initialise the derivatives of f and g with respect to theta.
df_dt=Creat_dt(fw,thetaO,m);
dg_dt=Creat_dt(alphaO*gw,thetaO,n);
% Form the derivatives of Ak and ck with respect to theta.
[Ak_dt, Ck_dt] =KthSylvester (df_dt, dg_dt, k, q) ;
% Form the matrices Y and P.
Y=Creat_Y_k (m, n, k, q, x, alphaO, thetaO) ;
P=Creat_P (m, n, k, q, thetaO) ;
% Initialise some variables.
alpha=alphaO;
theta=thetaO;
y= [ z ; x; alpha; theta] ;
% Set the initial value of the threshold in SNTLN. It is set to a large
% number so that SNTLN will be executed once, but it is changed later.
TH=3;
% The counter for the number of iterations for the LSE problem.
iet_conut=0 ;
% The scale factor alpha multiplies the polynomial g, and thus if the
% optimal column contains the coefficients of f, alpha is equal to one.
if q < (n-k+2)
alphaq=l ;
else
alphaq=alpha ;
end
% Define the matrices H_z, H_x, H_a and H_t, and thus define the matrix C.
H_z=Y-alphaq*P;
H_x=Ak+Ek ;
H_a= (Ak_da+Ek_da) *x- (dck_da+dh_da) ;
H_t= (Ak_dt+Ek_dt) *x- (Ck_dt+hk_dt ) ;
C= [ H_z H_x H_a H_t ] ;
% Initialise the matrix E and right hand side vector s of the
% function to be minimised.
E=eye ( 2 *m+2 *n-2 *k+5 ) ;
s=- [ z ; x-xO; alpha-alphaO; theta-thetaO] ;
while TH>Threshold
iet_conut=iet_conut+l ; % increment the counter for the iterations
% Stop the iterations if ite is greater than 100.
if iet_conut>100
break
end
% Solve the LSE problem and update the solution.
y_lse=LSE (E, s, C, r) ;
y=y+y_lse ;
% Calculate the parts of y that define z, x, alpha and theta.
z=y ( 1 : m+n+2 ) ;
x=y(m+n+3:2*m+2*n-2*k+3) ;
alpha=y ( 2 *m+2 *n-2 *k+4 ) ;
theta=y ( 2 *m+2 *n-2 *k+5 ) ;
% Update the vector s
s=- [ z ; x-xO; alpha-alphaO; theta-thetaO] ;
% Update all the variables before the next iteration.
[C,r,TH,telda_f,telda_g]=LSE_Updating(z,x,alpha,theta,f_bar,g_bar,k,q);
% telda_f and telda_g are the corrected forms of the given inexact
% polynomials f and g, respectively. The polynomials telda_f and
% telda_g are not normalised by the geometric means of their
% coefficients, and alpha is included in telda_g.
end % end while
% Calculate the normalised forms of the corrected polynomials
% telda_f and telda_g.
telda_f=geomecoeff (telda_f ) ;
telda_g=alpha*geomecoeff (telda_g) ;
Appendix 51 ChauchyM
function C=ChauchyM(M,Mm,N,Type)
% This function forms the coefficient matrix C used in the structured
% deconvolution.
% Type = 1 if the elements of C are equal to the polynomials f
% Type = 2 if the elements of C are equal to the structured
% perturbations z
% The matrix C is a block Cauchy matrix, and it is built up by
% forming these Cauchy matrices and then placing them in C.
% Initialise some data.
PolyNum=size (M, 2 ) ;
row=1;
col=1;
C=zeros(Mm,N);
for i=2 : 1 : PolyNum
c=M(i-l) .n;
r=M(i) . df+c+1;
if Type==1 % use M(i).f to define C
C (row : 1 : row+r-1 , col : 1 :col+ (c+1) -1) =cauchy (M ( i ) . f , c ) ;
else % Type=2, and thus use M(i) .zO to define C
C (row : 1 : row+r-1 , col : 1 :col+ (c+1) -1) =cauchy (M ( i ) . zO , c ) ;
end
% Update the row and column indices for the next Cauchy matrix
% to be placed in C.
row=row+r ;
col=col+c+l ;
end
Appendix 52 ABSRC3_info2
function [minAngles , minResiduals , error ] =ABSRC3_info2 ( f )
% This function performs some calculations on the polynomial f in order
% to calculate the degree of the AGCD of f and its derivative.
% minAngles   : A vector that stores the data for calculating the
%               degree of the AGCD of f and its derivative using the
%               first principal angle
% minResidual : A vector that stores the data for calculating the
%               degree of the AGCD of f and its derivative using the
%               residual of an approximate equation
% error       : A matrix that stores the data for calculating the
%               degree of the AGCD of f and its derivative using
%               Method 3, which is only valid for a polynomial and
%               its derivative
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
givenPoly=f;
m=length ( f ) -1 ; % the degree of f
n=m-l; % the degree of the derivative of f
% Calculate g, the derivative of f, and normalise f and g by the
% geometric means of their coefficients.
g=polyder ( f ) ;
f=geomecoeff (f ) ;
g=geomecoeff (g) ;
% Calculate the optimal values of alpha and theta.
[thetaO, alphaO] =optimal_linprog ( f , g) ;
% Transform the polynomials f and g to the w variable
fw=f.*thetaO.^(m:-1:0); % the vector of coefficients of f(w)
gw=g.*thetaO.^(n:-1:0); % the vector of coefficients of g(w)
agw=alphaO*gw; % agw = (alpha) x (gw)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Use three methods to generate data to calculate the degree of an
% AGCD of f and its derivative.
% Method 1: The first principal angle
% Method 2: The residual of an approximate linear algebraic equation
% Method 3: A criterion that must be satisfied between f and g
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Method 1: The first principal angle
minAngles=zeros ( 1 , min (m, n ) ) ;
for k=1:1:min(m,n) % loop for all the subresultant matrices
% Form the kth subresultant matrix Sk
Sk=KthSylvesterM(fw, agw, k) ;
column_Sk=size ( Sk, 2 ) ; % the number of columns of Sk
angle_Vect=zeros(1,column_Sk);
% Compute the angle between each column of Sk and the space spanned
% by the other columns.
for q=1:1:column_Sk % loop for each column of Sk
[Ak, ck] =KthSylvester (fw, agw, k, q) ;
[r, c] =size (Ak) ; % The number of rows and columns of Ak
% Calculate the first principal angle between ck, the qth column
% of Sk, and the other columns of Sk.
% Step 1: Compute ul.
ul=ck/norm(ck);
% Step 2: The columns of Nl define an orthonormal basis for the
% space spanned by the columns of Ak.
% Nl is a rectangular matrix with orthonormal columns, and
% R is a square upper triangular matrix.
[Nl, R] =qr (Ak, 0 ) ; % the 'thin' QR factorisation
[P, Seg,Q]=svd(Nl) ;
% The columns of N2 define an orthonormal basis for the
% complement of Nl .
N2=P ( : , c+1 :r) ;
sigma_Vect=svd (ul ' *N2 ) ;
angle_Vect (q) =asin ( sigma_Vect ) ;
end % q loop for columns of subresultant matrix
% minAngles(k) stores the minimum angle, and q_col(k) stores
% the column of Sk for which the minimum angle is achieved, for
% the kth subresultant matrix.
[minAngles (k) , q_col (k) ] = min ( angle_Vect ) ;
end % k loop for subresultant matrices
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Method 2: The residual of an approximate equation
minResiduals=zeros ( 1 , min (m, n ) ) ;
for k=1:1:min(m,n)
% Form the kth subresultant matrix Sk
Sk=KthSylvesterM(fw, agw, k) ;
SK_column=size ( Sk, 2 ) ; % the number of columns of Sk
residuals=zeros (1, SK_column ) ;
% Compute the residual between each column of Sk and the space
% spanned by the other columns.
for col=1:1:SK_column
% Calculate the matrix Ak and vector ck .
[Ak, ck] =KthSylvester (fw, agw, k, col) ;
xk=pinv(Ak)*ck; % the least squares solution of (Ak)(xk)=ck
residuals(col)=norm(Ak*xk-ck); % the non-normalised residual
end % loop for the columns, for the kth subresultant matrix
% minResiduals(k) stores the smallest residual and q_res(k) stores
% the column of Sk for which the minimum residual is achieved,
% for the kth subresultant matrix.
[minResiduals (k) , q_res (k) ] =min (residuals) ;
end % k loop for subresultant matrices
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Method 3: A criterion between a polynomial and its derivative
error=zeros(2,m-1);
% Store the results from Methods 1 and 2.
optColumnM= [q_col ; q_res ] ;
% Define a constant that is required later.
Lamda=(abs(givenPoly(m+1))/factorial(m))^(1/m) ...
    /(prod(abs(givenPoly).^(1/(m*(m+1)))));
% Method 3 requires the results from Methods 1 and 2. Go through
% the loop for Method 1 (j=1) and Method 2 (j=2).
for j=1:1:2
% Define the vector that stores the optimal column.
optColumn=optColumnM ( j , : ) ;
for k=1:1:m-1 % loop for the subresultant matrices
% Form the kth subresultant matrix, and return the matrix Ak
% and vector ck.
[Ak,ck]=KthSylvester(fw,agw,k,optColumn(k));
xk=pinv(Ak)*ck; % the least squares solution of (Ak)(xk)=ck
% Form the coprime polynomials uk and vk from the vector xk.
x_vect=[xk(1:optColumn(k)-1);-1;xk(optColumn(k):end)];
vk=x_vect ( 1 : m-k ) ;
uk=-x_vect (m-k+1 : end) ;
% Obtain an estimate of the AGCD of f and its derivative g by
% solving f=(uk)(dk) and g=(vk)(dk).
Ckl=cauchy (uk, k) ;
Ck2=cauchy (vk, k) ;
Bk= [Ckl ; Ck2 ] ; % the coefficient matrix
bk=[fw,agw] ' ; % the right hand side vector
dk=pinv (Bk) *bk; % the least squares estimate of the AGCD
% Compute the error measure.
m_k_vec=m-k : -1 : 1 ;
k_vec=k : -1 : 1 ;
% Construct the matrices.
Lk=cauchy (dk, m-k-1 ) ;
Vk=thetaO/ (alphaO*Lamda) *Lk;
Uk=cauchy (k_vec ' . *dk ( 1 : end-1 ) , m-k) ;
R= [diag (m_k_vec, 0 ) , zeros (m-k, 1 ) ] ;
% Construct the error matrix error, which has two rows.
% First row: Error measure using optimal column from Method 1
% Second row: Error measure using optimal column from Method 2
error(j,k)= ...
    norm(Vk*vk-(Lk*R+Uk)*uk)/(norm(Vk*vk)+norm((Lk*R+Uk)*uk));
end % k loop for subresultant matrices
end % j loop for Methods 1 and 2
Appendix 53 Draw_SV_der_I
function [xAxis, yAxis, yValue, GCD] =Draw_SV_der_I (f , g, ErankL, optRankL)
% This function prepares some data for plotting some graphs.
% f : The vector of coefficients of a polynomial
% g : The derivative of the polynomial stored in f
% ErankL : The exact degree of the GCD of f and g
% OptRankL : The computed degree of the GCD of f and g
% xAxis : The data for the x-axis for plotting the singular
% values of the Sylvester matrix of f and g
% yAxis : The data for the y-axis for plotting the singular
% values of the Sylvester matrix S(f,g) of f and g
% yValue : The theoretical last non-zero singular value of S(f,g)
% GCD : The GCD of f and g
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Form the Sylvester matrix S of f and g, and calculate segNorm,
% the logarithm of the normalised singular values of S.
S=KthSylvesterM(f , g, 1) ;
sp=svd ( S ) ;
segNorm=log10(sp/sp(1));
xAxis=1:1:length(sp);
yAxis=segNorm;
m=length ( f ) -1 ; % the degree of f
n=m-l; % the degree of the derivative of f
% Define the theoretical last non-zero singular value of S.
yValue=segNorm (m+n-ErankL ) ;
% Continue if the degree of the GCD of f and g is greater than zero,
% that is, the polynomials are not coprime.
if optRankL~=0
    % Calculate the vector that stores, for each subresultant matrix,
    % the index of the optimal column. Use the criterion based on the
    % residual of an approximate linear algebraic equation.
q_col=ColumnRes ( f , g) ;
k=optRankL ;
% Form the kth subresultant matrix, and define the matrix Ak and
% vector ck, where ck is the optimal column whose index is
% calculated in ColumnRes.
[Ak, ck] = KthSylvester(f, g, k, q_col(k));
xk = Ak\ck; % the least squares solution of (Ak)(xk)=ck
% Form the coprime polynomials uk and vk from the vector xk.
x_vect = [xk(1:q_col(k)-1); -1; xk(q_col(k):end)];
vk = x_vect(1:n-k+1);
uk = -x_vect(n-k+2:end);
% Calculate dk, the GCD of f and g, by solving the equations
% f=(uk)(dk) and g=(vk)(dk).
Ck1 = cauchy(uk, k);
Ck2 = cauchy(vk, k);
Bk = [Ck1; Ck2]; % the coefficient matrix that stores uk and vk
bk = [f, g]'; % the right hand side vector
dk = Bk\bk; % the least squares solution of (Bk)(dk)=bk
GCD = dk';
end % if optRankL~=0
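By way of illustration, a hypothetical driver for this function is sketched below. It assumes that PM (Appendix 54) and the helper functions from the other appendices are on the MATLAB path; the example polynomial and its GCD degree are chosen for illustration only.
f = [1, PM([1 2], [3 2])]; % coefficients of (y-1)^3 (y-2)^2
g = polyder(f); % its derivative; GCD(f,g) = (y-1)^2 (y-2), degree 3
[xAxis, yAxis, yValue, GCD] = Draw_SV_der_I(f, g, 3, 3);
plot(xAxis, yAxis, 'o'); % log10 of the normalised singular values
xlabel('i'); ylabel('log_{10} \sigma_i / \sigma_1');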
Appendix 54 PM
function s=PM(roots, mult)
% This function uses convolution to define the polynomial s whose
% roots are stored in the vector roots, and the multiplicities
% of the roots are stored in the vector mult.
s = 1;
for i=1:1:length(mult)
for l=1:1:mult(i)
P = [1 -roots(i)];
s = conv(s, P);
end
end
% By construction, the leading coefficient of the polynomial
% defined by s is unity, and thus s is redefined to exclude
% this coefficient.
s = s(2:end);
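For example, an illustrative check (not part of the appendix):
s = PM([2 -1], [2 1]); % (y-2)^2 (y+1) = y^3 - 3y^2 + 0y + 4
% s = [-3 0 4], that is, the coefficients that follow the unit
% leading coefficient.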

Claims

1. A floating point arithmetic environment comprising an arithmetic unit for performing floating point arithmetic to a first degree of precision; further comprising
an input for receiving first data representing a first polynomial having at least one multiple root; said arithmetic unit being adapted to determine an approximate greatest common divisor of said first polynomial and a derivative of said first polynomial;
said arithmetic unit being adapted to determine the roots of said first polynomial using said approximate greatest common divisor; said roots being determined to a second degree of precision; said second degree being more accurate than said first degree.
2. A floating point arithmetic environment as claimed in claim 1, said arithmetic unit further comprising a pre-processor adapted to normalise the coefficients of said first polynomial and said derivative of said first polynomial.
3. A floating point arithmetic environment as claimed in claim 2 wherein said pre-processor adapted to normalise the coefficients of said first polynomial and said derivative of said first polynomial comprises a normalising unit adapted to reduce a difference in magnitude of the coefficients of the first polynomial and the derivative of the first polynomial.
4. A floating point arithmetic environment as claimed in either of claims 2 and 3 wherein said normalising unit adapted to reduce a difference in magnitude of the coefficients of the first polynomial and the derivative of the first polynomial is adapted to normalise the coefficients using a geometric mean.
5. A floating point arithmetic environment as claimed in any of claims 1 to 4 comprising a degree calculator adapted to determine the degree of said approximate greatest common divisor of said first polynomial and said derivative of said first polynomial.
6. A floating point arithmetic environment as claimed in claim 5 wherein said degree calculator is adapted to perturb at least one coefficient of at least one of said first polynomial and said derivative of said first polynomial such that a Sylvester matrix of said first polynomial and said derivative of said first polynomial post-perturbation is non-singular.
7. A floating point arithmetic environment as claimed in either of claims 5 and 6 wherein said arithmetic unit being adapted to determine an approximate greatest common divisor of said first polynomial and a derivative of said first polynomial comprises an adaptation arranged to calculate a structured low rank approximation of at least one Sylvester subresultant matrix.
8. A floating point arithmetic environment as claimed in claim 7, wherein said adaptation arranged to calculate said structured low rank approximation of said at least one Sylvester subresultant matrix comprises means to apply a method of structured non-linear total least norm, given said degree, to determine said approximate greatest common divisor.
9. A floating point arithmetic environment as claimed in any of claims 6 to 8, further comprising a polynomial factorisation unit adapted to determine an approximate factorisation of said first polynomial and said derivative of said first polynomial.
10. A floating point arithmetic environment as claimed in claim 9 further comprising an iterative solution unit adapted to iteratively apply a least squares equality constraint in determining the approximate greatest common divisor.
11. A floating point arithmetic environment as claimed in any of claims 1 to 10, further comprising a polynomial division unit arranged to determine simple roots of said first polynomial from said approximate greatest common divisor by polynomial division.
12. A digital signal processing method for recovering an original signal from a first digital signal bearing a corrupted version of said original signal, said first digital signal being represented as a polynomial having respective coefficients; the method comprising the steps of
receiving first inexact data representing said coefficients; said first inexact data being data bearing first corruption;
creating, using said first inexact data, at least two inexact polynomials;
determining the greatest common factor of said at least two inexact polynomials; and
outputting said greatest common factor as the recovered original signal; said recovered original signal bearing coefficients of exact data bearing no corruption or at least second inexact data bearing reduced corruption as compared to said first inexact data bearing said first corruption.
13. A method as claimed in claim 12 in which at least one of said original signal and said first signal comprises at least one of video data, audio data, image data or a communication signal.
14. A method for recovering a transmitted signal from a received signal; the method comprising the steps of
receiving data representing the received signal;
mapping said data to at least two inexact polynomials;
determining the greatest common factor of said at least two inexact polynomials; said greatest common factor corresponding to the transfer function or channel response of the medium between a transmitter that output the transmitted signal and a receiver that received the received signal;
processing the data representing the received signal using the greatest common factor to recover the transmitted signal.
15. A method as claimed in claim 14 wherein said mapping is arranged to map first data from a first portion of said data to a first polynomial of said at least two inexact polynomials and to map second data from a second portion of said data to a second polynomial of said at least two inexact polynomials.
16. A method as claimed in any of claims 12 to 15, wherein the step of determining the greatest common factor comprises the step of determining the rank of a matrix associated with said at least two inexact polynomials.
17. A method for recovering transmitted data from at least two received signals; the method comprising the steps of
receiving first data representing a first received signal of said at least two received signals;
receiving second data representing a second received signal of said at least two received signals;
creating, from said first and second data, at least two inexact polynomials;
determining the greatest common factor of said at least two inexact polynomials; said greatest common factor corresponding to said transmitted data or approximation thereto; and
outputting said greatest common factor as recovered transmitted data.
18. A method for processing a signal represented as a polynomial having coefficients; the method being executable within a floating point environment and comprising the steps of
receiving first inexact data representing said coefficients; said first inexact data being data bearing first noise;
creating, using said first inexact data, at least two inexact polynomials;
determining the roots and multiplicities of said roots of at least a first inexact polynomial of said at least two inexact polynomials using at least one greatest common factor associated with said at least two inexact polynomials; and
outputting a processed signal constituted as a polynomial bearing the determined roots as coefficients.
19. An apparatus for processing a first signal represented as a polynomial having
coefficients, the apparatus comprising:
an input interface for receiving first inexact data representing said coefficients, said inexact data being data bearing noise;
a processor system including a floating point arithmetic unit; and
a memory system for storing code that when executed by the processor system using the floating point arithmetic unit causes the processor system to:
create from said first inexact data at least two inexact polynomials;
determine a greatest common factor of said at least two inexact polynomials; and
output a recovered signal based on said greatest common factor, wherein said greatest common factor is one of said first inexact data with the noise removed or said first inexact data with the noise reduced as compared to said first inexact data bearing noise.
20. An apparatus for processing inexact data that is a corrupted version of exact data, wherein the inexact data represents a first polynomial having at least one root, said apparatus comprising:
an input interface for receiving the inexact data;
a processor system including a floating point arithmetic unit; and
a memory system for storing code which when executed by the processor system using the floating point arithmetic unit causes the processor system to:
determine an approximate greatest common divisor of said first polynomial and a derivative of said first polynomial; and
determine roots of said first polynomial using said approximate greatest common divisor and the derivative of the first polynomial; said roots representing approximations to said exact data bearing reduced corruption, or said exact data bearing no corruption.
21. An apparatus as claimed in claim 20, wherein said memory system further comprises code which when executed by the processor system using the floating point arithmetic unit causes the processor system to:
create a recovered signal from said determined roots.
22. An apparatus comprising means arranged to implement a method as claimed in any of claims 12 to 18.
23. Machine executable instructions arranged, when executed, to implement a method or floating point arithmetic environment or apparatus as claimed in any preceding claim.
24. Machine-readable storage storing machine executable instructions as claimed in claim 23.
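For illustration only, the root-finding principle recited in claims 1, 11 and 20 can be sketched as follows in MATLAB, with the greatest common divisor of the polynomial and its derivative supplied in exact form; the claimed environment instead computes an approximate greatest common divisor from inexact coefficients using the methods of the appendices.
p = [1 -8 24 -32 16]; % (y-2)^4: one root, y=2, of multiplicity 4
dp = polyder(p); % p' shares the factor (y-2)^3 with p
% In exact arithmetic gcd(p, p') = (y-2)^3 = y^3 - 6y^2 + 12y - 8,
% and polynomial division deflates p to the simple-root polynomial:
[q, r] = deconv(p, [1 -6 12 -8]); % q = [1 -2], r is the zero remainder
rootsSimple = roots(q); % returns 2, now computed as a simple root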
PCT/GB2011/050560 2010-03-19 2011-03-21 Signal processing system and method WO2011114174A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GBGB1004610.0A GB201004610D0 (en) 2010-03-19 2010-03-19 Signal processing
GB1004610.0 2010-03-19
GBGB1019534.5A GB201019534D0 (en) 2010-03-19 2010-11-18 Signal processing
GB1019534.5 2010-11-18

Publications (2)

Publication Number Publication Date
WO2011114174A2 true WO2011114174A2 (en) 2011-09-22
WO2011114174A3 WO2011114174A3 (en) 2012-12-20

Family

ID=42227994

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2011/050560 WO2011114174A2 (en) 2010-03-19 2011-03-21 Signal processing system and method

Country Status (2)

Country Link
GB (2) GB201004610D0 (en)
WO (1) WO2011114174A2 (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0700922D0 (en) * 2007-01-18 2007-02-28 Univ Sheffield Data processing system and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1004610A (en) 1961-07-13 1965-09-15 James Donald Robbins Dry cleaning composition
GB1019534A (en) 1965-01-01 1966-02-09 Tom Edgerton Clarke Hirst Power transmission universal joint

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
D.A. BINI, A. MARCO: "Computing curve intersections by means of simultaneous iterations", NUMER. ALGOR., vol. 43, 2006, pages 151 - 175, XP019453142, DOI: doi:10.1007/s11075-006-9048-0
FAUGERAS: "Three-Dimensional Computer Vision: A Geometric Viewpoint", 1993, THE MIT PRESS
J. T. KAJIYA: "Ray tracing parametric patches", COMPUTER GRAPHICS, vol. 16, 1982, pages 245 - 254
P. STOICA, T. SÖDERSTRÖM: "Common factor detection and estimation", AUTOMATICA, vol. 33, no. 5, 1997, pages 985 - 989, XP055038921, DOI: doi:10.1016/S0005-1098(96)00248-8
R.P. MARKOT, R.L. MAGEDSON: "Solutions of tangential surface and curve intersections", COMPUTER AIDED DESIGN, vol. 21, 1989, pages 421 - 427
S. GOEDECKER: "Remark on algorithms to find roots of polynomials", SIAM J. SCI. STAT. COMPUT., vol. 15, 1994, pages 1059 - 1063
S. PETITJEAN: "Algebraic geometry and computer vision: Polynomial systems, real and complex roots", JOURNAL OF MATHEMATICAL IMAGING AND VISION, vol. 10, 1999, pages 191 - 220
T. SEDERBERG, G. CHANG: "Best linear common divisors for approximate degree reduction", COMPUTER AIDED DESIGN, vol. 25, 1993, pages 163 - 168, XP000364339, DOI: doi:10.1016/0010-4485(93)90041-L

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113190790A (en) * 2021-03-30 2021-07-30 桂林电子科技大学 Time-varying graph signal reconstruction method based on multiple shift operators
CN113190790B (en) * 2021-03-30 2023-05-30 桂林电子科技大学 Time-varying graph signal reconstruction method based on multiple shift operators

Also Published As

Publication number Publication date
GB201004610D0 (en) 2010-05-05
WO2011114174A3 (en) 2012-12-20
GB201019534D0 (en) 2010-12-29

Similar Documents

Publication Publication Date Title
Rodríguez Total variation regularization algorithms for images corrupted with different noise models: a review
Sroubek et al. Multichannel blind deconvolution of spatially misaligned images
Quéau et al. Variational methods for normal integration
van den Berg et al. Stationary coexistence of hexagons and rolls via rigorous computations
Harker et al. Least squares surface reconstruction from gradients: Direct algebraic methods with spectral, Tikhonov, and constrained regularization
Quan et al. Data-driven multi-scale non-local wavelet frame construction and image recovery
Osher et al. Level set methods
Gao et al. Total generalized variation restoration with non-quadratic fidelity
Ullah et al. A new variational approach for restoring images with multiplicative noise
Tiirola Image denoising using directional adaptive variable exponents model
Ramlau Regularization properties of Tikhonov regularization with sparsity constraints
Gazzola et al. Inheritance of the discrete Picard condition in Krylov subspace methods
Elser Random projections and the optimization of an algorithm for phase retrieval
WO2011114174A2 (en) Signal processing system and method
Gallo The SO (3) and SE (3) Lie Algebras of Rigid Body Rotations and Motions and their Application to Discrete Integration, Gradient Descent Optimization, and State Estimation
Mei et al. HPM‐based dynamic sparse grid approach for Perona‐Malik equation
Jia et al. A new TV-Stokes model for image deblurring and denoising with fast algorithms
Ji et al. A high-order source removal finite element method for a class of elliptic interface problems
Stuke et al. Estimation of multiple motions: regularization and performance evaluation
Fairag et al. An effective algorithm for mean curvature-based image deblurring problem
Zhang et al. Multi-scale variance stabilizing transform for multi-dimensional Poisson count image denoising
Donatelli et al. Antireflective boundary conditions for deblurring problems
Jung et al. Inverse polynomial reconstruction of two dimensional Fourier images
Buccini et al. A multigrid frame based method for image deblurring
Arico et al. The anti-reflective algebra: structural and computational analysis with application to image deblurring and denoising

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11714815

Country of ref document: EP

Kind code of ref document: A2