Title: Error Optimization in Digital to Analog Conversion
Field of the Invention The present invention relates to Digital to Analog
Conversion techniques and, more specifically, to special linearization techniques that improve the static and dynamic performance of a Digital to Analog Converter (DAC).
Background of the Invention
In Analog to Digital and Digital to Analog Conversion techniques, the principal law of operation is Shannon's sampling theorem. According to this well-known theorem, the information contained in a signal can be represented by its sampled values if the signal is sampled at a rate of at least twice the maximum frequency contained in the signal. The maximum frequency in the signal is called the Nyquist frequency fNyq and defines the signal frequency band of interest.
A Digital to Analog Converter (DAC) is the interface that transforms discrete time domain and discrete amplitude domain signals represented by digital input code values into analog electrical output signals in the form of a current, a voltage or an electrical charge value. Given that the converter is designed to have a resolution of N bits, a set of 2^N output values can be produced by the conversion. The larger the resolution N is, the more problematic it becomes to accurately represent the 2^N output values.
The fundamental building block in a DAC converter is an N-bit DAC core, which should operate with the utmost level of linearity possible. This building block is fed with digital code values and outputs the analog electrical (charge, current, voltage) equivalent values based on the use of 2^N unit electrical source elements, which are either charge, current or voltage sources. The term unit element will be used in the following to indicate charge, current or voltage units.
Each time a new input code w(m), where m is the discrete time index, is provided at the input, the DAC switches on w(m) unit elements in order to represent the input code by an electrical equivalent output value. Ideally, all the unit elements used should exhibit identical behaviour, i.e. all should have the same electrical value and should switch with the same time delay. More generally, every unit element should behave in the same way during switching.
In practice, several physical mechanisms limit the level of accuracy with which the discrete input values can be represented by their electrical equivalent. These mechanisms usually worsen the situation as the sampling rate and the signal frequency increase. Additionally, the more the resolution N increases, the more difficult it becomes to design elements with identical behaviour, because their electrical characteristics and properties are subject to variations caused by imperfections in the physical semiconductor process during production.
A DAC can be characterized as being a Nyquist or an oversampled DAC, based on the use of oversampling.
If a signal is sampled at a rate larger than twice the Nyquist frequency then this signal is oversampled. If the signal is sampled at a rate less than twice the Nyquist frequency, the signal is undersampled. In real converters, the signals are also quantized. The process of quantization is a non-linear memoryless operation which also adds distortion to the signal. Oversampling of a signal creates a surplus of samples that spreads the total quantization error power over a wider frequency range. In this manner, quantization error power in the band of interest, defined by the Nyquist frequency, is reduced. The performance of the DAC within the frequency band of interest (i.e. the baseband) - expressed by its dynamic range - is enhanced this way. This implies that we can use a converter with a resolution of N, but with a given oversampling ratio OSR = fs/(2fNyq), to achieve an effective resolution larger than N.
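The oversampling gain described above can be put into numbers with the standard SQNR formula; the sketch below is illustrative, assuming a full-scale sinusoidal input.

```python
import math

def sqnr_db(n_bits: int, osr: float = 1.0) -> float:
    """Theoretical SQNR of an N-bit converter with oversampling ratio OSR.

    The classic result for a full-scale sinusoid is 6.02*N + 1.76 dB;
    the 10*log10(OSR) term reflects that oversampling spreads the fixed
    quantization noise power over a wider band, leaving less of it in
    the signal band of interest.
    """
    return 6.02 * n_bits + 1.76 + 10.0 * math.log10(osr)

# A 12-bit Nyquist DAC and a 10-bit DAC oversampled 16x reach a
# comparable in-band SQNR:
nyquist_12 = sqnr_db(12)           # about 74 dB
oversampled_10 = sqnr_db(10, 16)   # about 62 dB + 12 dB, also about 74 dB
```

This is why a lower-resolution core with oversampling can match the effective resolution of a higher-resolution Nyquist core, as the text states.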
The additional use of noise shaping techniques increases the dynamic range even further, essentially transferring more quantization noise power away from the band of interest. The dynamic range, and hence the effective resolution of a
DAC is determined by how close its actual Signal to Noise Ratio (SNR) is to the theoretical Signal to Quantization Noise Ratio (SQNR).
In addition to the SNR, several additional figures of performance merit are used. There are two kinds of static performance merits: Differential Non Linearity (DNL), which describes the difference between two adjacent analog signal values compared to the step size, i.e. the Least Significant Bit (LSB), and Integral Non Linearity (INL), which describes the relative accuracy. The motivation to use oversampling and noise shaping techniques to achieve high accuracy in D/A converters is that these methods overcome the analog accuracy problem by trading digital complexity (oversampling, interpolation, noise shaping) and speed for the desired insensitivity to unit element physical problems. Ideally, the use of a core DAC with 1 bit would make extensive use of digital signal processing and would avoid the necessity of precise analog circuits. Because the sampling rate for a low bit DAC core needs to be much larger than the Nyquist rate, i.e. twice the maximum signal frequency to be converted, oversampling methods necessitate the use of circuitry with a significantly higher bandwidth than the signal to be converted. In addition, high order noise shaping mandates the use of high order analog filters, which are a serious drawback as they are non-trivial to design. Hence, oversampling converters are best suited for low frequency applications, not for the demands of wideband applications. Consequently, if a DAC with wideband frequency performance and large dynamic range is required, one is forced to use DAC
cores with a large resolution N, most often pure Nyquist DAC's. In such a case, the number of elements of the core, and the area they occupy, is such that it becomes extremely difficult to design them in such a way that after the silicon implementation they all behave in the same way. In this case, those physical mechanisms relevant to topology become a dominant limit on the obtainable performance of the converter. The significance of the problem mandates the employment of correction algorithms and techniques to provide linearity.
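The static figures of merit introduced above (DNL and INL) can be computed directly from a set of measured output levels; the following is a minimal sketch using an endpoint-fit convention, one of several common INL definitions.

```python
import numpy as np

def dnl_inl(levels):
    """DNL and INL in LSB from measured DAC output levels (endpoint fit).

    DNL[k] compares each actual step to the average step size (one LSB);
    INL[k] is the deviation of each level from the straight line through
    the two endpoint levels.
    """
    levels = np.asarray(levels, dtype=float)
    lsb = (levels[-1] - levels[0]) / (len(levels) - 1)
    dnl = np.diff(levels) / lsb - 1.0
    inl = (levels - levels[0]) / lsb - np.arange(len(levels))
    return dnl, inl

# Example: a 2-bit converter with one oversized and one undersized step.
dnl, inl = dnl_inl([0.0, 1.0, 2.2, 3.0])
```

Here the oversized middle step shows up as DNL = +0.2 LSB and the corresponding code deviates from the ideal line by INL = +0.2 LSB.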
Several proposals exist in the prior literature to select the unit elements so as to limit the accumulation of errors and to provide linearity. These proposals can basically be divided into three major groups. The first group comprises the so-called Dynamic Element Matching techniques, the second group is referred to as the Switching Sequences techniques and the third group is called Calibration techniques. The basic idea behind Dynamic Element Matching is that a given input code can be represented by different combinations of unit elements. Hence, upon repetition of the same pattern of input codes, if the representation of each code is varied, for example randomly, the generated signal error eventually will show a noise-like behaviour and properties. That is, a time averaging of the error takes place.
Essentially, this means that the error is decorrelated from the signal and the generated error power is spread over the frequency band. Hence,
Dynamic Element Matching can also be called a temporal averaging method.
If a particular number of unit elements is selected, the added error is the sum of the errors associated with each of the selected unit elements. Accordingly, every time a particular input code is converted, an inherent error summation takes place. The basic idea behind the technique of Switching Sequences is that if the error profile of each of the unit elements is known, the selection of unit elements, i.e. the switching sequences, can be controlled in such a way that for each code the result of the summation is as small as possible,
preferably zero. Therefore, the technique of Switching Sequences can also be called a spatial averaging method.
It is important to realize that the spatial method exploits the topological profile of errors in order to limit the error per converted code, while the temporal method leaves the error per code unchanged but relies only on altering the spectral properties of the total generated signal error. Calibration relies on the correction of the error in each individual unit element.
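The contrast between a fixed selection and temporal (DEM-style) averaging can be sketched as follows; the mismatch values and element count are purely illustrative.

```python
import random

# 16 unit elements with illustrative, zero-mean mismatch errors (in LSB)
errors = [0.01 * ((i % 4) - 1.5) for i in range(16)]

def convert(code, selection):
    """Electrical output: each selected unit contributes 1 LSB plus its error."""
    return sum(1.0 + errors[i] for i in selection)

code = 7
# Fixed selection: the same elements, and hence the same error, every time.
fixed = convert(code, range(code))

# DEM-style random selection: the error of each conversion varies.
rng = random.Random(0)
dem = [convert(code, rng.sample(range(16), code)) for _ in range(2000)]
dem_mean = sum(dem) / len(dem)
# The fixed error repeats with the signal (distortion); the randomized
# error averages out toward the ideal value of 7.0 and behaves like noise.
```

This is the temporal averaging described above: the per-code error is not reduced, but it is decorrelated from the signal.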
Those skilled in the art will appreciate that in an 8-bit DAC core, for example, which implies 2^8 - 1 = 255 unit elements, it practically becomes impossible, or will at least be very costly, to implement optimal temporal and spatial correction techniques. For these large numbers of elements it is also very difficult and almost impossible to use calibration based methods. Such methods are not of interest for the present invention.
Further, because these known methods are based on the assumption that the error profiles are fixed, i.e. that the error profile of a unit element will not alter, it will be appreciated that these methods will in no way provide an optimum solution for physical changes during actual operation of the DAC, such as the operating temperature of a DAC core and aging thereof, which definitely have an impact on the error profile of the unit elements.
Summary of the Invention
It is an object of the present invention to improve the static and dynamic behaviour of Digital to Analog Converters (DAC's) by providing a more accurate output signal compared to prior art DAC's.
In a first aspect according to the present invention, there is provided a method for error optimization in Digital to Analog Converters (DAC) comprising a plurality of selectable unit elements, wherein an input signal represented by a digital input code is converted
into a corresponding analog output signal by selecting a plurality of the unit elements in accordance with the input code, characterized in that the unit elements, for a particular input code, are selected as a result of processing at least one of a group comprising unit element error profile information, input signal type information and output signal type information, so as to satisfy at least one of a group comprising constraints on the DAC and cost functions.
In contrast to the above-disclosed prior art methods for error optimization, the method according to the present invention processes a plurality of parameters or variables in order to match a plurality of requirements set in relation to the accuracy of the conversion to be obtained.
The approach according to the present invention is more elaborate than the known temporal and spatial averaging methods, by including information with respect to the type of the input and output signals in order to satisfy constraints put on the DAC and cost functions.
The unit element error profile information, the input signal type information and the output signal type information used in the method according to the present invention may be a-priori known or can be extracted during operation of the DAC. With the latter, the method according to the invention accounts for alteration of the error profiles of the DAC core due to operating temperature, aging, etc., whereas no complex measurements have to be performed beforehand for obtaining error profile information of each unit element in a DAC core.
The constraints put on the DAC, in a further embodiment of the invention, essentially relate to placement and routing efficiency of a corresponding lay-out of the unit elements.
The cost functions essentially relate to the properties of the errors.
Those skilled in the art will appreciate that the method according to the invention is, of course, not limited to the above-mentioned constraints and cost functions.
In a preferred embodiment of the method according to the invention, the unit element error profile information comprises errors relating to electrical accuracy of a unit element and errors relating to switching performance of a unit element.
In a yet further embodiment of the invention, the processing comprises summation of the errors relating to electrical accuracy of selected unit elements and averaging of the errors relating to switching performance of the selected unit elements, wherein a selection of unit elements is provided, in accordance with the input code, such that the summation and averaging results are below a predetermined value, which value ideally equals zero.
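A minimal sketch of such a selection, using a greedy heuristic (an assumption for illustration, not the processing algorithm of the invention) that keeps the running sum of the selected elements' amplitude errors near zero:

```python
# Illustrative unit-element amplitude errors (in LSB); values are assumed.
errors = [0.03, -0.028, 0.011, -0.01, 0.02, -0.019, 0.005, -0.006]

def select(w, errors):
    """Greedily pick w elements so the accumulated error stays near zero."""
    chosen, total = [], 0.0
    pool = set(range(len(errors)))
    for _ in range(w):
        # Pick the element that moves the running error sum closest to zero.
        best = min(pool, key=lambda i: abs(total + errors[i]))
        chosen.append(best)
        total += errors[best]
        pool.remove(best)
    return chosen, total

naive_total = sum(errors[:4])        # sequential selection of 4 elements
chosen, total = select(4, errors)    # error-aware selection of 4 elements
# abs(total) ends up far below abs(naive_total): the summed error per
# converted code is pushed toward zero, as the embodiment requires.
```

The selection exploits opposite-signed errors so that the per-code summation result falls below a predetermined bound, ideally zero.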
The processing according to the invention comprises a stochastic converging algorithm.
The results of the processing in accordance with the method of the invention may be provided as a sequence or map S of unit elements to be selected for converting a particular input code.
This sequence or map S may be calculated externally (off-chip) of a DAC and may be implemented internally (on-chip) of the DAC. However, the method according to the invention may also be implemented internally (on-chip) of the DAC, such that the sequence or map S of unit elements for the conversion of an input code is calculated internally (on-chip) of the DAC. In a second aspect according to the present invention, there is provided a sequence or map S for selecting unit elements of a DAC core with respect to a given digital input code.
In a third aspect according to the present invention, there is provided a DAC operated and designed in accordance with the method of the invention, comprising decoder means and a DAC core, the DAC core comprising a plurality of unit elements, wherein the decoder means are
arranged for selecting a plurality of the unit elements for converting a digital input code into a corresponding analog output signal, characterized by mapping means, interposed between the decoder means and the DAC core, for selecting a plurality of the unit elements in accordance with a sequence or map provided by processing means arranged to operate in accordance with the method of the present invention disclosed above.
In a further embodiment of the DAC according to the present invention, the mapping means and the processing means are provided integral to the DAC, i.e. on-chip in the case of a semiconductor DAC.
The above-mentioned features and advantages of the invention are illustrated in the following description with reference to the enclosed drawings.
Brief Description of the Drawings
Figure 1 shows a simplified example of a prior art Digital to Analog Converter (DAC).
Figures 2a, 2b and 2c show examples of the most common profiles of process errors in an integrated semiconductor type DAC.
Figure 3 shows typical output amplitude and timing problems in a non-ideal DAC.
Figure 4 shows, in a schematic manner, a general overview of the method parameters and DAC designed in accordance with the present invention.
Figure 5 shows Integral Non Linearity plots vs gradient angle θ for an 8-bit core with linear errors.
Figure 6 shows Integral Non Linearity plots vs gradient angle θ for an 8-bit core with linear and parabolic errors.
Figures 7a, 7b, 7c and 7d show DC transfer function non- linearities causing signal distortion.
Figures 8a, 8b and 8c show Spurious Free Dynamic Range (SFDR) performance of a core DAC of 8 bits using different selection sequences S2,3,4 when the error is linear.
Figure 9 illustrates an error transfer mechanism according to the invention from topology to the output signal.
Figures 10, 11, 12 and 13 show various plots relating to the optimal mapping of process related random errors in accordance with the present invention.
Detailed Description of the Embodiments
Figure 1 shows a block diagram of a basic prior art Digital to Analog Converter (DAC) architecture 1. The DAC 1 comprises a decoder 2, for example a binary to thermometer decoder, and a DAC core 5. The DAC core 5 comprises a plurality of electrical source elements or unit elements 8, which are either charge, current or voltage sources. The decoder 2 comprises an input terminal 3 for providing a digital input code which has to be converted into an electrical wave form signal at an output terminal 7 of the DAC core 5. An output 4 of the decoder 2 connects to an input 6 of the DAC core 5.
In operation, the decoder 2 converts an input code w(m), wherein m is the discrete time index, into a bit sequence at the output 4 for "turning on" 9 a plurality of unit elements 8 of the DAC core 5, to provide an electrical output signal s(t) at the output terminal 7, representing the input code w(m) provided at the input terminal 3. Switching the unit elements on and off is, for convenience's sake, represented by switches 9 which can be controlled from the input 6 of the DAC core 5.
Dependent on the input code provided, a particular selection of the unit elements 8 is made, in order to represent the respective input code by a correct electrical value at the output terminal 7.
The unit elements 8 can be combined into pools or arrays or
matrices, to provide the output signal s(t) at the output terminal 7 in response to the digital input signal or word w(m).
Let us consider, for example, the converter 1 presented in Figure 1. For a resolution of N=12 bits, the DAC core 5 consists of 2^12 = 4096 unit elements 8, which are switched on/off 9 depending on the digital input values w(m). Ideally, when a digital value w(m) is converted, w(m) unit elements 8 have to be switched on at the same time instant mTs, and their electrical values (current, voltage or charge) should be added into a common output node connecting to the output terminal 7 of the DAC. Each unit element equals an LSB (Least Significant Bit) reference value. The 4096 unit elements can be grouped in a weighted or non-weighted manner.
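The ideal operation of such a core can be sketched in a few lines; the unit value and the sequential selection are illustrative.

```python
import numpy as np

N = 12
LSB = 1.0                      # unit reference value (current, voltage, charge)
units = np.full(2**N, LSB)     # 4096 ideal, identical unit elements

def dac_out(w):
    """Switch on the first w unit elements and sum them at the output node."""
    return float(units[:w].sum())

mid_scale = dac_out(2048)      # half of the 4096 elements switched on
full_scale = dac_out(4096)     # all elements switched on
```

With ideal, identical elements, the output is exactly w(m) LSB for every code; the remainder of this section describes why real elements deviate from this.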
During implementation, physical mechanisms relating to the topology of the pool, array or matrix of unit elements 8 are introduced that result in non-equal unit elements. Topological errors include random and systematic device process variations (MOSFETs, resistors), variations in the parameters of interest (MOS V_T, β, etc.), non-equal wire paths, etc. The profiles of these errors can be spatially random or systematic and, through the circuit device operation mechanisms, contribute to random and systematic amplitude and time errors, respectively. Examples of the most common profiles of errors are shown in Figures 2a, 2b and 2c.
It will be appreciated that the unit elements 8 neither have exactly the same amplitude values, nor switch on/off at the same time, as is desired for providing the analog output signal. These errors cause noise and distortion in the output signal through the time-domain operating nature of the DAC.
The performance of a DAC is expressed by several figures of merit. The static performance of the converter is determined by the Differential Non-Linearity (DNL) and the Integral Non-Linearity (INL), while the dynamic performance figure that reflects spectral purity is characterized by the Spurious Free Dynamic Range (SFDR). For the definition of the above figures of performance the reader is referred to R.J. van de Plassche, 'Integrated Analog-to-Digital and Digital-to-Analog Converters', Kluwer Academic Press, 1994. A typical output signal prior to low pass filtering but subjected to amplitude and timing errors is shown in Figure 3.
In Figure 3, the solid line represents the output signal s (t) of an ideal DAC, while the broken line represents a realistic or non-ideal DAC output signal. The differences between the ideal and non- ideal case are caused by the above-identified process and timing errors.
In Figures 2a, 2b and 2c realistic amplitude error profiles are shown, caused by process errors in the unit elements 8 (current, voltage or charge) of the DAC core 5 in Figure 1. The amplitude errors are a function of the position of each unit element 8 in the layout of the semiconductor chip.
Figure 2a shows amplitude errors in %LSB due to random process errors. Figure 2b shows amplitude errors in %LSB due to systematic linear errors and Figure 2c shows amplitude errors in %LSB due to systematic parabolic process errors. On the k, l axes the numbers of unit elements 8 in the DAC core 5 are denoted. There are two main types of errors, random and systematic. The random errors are determined by the inherent properties of the technology used and by the size of each unit element.
Of course, random errors are never a-priori known so their overall effect on INL is determined mainly using Monte Carlo simulations. However they can also be determined a-posteriori from the operation of the DAC 1.
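A minimal Monte Carlo sketch of this kind of analysis, assuming Gaussian unit-element mismatch and a plain sequential selection; the 2% mismatch figure is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials, sigma = 8, 200, 0.02   # 8-bit core, 2% relative unit mismatch (assumed)

worst_inl = []
for _ in range(trials):
    units = 1.0 + sigma * rng.standard_normal(2**N)      # one random "chip"
    levels = np.concatenate(([0.0], np.cumsum(units)))   # sequential selection
    lsb = levels[-1] / (len(levels) - 1)
    inl = levels / lsb - np.arange(len(levels))          # endpoint-fit INL
    worst_inl.append(float(np.abs(inl).max()))
# The spread of worst_inl over the trials is what a Monte Carlo analysis
# of the a-priori-unknown random errors reports.
```

Each trial corresponds to one fabricated core; the distribution of the worst-case INL over many trials characterizes the yield impact of the random errors.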
Usually, when one increases the area of the unit elements, the effects of random errors are
reduced. In a high resolution, high accuracy DAC this means that the area of the DAC is enlarged. Consequently, systematic errors also become very important. To achieve the required linearity in the conversion of the digital input code to an electrical output value, all random and systematic errors have to be corrected to the greatest extent possible.
Assume that no post laser trimming or continuous-time calibration methods are used to correct the amplitude errors of the elements. Given a large resolution, which is demanded nowadays in communications applications, for example, the effects of systematic and random errors on the unit elements have to be reduced. Gradient errors (Figure 2b) are the dominant type of amplitude errors when the area of the elements is increased to reduce the random errors. Additionally, before each particular chip is implemented, there is no information as to the angle or the gradient of the error for that particular chip. One only knows that they exist. Similarly, one must be able to reduce the effect of different switching and timing errors of the unit elements.
Several proposals exist in the literature that consider ways to select the unit elements from a pool or array of unit elements 8, so as to limit the accumulation of the errors and to provide linearity. These proposals are basically divided into two major groups, i.e. Dynamic Element Matching techniques and Switching Sequences techniques, as discussed in the introductory part above. Both methods are based on two principles, i.e. spatial and temporal averaging, in short spatiotemporal averaging. Existing DEM methods are focussed purely on temporal averaging. Additional constraints that include other physical phenomena such as the timing problems (explained later on) are completely ignored. An example of an additional constraint is to require that each successively selected unit element lies in a given neighbourhood of the previously selected one. These constraints refer to ease and symmetry of wiring between the digital decoder output and the unit elements.
It is common practice to avoid very large coarse or fine segments during the design of high resolution DAC's for reasons that have to do with complexity of digital decoding logic and area. The segmentation in two coarse segments with thermometer decoding and a fine segment with binary decoding is the most common solution met in practice.
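A sketch of such a segmented decode, assuming an 8-bit input split into a 4-bit thermometer-decoded coarse segment and a 4-bit binary-decoded fine segment:

```python
# Hypothetical 8-bit segmentation: upper bits drive thermometer-decoded
# coarse elements, lower bits drive binary-weighted fine elements.
def segment(code, fine_bits=4):
    coarse = code >> fine_bits               # drives 2**4 - 1 thermometer elements
    fine = code & ((1 << fine_bits) - 1)     # drives binary-weighted elements
    return coarse, fine

coarse, fine = segment(0xB7)    # splits into (0xB, 0x7)
```

The split keeps the thermometer decoder (and its routing) small while the binary fine segment keeps the element count manageable, which is the complexity/area trade-off mentioned above.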
Because the known techniques suppress the accumulation of errors insufficiently, further layout techniques need to be used, which cause a lot of secondary effects (extra capacitive and resistive load, timing delays, etc.). In order to avoid the above shortcomings of the prior art, in accordance with the present invention, error optimization in a DAC is achieved by selecting unit elements to represent a digital input code using a map or sequence S obtained from processing not only error profile information of the unit elements and switching errors, but also input signal type information and output signal type information, i.e. signal properties, such that certain criteria are met, expressed as cost functions and/or constraints put on the DAC.
Figure 4 shows in a general block diagram the method of and a DAC 10 designed for error optimization in accordance with the present invention.
Referring to the DAC 1 shown in Figure 1, in the DAC 10 of the present invention shown in Figure 4, between the decoder 2 and the
DAC core 5 a mapping block 11 is interposed, providing mapping between the decoder output 4 and the DAC core input 6, in order to select the unit elements 8 of the DAC core 5 to convert a digital code at the input terminal 3 of the decoder 2 into an electrical equivalent value at the output terminal 7 of the DAC core 5, with an optimized error performance.
A processing block 12 performs a suitable processing algorithm to provide the mapping block 11 with a sequence or map S for selecting the unit elements 8 for a given input code.
In accordance with the present invention, the processing
block 12 processes the sequence or map S from a group of input parameters comprising unit element error profile information, input signal type information and output signal type information, wherein the unit element error profile information may be a-priori known or a-posteriori determined from operation of the DAC 10.
The unit element error profile information, input signal type information and output signal type information are processed by a suitable stochastic processing algorithm implemented in the processing block 12, in order to match or satisfy criteria selected from a group comprising constraints put on the DAC and cost functions.
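As an illustration of what a stochastic search over selection sequences can achieve, the toy sketch below applies random pairwise swaps, kept only when they do not worsen a worst-case-INL cost, to an assumed linear gradient error profile; it is a stand-in for, not a disclosure of, the actual processing algorithm.

```python
import random

n = 16
# Assumed 4x4 linear (gradient) error profile, then centred to zero mean.
err = [0.004 * (i % 4) + 0.002 * (i // 4) for i in range(n)]
mean = sum(err) / n
err = [e - mean for e in err]

def worst_inl(seq):
    """Worst-case accumulated error over all prefixes of the sequence."""
    total, worst = 0.0, 0.0
    for i in seq:
        total += err[i]
        worst = max(worst, abs(total))
    return worst

rng = random.Random(0)
seq = list(range(n))
best = worst_inl(seq)
for _ in range(5000):
    a, b = rng.randrange(n), rng.randrange(n)
    seq[a], seq[b] = seq[b], seq[a]
    cost = worst_inl(seq)
    if cost <= best:
        best = cost                        # keep the swap (plateaus allowed)
    else:
        seq[a], seq[b] = seq[b], seq[a]    # revert the swap

sequential = worst_inl(list(range(n)))
# best never exceeds the sequential cost and typically ends up much lower:
# the optimized sequence interleaves elements so gradient errors cancel.
```

The same skeleton accepts additional terms in the cost (timing errors, neighbourhood constraints), which is the flexibility the text attributes to the invention.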
The several variables and parameters inputted to and outputted by the processing block 12 are indicated in Figure 4 by arrows, accompanied by a respective description of the parameter, variable or information concerned. Note that the calculation or processing of the map or sequence S may be provided in an open loop arrangement.
In particular, the unit element error profile information comprises errors relating to electrical accuracy of a unit element 8 and errors relating to switching performance 9 of a unit element 8.
To demonstrate the performance of the algorithms according to the invention, a cost function has been set that minimizes the INL at two perpendicular angles of a given linear error profile. A set of sequences is extracted for the toughest of the problems, namely the 8-bit core DAC (16 x 16), and the results of the calculated INL are plotted. In Figure 5 one of the sequences according to the invention is compared with the various published element selection algorithms. The error profile is a plane with a fixed gradient and varied angle (e.g. Figure 2b). In Figure 6 the same sequences are compared when the error consists of linear and parabolic components.
The sequences for an 8-bit DAC core are named as follows:
S1: sequential. The sources are selected sequentially.
S2: hierarchical symmetrical; see Y. Nakamura, IEEE Journal of Solid State Circuits, Vol. 26, April 1991, p. 637-642.
S3: Q2 random walk; see J. Van der Plas, IEEE Journal of Solid State Circuits, Vol. 34, December 1999, p. 1708-1718.
S4: An example of the optimized algorithm according to the invention.
For the 6-bit core DAC:
S2: hierarchical symmetrical.
S6: European Patent 0,929,158.
S7: An example of the optimized algorithm according to the invention.
As mentioned, several further constraints, in addition to the INL cost function, can be imposed on the processing algorithm according to the present invention to retrieve sequences. This is not feasible with any of the known techniques for error optimisation.
Static linearity problems, such as a non-linear DC transfer curve, can result in reduced dynamic performance. The dynamic performance reduction is caused by signal distortion that makes unwanted spectral tones appear in the spectrum. Figures 7a, 7b, 7c and 7d exemplify the mechanism of the spurious tone appearance in the output of the DAC. An ideal DAC would have a linear DC transfer curve because each output value would be a unit step away from the previous and the next code value. In a non-ideal DAC the topological matching errors cause a non-linear DC transfer curve.
When a periodic input signal is converted, additional tones add to the fundamental. Not shown in Figures 7b and 7c is the sampling effect of the DAC output, which folds back these harmonics in the baseband. Since an optimum sequence minimizes the accumulated errors per code, it makes also the DC transfer curve as linear as possible. Note
that using a classical selection scheme one might obtain the static linearity tolerance level of INL = 0.5 LSB (no missing codes), but that does not mean that we have a linear transfer curve. Hence, optimizing the selection logic results in a net performance gain in the spectral domain. In areas like Direct Digital Synthesis or Multi-Carrier Converters, where high spectral accuracy (SFDR) is demanded at very high sample rates, limitations due to DAC linearity are the dominant limiting factor for the performance of the whole system.
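The appearance of such spurious tones, and the SFDR they set, can be reproduced with a small numerical sketch; the cubic non-linearity stands in for a distorted DC transfer curve and is purely illustrative.

```python
import numpy as np

M, k = 1024, 13                      # coherent sampling: 13 cycles in 1024 samples
t = np.arange(M)
x = np.sin(2 * np.pi * k * t / M)    # ideal full-scale sine
y = x + 0.001 * x**3                 # output after a mildly non-linear transfer
spec = np.abs(np.fft.rfft(y)) / (M / 2)
fund = spec[k]                       # fundamental tone
spurs = np.delete(spec[1:], k - 1)   # every bin except DC and the fundamental
sfdr_db = 20 * np.log10(fund / spurs.max())
# The cubic term creates a third harmonic at bin 3*k = 39, which is the
# largest spur and therefore sets the SFDR (about 72 dB here).
```

In a sampled DAC output these harmonics additionally fold back into the baseband, as noted for Figures 7b and 7c, which is why even high-order harmonics matter.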
Through the circuit mechanisms of the MOS devices, the process parameter deviations can manifest as timing problems. By that it is meant that two identically designed switching elements 9, such as MOS switches, that drive the unit elements have a relative delay in switching on and off. Special care has to be taken at the layout level to minimize the impact of non-equal interconnection wires, to clock all the unit elements correctly and simultaneously, etc. However, our main interest here is the time asymmetry of the switching devices as a result of the mismatches and process gradients which are encountered during implementation of an IC, in the same way the amplitude inaccuracies appear. From this point of view, none of the prior art pays any attention to, nor provides any analysis of or precautions for, the systematic way the process mismatches become time mismatches and then a distortion in the converted output signal.
The topological dependence of these errors and the fact that they can be processed by similar selection algorithms as the amplitude errors - because they both belong to the same error modulation mechanism - is incorporated in the present invention.
Shedding light on this with analysis and then designing a scalable algorithm that optimizes the way the timing errors are mapped to the output signal is part of the novel approach according to the present invention.
To exemplify the different effects of the selection
algorithm on a given set of timing errors, reference is made to Figures 8a, 8b and 8c.
Figures 8a, 8b and 8c show Spurious Free Dynamic Range (SFDR) performance of a core DAC of 8 bits using different selection sequences S2,3,4 when the timing error is linear. Plots for three different angles of the error are shown. The solid line stands for one of the optimized sequences according to the present invention.
The initial error is a plane with the angle varying as previously in the amplitude case. The error profile defines how much delay is added to each source when it is turned on/off with respect to its position. Using different selection algorithms for three different angles of the assumed linear error, the results shown are obtained.
The input signal is a full scale sinusoid while the simulations are performed using a modular high level DAC model that has been developed in Matlab and C code. The output describes the maximum distortion component induced by the time asymmetry of the switching devices.
Similar plots can be obtained for DAC cores with different resolutions (N = 5, 6, 7, ...) and error profiles. In general, the cost function that has been fed to the algorithm in order to give the optimum element selection sequence is the same as that of the amplitude problem (more about that is discussed later). It must be noted that linear errors were assumed, also for the time error profile, and this corresponds to some extent to what happens in reality. However, as mentioned already, the processing algorithm according to the present invention can easily be modified to incorporate constraints that are specific to the timing problems. Then the algorithm will find a map or sequence S that behaves optimally against the given set of time error constraints, amplitude error constraints and others (e.g. distance of successive sources or elements to reduce complexity, relative importance of the time errors against the amplitude errors).
The exact form of the cost functions is left as a degree of freedom for the designer, and this strong point allows generalization. The comparison in Figures 8a, 8b and 8c has been made under the above considerations.
Consider the discrete-to-continuous-time signal conversion (Digital to Analog) as the two-dimensional mapping of the discrete pair (w(m), m) (amplitude value and time moment) to the continuous pair (w(m)·LSB, m·Ts). The discrete time is denoted by m; w(m) is the digital input value; LSB refers to the reference physical value (here a current, but it may also be a voltage or an electric charge), and Ts is the sample repetition period in seconds.
The physical mechanisms at work during chip fabrication lead to an inaccurate two-dimensional representation/mapping of the discrete input signal to the continuous output signal. The goal of the approach according to the invention is to identify the optimum way to transfer the errors from the topology to the output signal, given specific profiles of errors in time and amplitude. Optimum means that the impact of the errors is limited to DC shifts or linear filtering, i.e. non-linearities are removed, and that additional constraints relating to placement and routing efficiency can be incorporated.
This fundamental way of viewing the conversion allows one to identify that the distortion in the converted signal caused by amplitude and time errors is, in both cases, governed by a basic modulation mechanism 15. This modulation is identified as the error transfer mechanism from the topology to the signal's spectrum. The modulation mechanism is presented in Figure 9 and is explained as follows.
An input signal value w(m) is reconstructed with the sum of w(m) selected unit current elements. This is mostly encountered in wide-band DACs. The unit current elements are selected in a specific order defined by a switching sequence scheme or map S. Thus, a one-to-one mapping S is performed from the input signal values to the output signal values. The same one-to-one relation appears between the input signal and the amplitude and time errors. This error mapping is defined as:
a_i --S--> a_j,   μ_i --S--> μ_j,

wherein a defines the amplitude error, μ defines the timing error, and S is the sequence or map according to which unit elements are selected. Signal errors due to individual amplitude errors are given by:

error(w) = Σ_{j=1}^{w} a_j

If S is such that error(w) < bound, the linearity of the conversion is increased. The signal error per code is given by the "SUM" of the individual amplitude errors.
Signal errors due to individual timing errors are calculated over the difference between input codes:

error(Δw) = error(w0 -> w1) = (1 / (w1 - w0)) · Σ_{j=min(w0,w1)}^{max(w0,w1)} μ_j · Δw · LSB

wherein LSB is the unit value of a unit current element and Δw = w1 - w0 is the code difference. This means that the signal error added when a code difference occurs is governed by the "AVERAGE" of the timing errors.
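The two rules above (the "SUM" rule for amplitude errors and the "AVERAGE" rule for timing errors) can be sketched numerically as follows. This is an illustrative model only; the error arrays a and mu, the normalization LSB = 1 and the trivial identity sequence are assumptions, not values from the invention.

```python
import numpy as np

LSB = 1.0  # unit element value, assumed normalized for illustration

def amplitude_error(w, a, seq):
    """error(w): sum of the amplitude errors a_j of the first w unit
    elements selected under sequence seq (the 'SUM' rule)."""
    return sum(a[seq[j]] for j in range(w))

def timing_error(w0, w1, mu, seq):
    """error(w0 -> w1): average of the timing errors mu_j of the
    elements switched between codes w0 and w1 (the 'AVERAGE' rule),
    scaled by the code difference Δw and LSB."""
    lo, hi = min(w0, w1), max(w0, w1)
    dw = w1 - w0
    avg = sum(mu[seq[j]] for j in range(lo, hi)) / (hi - lo)
    return avg * dw * LSB

# Toy example: 8 unit elements with small random mismatch errors.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.01, 8)     # amplitude error per element
mu = rng.normal(0.0, 0.01, 8)    # timing error per element
seq = list(range(8))             # a trivial selection sequence S
print(amplitude_error(5, a, seq), timing_error(2, 6, mu, seq))
```

Choosing a different sequence seq changes which element errors accumulate for each code, which is exactly the degree of freedom the map S exploits.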
Because the input code w varies, the signal error can be regarded as a modulation 15, or mixing process, between error(w) or error(Δw) and the input code w. A suitable processing algorithm in accordance with the present invention may read:
Find a map or sequence S that rearranges a_i -> a_j and μ_i -> μ_j such that error(w) and error(Δw) are bounded and ideally equal zero.

Another example of the inventive approach relates to the optimal mapping of process-related random errors.
It must be noted that this is the first time it is reported in the known literature that a proper static map, i.e. a map that does not change in time, can correct the effects of spatially random process errors.
It is known that, to a great extent, the random errors that occur during chip processing and fabrication can be approximated by stochastic White Gaussian processes. This a-priori knowledge that the errors are Gaussian is exploited. An auxiliary block is used which measures the errors of the elements, be they amplitude or time related. Then the general processing unit of the DAC performs "Order Statistics Filtering" on the errors or, more generally, applies a non-linear filtering to the errors.
An example of an order statistics filter operation used here is to sort the elements in accordance with their errors, in ascending or descending order. It should be noted that the term order statistics filtering includes various different types of filters. For more information, see I. Pitas and A. Venetsanopoulos, Nonlinear Digital Filters: Principles and Applications, Kluwer Academic Publishers. In Figure 10, a set of random errors that occur in a chip sample is shown as a function of the element number. The DAC is taken to be N = 8 bit; hence, 255 elements are used.
In Figure 11 the result after sorting the elements is shown; the numbering of the elements is rearranged so that element number 1 is the one with the absolute minimum error, and element 255 is the element with the absolute maximum error.
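The sorting step described above can be sketched as follows, assuming the element errors have already been measured by the auxiliary block. The function name and the use of a plain ascending sort are illustrative choices; other order statistics filters would fit the same slot.

```python
import numpy as np

def sort_elements_by_error(errors):
    """Order-statistics step: renumber the unit elements so that
    element 1 carries the smallest measured error and the last
    element the largest. Returns (new_order, sorted_errors)."""
    order = np.argsort(errors)      # element indices, ascending error
    return order, errors[order]

# 255 unit elements of an 8-bit DAC with Gaussian random mismatch.
rng = np.random.default_rng(1)
errors = rng.normal(0.0, 1.0, 255)
order, sorted_err = sort_elements_by_error(errors)
# After sorting, the profile is monotone and close to linear over most
# of the 1..255 range, so a gradient-error map can be reused for it.
print(sorted_err[0] <= sorted_err[-1])
```

The exact error values differ from chip to chip, but because the process is Gaussian the sorted profile always takes the same characteristic shape, which is what makes a single static map effective.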
It is important to realize that the exact values of the errors in each sample chip are not important. In every case, because the errors resemble a Gaussian process, their distribution after sorting will resemble the shape of that in Figure 11. Consequently, if a map is found by the processing unit that takes this profile of errors into account, then the random errors are corrected. The map used may as well be that for gradient errors, because the error profile closely resembles a linear error for most of the range between 1 and 255. The importance of this approach becomes more profound if one considers the effects of random and non-random (systematic) errors together.
In Figure 12 we show a combination of both random and systematic (gradient) errors. The gradient can have any angle. In Figure 13 the errors are sorted according to the above processing operation. Then a map is applied, which is optimized to this profile. It is possible to apply the same map that was used previously to eliminate the effect of gradient errors. In this way random and systematic errors of any angle can be efficiently corrected at the same time.
If w(m) is a periodic input signal, then the output contains an error that is periodic with the input; hence, harmonic distortion arises naturally. For a given set of error profiles, the key factor that determines the level of the resulting signal distortion is the operation of the mapping algorithm. The identified importance of S drives us to seek optimal sequences S based on the following lines of thought:
Line 1: An ideal element selection algorithm acts as a linearization buffer between the input and the output. It selects the unit elements based on their respective time and amplitude errors in such a way that the effect the errors cause in the converted signal is limited to an offset and linear filtering. Specific cost functions define "offset" and "linear", while others define the placement and routing efficiency.
It can be proven that if the algorithm is designed in such a way that, for every input signal, the accumulation functions of the errors are minimal, then the non-linear effects (distortion) are also limited. As explained, the minimal accumulation of amplitude errors is well defined as Integral Non-Linearity. However, this property of accumulation is fundamental and also governs the time problem. In this way the distortion of the converted signal is minimized.

Line 2: Given a set of optimal selection sequences Sk, wherein k is the number of sequences and all are extracted for the same DAC core, the correlation between the input values and the added time and amplitude errors can be removed by properly operating between these optimal sequences. Properly means randomly or by a given function, for example.
Using any of the obtained k sequences, static performance is always guaranteed, and the dynamic effects of the DC transfer errors and the timing errors are kept to a minimum. By cycling through these sequences, or by using all of them in a given fashion, the correlation between the input signal and the error is removed and the remaining distortion is transformed into noise.
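A minimal sketch of such multiplexing between k precomputed optimal sequences is given below. The cyclic and random modes, and all names, are illustrative assumptions rather than the invention's actual implementation.

```python
import random

def make_multiplexer(sequences, mode="cyclic", seed=0):
    """Return a per-sample picker over k optimal selection sequences.
    Any single sequence already guarantees static linearity; alternating
    between them decorrelates the residual error from the input,
    turning distortion into noise."""
    rng = random.Random(seed)
    state = {"i": 0}

    def next_sequence():
        if mode == "cyclic":
            s = sequences[state["i"] % len(sequences)]
            state["i"] += 1
        else:  # random multiplexing
            s = rng.choice(sequences)
        return s

    return next_sequence

# Toy example with k = 3 sequences over 4 unit elements.
seqs = [[0, 1, 2, 3], [2, 3, 0, 1], [1, 0, 3, 2]]
pick = make_multiplexer(seqs, mode="cyclic")
print([pick() for _ in range(4)])  # cycles back to the first sequence
```

Any deterministic "given function" over the k sequences can replace the two modes shown; the essential point is only that the active sequence is not a fixed function of the input code.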
Summarizing, the novel and inventive points of the present invention that are claimed are:
The concept of the error path from topology to the spectrum and the key role of the map S as a linearization factor, for both amplitude and time induced errors.
The implementation of a processing algorithm that provides optimum sequences or mappings S given any size of a DAC core and any type of error profile and information as to input and output signal properties. These sequences satisfy a set of constraints that describe the desired linear behaviour of the system and also the placement and routing efficiency.
The multiplexing of a set of optimal sequences to decorrelate the remaining dependency between the input signal and the error.
Next, the development of a unique processing engine that performs this operation according to the invention is disclosed. The efficiency of the algorithm against given shapes of errors, its superiority in performance over existing algorithms, the advanced concepts it utilizes and its adaptability to any resolution size (complexity level), unlike any algorithm reported so far, are exemplified with simulations.
The main objective of the algorithm is to find the sequence that selects the unit elements to reconstruct a signal in such a way that it follows the guidelines presented in the previous section. This implies that the accumulation of the time and amplitude errors should be minimal and close to the theoretical lower bounds. Since the same physical errors (fabrication inaccuracies in threshold voltages, transistor dimensions, etc.) cause both amplitude and time errors, the same algorithm can process both types of errors.
The dominant error profiles are linear or parabolic. The problem of finding a minimal accumulation of given errors is NP-hard; therefore, no polynomial-time algorithm is known that finds an optimal solution, and even a near-optimal solution is generally not guaranteed within polynomial time. As a consequence, one has to use heuristics to find a near-optimal solution in reasonable time. A stochastic search algorithm is used to perform this task successfully. In principle, an exhaustive enumeration algorithm could be applied to small problem instances to find an optimal solution; in practice, however, this is not feasible and poses a major bottleneck.
It is observed that minimizing the accumulation of errors for a single fixed angle of a gradient or parabolic error is not sufficient. For example, in practice the gradient angle of the process parameter errors is not known; hence, an optimal solution for a given angle might still produce bad results for another angle. A dual minimization approach has therefore been used, in which the algorithm has to find the sequence of elements that minimizes the accumulated errors for two perpendicular angles simultaneously. This approach yields a solution which is comparable in quality to the single-angle minimization approach, but is almost angle-independent.
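The dual-angle minimization can be sketched as follows, with a simple random-swap descent standing in for the stochastic search heuristic (the actual heuristic is a design choice of the invention). The 7x9 element grid, the linear gradient profiles and all function names are illustrative assumptions.

```python
import numpy as np

def accumulation_cost(seq, err_a, err_b):
    """Worst-case running-sum (INL-like) accumulation of two
    perpendicular gradient-error profiles under sequence seq."""
    cost = 0.0
    for err in (err_a, err_b):
        acc = np.abs(np.cumsum(err[seq]))
        cost = max(cost, float(acc.max()))
    return cost

def stochastic_search(err_a, err_b, iters=5000, seed=2):
    """Random pairwise-swap descent over selection sequences: accept a
    swap if it does not worsen the dual-angle cost, otherwise undo it."""
    rng = np.random.default_rng(seed)
    n = len(err_a)
    seq = rng.permutation(n)
    best = accumulation_cost(seq, err_a, err_b)
    for _ in range(iters):
        i, j = rng.integers(0, n, 2)
        seq[i], seq[j] = seq[j], seq[i]
        c = accumulation_cost(seq, err_a, err_b)
        if c <= best:
            best = c
        else:
            seq[i], seq[j] = seq[j], seq[i]   # undo the worsening swap
    return seq, best

# Two perpendicular linear gradients over a flattened 7x9 element array.
rows, cols = 7, 9
n = rows * cols
x_grad = np.tile(np.linspace(-1, 1, cols), rows)     # 0-degree gradient
y_grad = np.repeat(np.linspace(-1, 1, rows), cols)   # 90-degree gradient
seq, cost = stochastic_search(x_grad, y_grad)
print(round(cost, 3))
```

Because the descent only ever accepts non-worsening swaps, the returned cost is never above that of the random starting permutation; repeated runs with different seeds yield the set of near-optimal sequences mentioned below.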
An important side-effect of a stochastic approach is that a set of near-optimal solutions can be obtained easily. This enables the exploitation of multiple solutions, which has a provably good effect on the obtained results when the solutions are used in an orthogonal sense. Besides minimizing the accumulated errors, the general stochastic search approach allows more constraints to be incorporated. These constraints could be based on other process technology properties, system properties or physical problems that relate to and define the way the unit elements (sources) should be combined and selected.