GB2425027A - Viterbi detector using Markov noise model
Classifications
- G11B 20/18: Error detection or correction; Testing, e.g. of drop-outs
- H03M 13/395: Sequence estimation using a collapsed trellis, e.g. M-step algorithm, radix-n architectures with n>2
- H03M 13/41: Sequence estimation using the Viterbi algorithm or Viterbi processors
- H03M 13/6337: Error control coding in combination with channel estimation
- G11B 20/1803: Error detection or correction by redundancy in data representation
Abstract
A method and apparatus for state sequence likelihood detection, for receiving a stream of data values from a data medium, wherein the received data values may include added noise that is dependent on previous noise and dependent on data on the data medium, and outputting information specifying a sequence of states corresponding to the stream of received data values, said sequence of states corresponding to possible data values on the medium, involving obtaining or calculating noise statistics for possible sequences of data values on the medium using a noise model; calculating weighting values (branch metrics) indicating likelihoods that a data value received at a particular time corresponds to a particular state transition, using knowledge of a previous most likely sequence of data values on the medium over previous time steps, the received data value and said calculated noise statistics; extending said most likely sequence of data values using said calculated weighting values; and outputting information specifying said determined sequence of states. The noise statistics may be based on the Markov model, and the branch metric calculation involves a trace-back operation over a number of time intervals equal to the Markov length.
Description
Likelihood Detector for Data-Dependent Correlated Noise

The present invention relates to state sequence likelihood detectors such as Viterbi detectors, and in particular, to such detectors that are adapted to allow for data-dependent correlated noise when calculating an output.
In diverse areas such as hard disk drive technology, the theory of error-free communication, adaptive planning, and the theory of traffic flows, it is of great practical importance to solve the problem of finding a most probable sequence of ideal states corresponding to a sequence of measured data, where the measured data is noisy and the ideal states correspond to removal or reduction of this noise. In applications related to communication theory, most-probable-path-searching algorithms are used to reduce noise in signals transmitted over noisy channels (CDMA2000, Gigabit internet, etc.), to combat both inter-symbol interference and channel noise in magnetic and optical storage devices, and to maintain communications with deep space research probes (e.g. Galileo).
A sequence of state transitions can be visualised as a path on a trellis diagram. The problem of finding the most probable sequence of states reduces to the problem of finding the path of lowest weighting on a trellis, whose branches are equipped with weightings, which are real numbers called branch metrics. The problem of finding the most likely path on a trellis can be solved by techniques of likelihood detection. Optimal likelihood detection techniques are known as maximum likelihood detection, although sub-optimal techniques are also used, e.g. in situations where maximum likelihood detection would be too computationally difficult or expensive. A common technique, used in both maximum likelihood detection and in sub-optimal detection, is the Viterbi algorithm.
One example is in magnetic or optical disk technology, where recording densities have significantly increased in recent years. To achieve high throughput and low error rate requirements, modern read-channel implementations have to take into account inter-symbol interference (ISI) and noise which is dependent on previous samples of noise and on data written on the disk.
In a read channel, the noisy signal read at time i is

r_i = y_i + n_i,

where the noiseless signal read at time i is y_i = G(x_i, x_{i-1}, x_{i-2}, ..., x_{i-I}), and G is an (I+1)-dimensional vector describing inter-symbol interference. Parameter I is called the inter-symbol interference length. The noise at time i is n_i.
The Viterbi algorithm is used to infer the sequence of recorded bits on the disk from the sequence of received signals. This algorithm finds the optimal path on the trellis whose states are labelled by all possible sequences of recorded bits of given length. Each new recorded bit causes a transition on the trellis between two states s and s', at times i and i+1, connected by a branch b. The optimal path is characterized by the smallest sum of the transition weights assigned to its branches; these weights are called branch metrics.
Thus, if m_i(s) is the cumulative path metric for the surviving path leading to state s at time i, and y_i is the branch label representing the noiseless signal associated with branch b, the cumulative metric is updated as

m_{i+1}(s') = min{ m_i(s) + [r_i - y_i]^2 },

where the minimum is taken over all branches ending at state s' with starting state s.
The branches that give the minimum values are used to extend the surviving paths in the trellis. Such a detector is optimal if the noise is uncorrelated Gaussian, e.g. additive white Gaussian noise (AWGN). In the presence of other types of noise, a different branch metric can be used to improve performance. Factors such as misequalisation, timing error and jitter, inter-track interference, DC offset, non-linear bit shift, overwrite, particulate noise, transition noise and percolation can result in correlated noise effects.
To illustrate how the Viterbi algorithm is used, we now consider an example of convolution encoding and subsequent signal detection. Convolution encoding is a bit-level encoding technique where each coded bit is generated by combining the current input bit with past input bits. This can be done by temporarily storing past input bits in delay elements such as flip-flops or other memory or storage elements. Thus the information content of each input bit is spread over a number of coded bits, improving the error resistance of the data. The constraint length K of the convolution encoder is the number of bit shifts over which a single input bit can influence the encoder output. Convolution encoding also occurs in a hard disk read system. Each bit on the disk surface generates a magnetic field, which is detected by the disk read head in order to read the bit.
However, the read operation is also affected by the magnetic field produced by the neighbouring bits. Thus, any data value obtained in a read operation actually corresponds to a convolution of magnetically recorded bits on the disk, known as inter-symbol interference (ISI).
A convolution encoder may be modelled by a generator polynomial G(D), which describes the number of bits influencing the encoder output, and the magnitude of each bit's influence. The generator polynomial has the formula:

G(D) = Σ_{n=0}^{K-1} g_n D^n,

where D^n is a delay operator representing a delay of n time units, K is the encoder constraint length, and the g_n are real numbers which describe the weight with which past transitions contribute to the current reading.
A simple example of a convolution encoder has a generator polynomial G(D) = 1 - D. This encoder has a single delay element for storing a previous input, and produces an output equal to the current input minus the previous input. Thus, the encoder operates on the input data x_i to give an output of G(D)x_i = x_i - D·x_i. The delay operator acts to represent the previous (delayed) input, thus D·x_i = x_{i-1}, and G(D)x_i = x_i - x_{i-1}. The constraint length is 2 because each input bit influences the output over two bit shifts.
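As an illustration only, the following minimal sketch (not part of the patent) implements this G(D) = 1 - D encoder in software; the function name and the choice of presetting the delay element to zero are assumptions matching the trellis examples below.

```python
def encode(bits):
    """Convolution-encode a bit stream with G(D) = 1 - D, i.e. y_i = x_i - x_{i-1}.

    The delay element is preset to 0, as in the trellis walk-through below.
    """
    delayed = 0  # contents of the delay element, i.e. x_{i-1}
    out = []
    for x in bits:
        out.append(x - delayed)  # y_i = x_i - x_{i-1}
        delayed = x              # shift the current input into the delay element
    return out

# Example: the survivor sequence found in the figure 3 walk-through
print(encode([0, 0, 1, 1, 1, 1, 0, 0, 0]))  # [0, 0, 1, 0, 0, 0, -1, 0, 0]
```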
Figure 1 shows a block diagram of a prior art apparatus for transmission and reception of convolution encoded data. The apparatus includes a convolution encoder 100, with generator polynomial G(D) = 1 - D. The apparatus of figure 1 may represent a communications system, with deliberate convolution encoding of data to increase noise resistance during transmission through a channel. However, it may also represent a hard disk read process, in which the convolution encoding is not deliberate, but is a result of inter-symbol interference caused by the magnetic fields of the individual data bits on the disk.
The encoder 100 performs convolution encoding of a stream of input data. The data is then sent over a noisy channel. Noise source 106 represents the effect of noise during transmission of the data. The transmitted data is then received and decoded by a detector 108.
The encoder 100 has an input 101 for accepting a stream of time-dependent input binary data x_i, where i represents the time interval. The input data is received by the encoder at a rate of k bits/second. The input 101 of the encoder 100 is connected to an encoder delay element 102. The delay element 102 stores a single bit of data, corresponding to the input bit x_i at time i, and outputs this data bit at the following time interval i+1.
Thus, at time i, the output of the delay element is x_{i-1}. The output of the delay element 102 is connected to a multiplication unit 103 which multiplies the output value x_{i-1} by minus one, giving an output of -x_{i-1}. The encoder 100 has a sum unit 104 which is connected to both the encoder input 101 and the output of the multiplication unit 103.
The signals x_i and -x_{i-1} are summed by the sum unit 104, to give an output signal of y_i = x_i - x_{i-1}.
The encoder output signal y_i is sent from the output 105 of the encoder 100 via a channel, such as a radio link, a wire, or any other form of data transmission channel, to a detector 108. Noise source 106 represents noise n_i generated as the signal y_i passes through the channel. This may be any type of noise, for example, decorrelated noise such as white noise or Gaussian white noise. A sum unit 107, with inputs connected to the noise source 106 and the encoder output 105, represents the addition of the noise n_i to the data y_i. Thus, the signal received after the data has passed through the noisy channel is r_i = y_i + n_i. The detector 108 receives the signal r_i, and the detector then performs a detection and convolution decoding process.
The Viterbi Algorithm (VA) is a recursive procedure which can be most easily described when used with a known initial state at time t=0, and a known final state at time t=T. VA allows the most likely sequence of states at intermediate times to be found. Figure 2A shows an example of a two-state trellis diagram which can be used to visualise the VA process. The trellis diagram is a state transition diagram which graphically represents all of the possible states of the system over a sequence of time intervals. The horizontal axis of the trellis represents time, starting at time t=0 at the left hand side of the trellis, and ending with time t=T at the right hand side of the trellis. The vertical axis represents the possible states of the finite state machine. In this example, these possible states are zero and one, corresponding to the possible input states x of the convolution encoder of figure 1. Pairs of possible states at adjacent time intervals are connected by lines, with each line representing a state transition to a different state or to an identical state during one time interval. The possible sequences of states over the whole of the time period are represented by the possible paths along the trellis. At time t=0, the system is pre-set to state zero. At the next time interval, t=1, the state may remain as zero or change to one. This is represented by the darker upper and lower lines of the trellis between t=0 and t=1. A change of state from zero to one is represented by the upper line, extending diagonally upwards to connect to one, and a sequence of all zero states is represented by the lower line, extending horizontally to connect to zero. At time t=1, if the system is in state one, it may follow one of two routes, i.e. remain at one, or change to zero. Similarly, if the system is in state zero, it may follow one of a further two routes, i.e. remain at zero, or change to one. At the final time t=T of the trellis of figure 2A, the system is reset to zero, thus only state zero is a possible state.
As can be seen from figure 2A, the trellis contains one or more paths between each possible initial and final state. For instance, there are two different paths from the "zero" state at time t=0 to the "zero" state at t=2. These paths are 010 and 000, where the first bit represents the state at time t=0, the second bit represents the state at time t=1, and the third bit represents the state at time t=2. Figure 2B shows an identical trellis to figure 2A, with these two paths in bold lines. VA involves identifying paths between any possible states at time t-1, and each possible state at time t. If more than one path connects from t-1 to a particular state at time t, then VA chooses which of these paths corresponds to the most likely state sequence. Then, the least likely paths are eliminated. The remaining path is called the survivor.
The most likely path through the trellis can be determined using numbers known as branch metrics, which indicate the relative likelihoods of each of the possible state transitions in the trellis occurring. The branch metrics for each time interval may depend on the previous encoder state and the new encoder input. The numbers shown beside each line in the trellis in figures 2A and 2B are the branch metrics. In one example, relating to figure 1, branch metrics may be obtained using the expected values y_i of the received data and the actual received values r_i. The branch metrics in a path from time t_1 to time t_2 can be summed to indicate the likelihood of that path occurring. These sums of branch metrics are known as path metrics.
To find a survivor at a given state at time t+1, the path metrics of all paths leading to this state are computed by adding the appropriate branch metrics to the path metrics of the survivors at time t, and choosing the path of lowest path metric (i.e. the highest likelihood path) leading to this state. This procedure is called the add-compare-select operation, and it has to be performed for all states at time t+1. As t=T is reached, there will be only one survivor left, with probability P = 1 - C_1·exp(-C_2·T), where C_1 and C_2 are constants. Thus, the probability P approaches 1 as time T increases and C_1·exp(-C_2·T) becomes small.
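As a minimal sketch (not taken from the patent) of this add-compare-select recursion, the following code runs the forward pass and trace-back for the two-state G(D) = 1 - D trellis, using the squared-error branch metric [r_i - y_i]^2 given above; the dictionary-based state and decision layout is an assumption made for clarity.

```python
import math

def viterbi(received):
    metrics = {0: 0.0, 1: math.inf}   # known start state 0
    decisions = []                    # chosen predecessor per state, for trace-back
    for r in received:
        new_metrics, step = {}, {}
        for s_next in (0, 1):
            # add: extend each survivor; compare and select the minimum path metric
            cand = {s: metrics[s] + (r - (s_next - s)) ** 2 for s in (0, 1)}
            best = min(cand, key=cand.get)
            new_metrics[s_next] = cand[best]
            step[s_next] = best
        decisions.append(step)
        metrics = new_metrics
    path, state = [0], 0              # trace back from the known final state 0
    for step in reversed(decisions):
        state = step[state]
        path.append(state)
    return path[::-1]

# A noisy observation of the ideal branch-label sequence [0, 1, 0, -1]
print(viterbi([0.1, 0.9, -0.2, -1.1]))  # -> [0, 0, 1, 1, 0]
```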
VA is capable of removing de-correlated noise, such as Gaussian white noise, from received data. VA may also be used to reduce the amount of coloured noise, if the correlation length of the noise is small enough.
Figure 3 illustrates the use of VA by the detector 108 of figure 1, to detect the received data r_i and output a corresponding sequence of values indicating the estimated encoder input values of highest likelihood. A series of eight trellis diagrams are shown, representing eight steps of the VA decoding process. A ninth trellis diagram shows a trace-back of the optimal path through the trellis. Again, the numbers on the trellis diagrams represent branch metrics, indicating the likelihood that received data corresponds to particular state transitions. Each trellis diagram is similar to that described with reference to figures 2A and 2B, i.e. it has two possible states, which are one and zero. These also represent the possible values of x_{i-1}, which are sequentially stored in the delay element 102.
The trellis extends from a known state of zero at time t=0 to a known state of zero at time t=T. A path through the trellis represents a sequence of data which is input to the encoder. Any unique data sequence input to the encoder has a unique path through the trellis. The initial state of each trellis is set to zero, by pre-setting the delay element to zero, i.e. setting the first value of x to zero. The information which is to be convolution encoded and transmitted then begins with the second value of x. At the end of the information, an extra zero character is added to allow the last character of the information to be fully convolution encoded. It is not essential that this initial and final state should be zero, but in this example, their values should be known by the detector.
The states of the trellis, with a value of zero or one, represent possible values of x_i, where x_i is a possible value of the convolution encoder input, which is stored in the delay element 102. The possible values of the convolution encoder output, y_i = x_i - x_{i-1}, are thus represented by the slope of the lines connecting two states of the trellis at adjacent time intervals i-1 and i. The values of y_i are known as "branch labels", and they represent ideal values of the received data, without any added noise n_i. Lines with a zero slope (such as the line between state zero at t=0 and state zero at t=1) correspond to y_i = 0. Lines with a left-to-right upwards slope (such as the line between state zero at t=0 and state one at t=1) correspond to y_i = 1. Lines with a left-to-right downwards slope (such as the line between state one at t=1 and state zero at t=2) correspond to y_i = -1.

When the detector receives the transmitted signal, this signal r_i may include noise n_i.
Thus, the problem to be solved by the detector is to determine the most likely path through the trellis (i.e. the most likely sequence of characters input to the encoder), based on the noisy received signal. The branch metrics are assigned to the trellis to indicate the likelihood of each state corresponding to the received signal at that time.
For additive white Gaussian noise (AWGN), the branch metrics can be calculated as (r_i - y_i)^2, i.e. the square of the difference between the received value r_i and the expected value y_i at that point in the trellis. The most likely path is the path with the lowest path metric.
When the formula (r_i - y_i)^2 is used to calculate branch metrics for the two-state trellis of figure 3, it is common to get many different paths having equal path metrics. It may not always be possible, therefore, to choose a single path of greatest likelihood, because one of two equally likely paths must be chosen. Therefore, for the purposes of illustrating the technique of finding a unique path through the trellis using VA, the values shown in figure 3 as branch metrics have not been calculated from a sample set of received data r_i; instead, small integers have been chosen for each branch metric to ensure different weights for each path.
In practice, although the presence of multiple paths of equal likelihood degrades the VA performance, it is often possible to pre-process the data to avoid getting large numbers of equally likely paths.
The first trellis diagram, at the top of figure 3, corresponds to step 1 of the detection and/or decoding process. Step 1 concerns the time interval between t=0 and t=1. The state of the system at t=0 is zero, because the delay element was preset to zero before data transmission began. Two possible paths through the trellis during the first time interval are identified as bold lines on the trellis. These correspond to data sequences of 00 and 01 respectively, where the first bit represents the state at time t=0 and the second bit represents the state at time t=1. The 00 path is the lower of the two paths in the trellis, and the 01 path is the upper of the two paths in the trellis. The 00 path has a path metric of 0, but the 01 path has a path metric of 2. As only a single path is formed between the initial state at time t=0 and the next state at t=1, no reduction of the trellis is performed at step 1.
The second trellis corresponds to step 2 of the decoding process. The part of the trellis between t=0 and t=2 is now considered. A total of four paths are now possible, namely 000, 001, 010 and 011, where the first two bits represent the possible paths in step 1, and the third bit represents the state at time t=2. The path metric of each path may be calculated by adding all the branch metrics on the path. Thus, the path metric of 000 is 0+2=2, of 001 is 0+0=0, of 010 is 2+1=3, and of 011 is 2+1=3. The paths 000 and 010, with path metrics of 2 and 3 respectively, both lead to a final state of 0 at time t=2.

Therefore, the 010 path can be eliminated, as it has the higher path metric, and the 000 path is the survivor. Similarly, the paths 001 and 011, with path metrics of 0 and 3 respectively, both lead to a final state of 1 at time t=2. Thus, the 011 path can be discarded, and the 001 path is the survivor. The two survivor paths, 001 and 000, are shown in bold on the trellis diagram.
In step 3 of the process, the part of the trellis up to t=3 is considered. The four new possible paths are 0010, 0011, 0000 and 0001, with path metrics of 0, 0, 3 and 4 respectively. The path 0000, with path metric 3, and the path 0001, with path metric 4, can both be eliminated, as these have the highest path metrics for final states 0 and 1 respectively. Thus, the survivors are 0010 and 0011, each with a path metric of 0.
In step 4 of the process, the part of the trellis up to t=4 is considered. The four new possible paths are 00100, 00101, 00110 and 00111, with path metrics of 1, 2, 2 and 0 respectively. The paths 00101 and 00110 can be eliminated, as these have the highest path metrics for final states 1 and 0 respectively. Thus, the survivors are 00100 and 00111, with path metrics of 1 and 0 respectively.
In step 5 of the process, the part of the trellis up to t=5 is considered. The four new possible paths are 001000, 001001, 001110 and 001111, with path metrics of 3, 3, 1 and 0 respectively. The paths 001000 and 001001 can be eliminated, as these have the highest path metrics for final states 0 and 1 respectively. Thus, the survivors are 001110 and 001111, with path metrics of 1 and 0 respectively.
In step 6 of the process, the part of the trellis up to t=6 is considered. The four new possible paths are 0011100, 0011101, 0011110 and 0011111, with path metrics of 3, 2, 2 and 1 respectively. The paths 0011100 and 0011101 can be eliminated, as these have the highest path metrics for final states 0 and 1 respectively. Thus, the survivors are 0011110 and 0011111, with path metrics of 2 and 1 respectively.
In step 7 of the process, the part of the trellis up to t=7 is considered. The four new possible paths are 00111100, 00111101, 00111110 and 00111111, with path metrics of 2, 4, 3 and 3 respectively. The paths 00111110 and 00111101 can be eliminated, as these have the highest path metrics for final states 0 and 1 respectively. Thus, the survivors are 00111100 and 00111111, with path metrics of 2 and 3 respectively.
In step 8 of the process, the part of the trellis up to t=8 is considered. At t=8, the state is set to zero, since a reset signal will be sent at the end of each transmission. Thus, there are only two paths to consider instead of four. The two paths are 001111000 and 001111110, with path metrics of 2 and 4 respectively. As both paths have the same final state, the path 001111110, which has the higher path metric, can be eliminated. Thus, the only survivor is 001111000, with a path metric of 2.
The ninth trellis shows trace-back of the path with the lowest overall path metric, where only the final survivor path is shown in bold, and dead-end paths are no longer shown in bold.
In the absence of any additional noise n1, the received data input to the detector or decoder is an "ideal input". For a trellis of finite length, an ideal input is a sequence of received data with a corresponding path in the trellis which has a path metric of zero. In other words, for an ideal input, there is a corresponding path which has a sequence of branch labels which is equal to the sequence of received data.
High speed implementations of maximum likelihood detectors (MLDs) rely on a simultaneous computation of a large number of branch metrics of length n, where n is the number of time steps of the trellis processed in parallel. Such detector designs are referred to as radix-2^n designs. It is often convenient to choose n to be equal to the constraint length of the detector. The constraint length is the smallest number of time steps on the trellis for which loops, i.e. multiple paths connecting a given state at time T to a given state at time (T+n), appear. The calculation of a large number of branch metrics is both time and area consuming, and this is a limiting factor in a high speed detector design.
An increase in the detector's throughput can be achieved by increasing its radix.
However, the area penalty for such an increase can be significant. For example, a radix-32 detector requires the computation of branch metrics for paths connecting each of 16 initial states to each of 16 final states, where two paths are obtained for each connection.

This gives 512 different paths on the corresponding trellis. If this computation is performed in 512 independent blocks, the total area of the branch metric block alone will be approximately 512 × 10000 μm² ≈ 5 mm². In comparison, the total area of currently used high speed radix-4 detectors is approximately 0.6 mm².
In an article entitled "The Viterbi algorithm and Markov noise memory", published in IEEE Transactions on Information Theory, vol. 46. pp. 291301, January 2000, Kavcic and Moura show that the noise in a modern read channel is well approximated by the so-called Markov noise model and is dependent on both previous noise samples and data recorded on the disk, that is data dependent and correlated: n1=(d)W(0, I)+b(d)n11.
Where n1 is the noise sample at time i.
d=(x, xii, x2, ..., x1D+I) is the vector of D most recent recorded data bits. We assume that x1= 1. Parameter D is called data dependence length.
are independent identically distributed standard Gaussian random variables.
c(d) is data-dependent noise variance.
b(d) is an L-dirnensional vector of correlation coefficients.
n1..i=(n1.1, n2, ..., ni-L) is the vector of L past noise samples.
Parameter L is called Markov length and the parameters D, I and L satisfy an inequality DL+1+l Kavcic and Moura disclose an optimal Viterbi detector in the presence of Markov noise.
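To make the model concrete, the following sketch (ours, not from the article or the patent) draws noise samples from this Markov model for small L and D; the functions sigma(d) and b(d) are hypothetical stand-ins for measured, data-dependent channel statistics.

```python
import numpy as np

L, D = 2, 3  # Markov length and data dependence length, kept small for clarity
rng = np.random.default_rng(0)

def sigma(d):
    # hypothetical data-dependent noise strength: transitions made noisier
    return 0.1 + 0.05 * sum(abs(d[k] - d[k + 1]) for k in range(len(d) - 1)) / 2

def b(d):
    # hypothetical data-dependent correlation coefficients, one per past sample
    return np.array([0.3, 0.1]) * (1.0 if d[0] == d[1] else 0.5)

def markov_noise(data):
    """Generate one noise sample per data bit; data bits are in {-1, +1}."""
    past = np.zeros(L)  # (n_{i-1}, ..., n_{i-L}), initially quiet
    noise = []
    for i in range(len(data)):
        d = tuple(data[max(0, i - D + 1): i + 1][::-1])  # (x_i, ..., x_{i-D+1})
        d = d + (1,) * (D - len(d))                      # pad the warm-up period
        n = sigma(d) * rng.standard_normal() + b(d) @ past
        noise.append(n)
        past = np.concatenate(([n], past[:-1]))          # shift the noise history
    return np.array(noise)

print(markov_noise([1, 1, -1, -1, 1, -1, 1, 1]).round(3))
```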
They also show that the required branch metric (BM) is given by

BM = ln σ(d) + (1/σ²(d)) · ((r_i - y_i, ..., r_{i-L} - y_{i-L}) · (1, -b(d)))²

The BM is obtained from the noise model. The BM depends on the most recent recorded bit x_i and the I+L past bits x_{i-1}, x_{i-2}, ..., x_{i-I-L}. Therefore, for the BM to be a function of the trellis state and the most recent transition, the trellis states must be labelled by sequences of recorded bits of length I+L. Such a trellis will have 2^(I+L) states.
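Transcribed directly into software, this branch metric could be computed as in the sketch below; this is an illustration of the formula only, with the per-pattern statistics ln σ(d), 1/σ²(d) and b(d) assumed to be supplied from elsewhere.

```python
import numpy as np

def branch_metric(r_window, y_window, ln_sigma, inv_sigma_sq, b_coeffs):
    """Kavcic-Moura metric: ln s(d) + (1/s^2(d)) * ((r - y) . (1, -b(d)))^2.

    r_window and y_window hold (r_i, ..., r_{i-L}) and (y_i, ..., y_{i-L}).
    """
    diff = np.asarray(r_window, dtype=float) - np.asarray(y_window, dtype=float)
    w = np.concatenate(([1.0], -np.asarray(b_coeffs, dtype=float)))  # (1, -b(d))
    return ln_sigma + inv_sigma_sq * float(diff @ w) ** 2

# Toy numbers with L = 2; all statistics here are arbitrary illustrations
print(branch_metric(r_window=[0.9, -0.1, 1.2], y_window=[1.0, 0.0, 1.0],
                    ln_sigma=-2.3, inv_sigma_sq=100.0, b_coeffs=[0.3, 0.1]))
```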
The major drawback of Kavcic and Moura's method is that it results in a very area-intensive silicon implementation, since the Viterbi detector requires a trellis with 2^(I+L) states. Modern read channels are characterised by large ISI and Markov lengths. As a result, the number of states of the optimal Viterbi detector can be as high as 256. This is a huge increase in the number of states compared with the venerable 16-state E2PR4 detector.
Consequently, VLSI implementations of optimal data-dependent Viterbi detectors are too large and/or too slow to satisfy the requirements of modern read channels.
Kavcic and Moura also present another method, which is similar to the method disclosed by Altekar and Wolf in an article entitled "Improvements in detectors based upon coloured noise", published in IEEE Transactions on Magnetics, vol. 34, no. 1, pp. 94-97, January 1998. The latter method does not account for data-dependent noise. These methods take into account noise correlations without increasing the number of Viterbi detector states compared with an optimal white noise Viterbi detector.
In both methods, received signals are processed in large blocks which require a high radix Viterbi detector. Altekar and Wolf point out that this method may not be practical since high radix Viterbi detectors require a large amount of silicon area.
In both methods a smaller block size can be chosen. However, the methods disclosed only account exactly for noise correlations between signals within a block, whereas correlations between data samples in different blocks are neglected. The drawback of this is that error rate performance suffers due to the neglect of inter-block correlations, which can only be improved by increasing the radix at the expense of silicon area.
In an article entitled "Improving performance of PRML/EPRML through noise prediction", published in IEEE Transactions on Magnetics, vol. 32, no. 5 part 1, pp. 3968-3970, September 1996, Eleftheriou and flirt disclose a method which takes into account noise correlations, but not data dependent noise, without increasing the number of Viterbi Detector's states compared with an optimal white noise Viterbi detector. In their method, past noise samples are estimated using a short local trace-back and subtracted from the current received signal to eliminate the correlated component of noise. The resulting Viterbi detector has 21 states only. However their method does not take into account data dependent noise.
The requirements imposed on modern read channels result in implementations of the Viterbi detector which require large silicon area and struggle to meet the desired throughput. A method and apparatus are disclosed which result in improved error rate, reduction in silicon area and high throughput.
The present invention provides a likelihood detector for receiving a stream of data values from a data medium, wherein the received data values correspond to ideal values but may include added noise that is dependent on previous noise and dependent on data on the data medium, said ideal values being determined by possible values of data on the medium, and for outputting information specifying a sequence of states corresponding to the stream of received data values, said sequence of states corresponding to possible data values on the medium, the detector comprising: means for obtaining or calculating noise statistics for possible sequences of data values on the medium using a noise model; means for calculating weighting values indicating likelihoods that a data value received at a particular time corresponds to a particular state transition, using knowledge of a previous most likely sequence of data values on the medium over previous time steps, the received data value and said calculated noise statistics; means for extending said most likely sequence of data values using said calculated weighting values; and output means for outputting information specifying said determined sequence of states.
Embodiments of the present invention involve a generalisation to data-dependent Markov noise. When deciding which of the two trellis states feeding into a given state is a survivor, an L-step trace-back from each of the contenders is performed. This produces two data patterns of length I+L+1, which is sufficient to determine both contending branch metrics.
A disadvantage of the local noise prediction technique is the possibility of propagating errors through the local feedback loop. This leads to an increased probability of long error bursts at the output of a noise predictive Viterbi detector. Thus, in further embodiments of the present invention, the weighting values are calculated over intervals each corresponding to a plurality of received data values. In other words, the data is processed in blocks, to prevent a local feedback loop from developing.
A further advantageous embodiment of the invention is a radix 2 detector, where the weighting values are calculated over intervals each corresponding to one received data value.
The noise statistics may be calculated as separate white noise and correlated noise components. The correlated noise components may include noise correlation coefficients which measure the correlation strength between the current noise sample and the L most recent noise samples conditional on a sequence of data on the media, where L is the Markov length. The white noise components may include at least one of a logarithm of a white noise strength, and an inverse square of a white noise strength. The noise statistics may be pre-calculated before processing of the received data begins.
The length of the previous most likely sequence of data values which is used for calculation of weighting values, may be determined according to the amount of inter-symbol interference between successive data values on the medium. The number of data values on the medium which are used to determine said noise statistics may be determined according to a noise correlation strength of the noise model.
The detector may be configured to process sequences of encoded data of a chosen size, e.g. the size may be equal to the ISI constraint length. The output values from the detector may be identical to the ideal input values of the detector. However, the output may be an alternative sequence, derivable from the ideal value sequence.
The detector may have a storage means for storing information on preferred paths in the trellis. The size of each said section of input data processed by the detector may be less than 5 times the constraint length, which allows a fast throughput of data. However, this is not essential.
The present invention can be implemented by software or programmable computing apparatus. Thus the present invention encompasses a carrier medium carrying computer readable code for controlling a computer or number of computers to carry out the method. The carrier medium can comprise a transient medium, e.g. an electrical, optical, microwave, RF, electromagnetic, acoustic or magnetic signal (e.g. a TCP/IP signal over an IP network such as the internet), or a carrier medium such as a floppy disk, CD ROM, hard disk, or programmable memory device.
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 is a block diagram of an apparatus for convolution encoding and decoding of data, according to the prior art;

Figures 2A and 2B show an example of a trellis diagram representing a two-state time-dependent process as known in the prior art;

Figure 3 is a series of trellis diagrams representing a series of steps in a Viterbi decoding process, according to the prior art;

Figure 4 shows part of a trellis diagram for a first embodiment of the invention, in which trace-back is performed in single time steps;

Figure 5 is a flowchart showing a method according to a first embodiment of the invention;

Figure 6 shows part of a trellis diagram for a second embodiment of the invention, in which trace-back is performed in blocks of multiple time steps; and

Figure 7 is a block diagram of a likelihood detector according to an embodiment of the invention.
A first embodiment of the invention is illustrated using the trellis diagram of figure 4, and the flowchart of figure 5. This embodiment is a radix-2 example. In this example, the inter-symbol interference (ISI) length I is 4. Also, the Markov length L is 4 and the data dependence length D is 4.
Figure 4 shows a section of the trellis. The states of the trellis are labelled by all combinations of recorded disk bits over the ISI length. Since the ISI length is 4, we have a trellis with 16 states, each state comprising four bits.
In figure 4, five time intervals are shown, and the trellis is divided accordingly into five sections, shown using dashed lines. The most recent time interval is shown at the right hand side of the diagram. The most recent trellis state is represented as x_i, x_{i-1}, x_{i-2}, x_{i-3}.
The two possible previous trellis states are represented as x_{i-1}, x_{i-2}, x_{i-3}, 1 and x_{i-1}, x_{i-2}, x_{i-3}, -1. The state transition value (corresponding to the new data bit on the disk) between each possible previous trellis state and the most recent trellis state is x_i, which may be either +1 or -1.
At earlier time intervals in the figure, the data on the disk is represented by x_{i-j} for larger j, and the states are represented by four adjacent x values. One of the trellis paths is highlighted in bold and marked as "frozen". This trellis path has been calculated to be the most likely path through the trellis, and this calculation process is now described.
The received symbol at time i is modelled by:

r_i = G·(x_i, ..., x_{i-I}) + σ(x_i, ..., x_{i-D+1})·W_i(0,1) + [b_1(x_i, ..., x_{i-D+1}), ..., b_L(x_i, ..., x_{i-D+1})] · [n_{i-1}, ..., n_{i-L}]

In this particular example,

r_i = G·(x_i, ..., x_{i-4}) + σ(x_i, ..., x_{i-4})·W_i(0,1) + [b_1(x_i, ..., x_{i-4}), ..., b_4(x_i, ..., x_{i-4})] · [n_{i-1}, ..., n_{i-4}]

The data pattern for each branch is also determined. Each branch is determined by its initial state and the recorded bit. These combinations of 5 bits are the data patterns for the branches, and there are 32 of them in this example.
Figure 5 is a flowchart showing the calculation process in this embodiment. The process starts at step S501. At step S502, we know the noise model, the trellis structure, and an initial trellis state or path.
Next, at step S503, the following quantities are computed for each of the 32 data patterns, using the noise model:

1) the noise correlation coefficients b_1(x_i, ..., x_{i-4}), ..., b_4(x_i, ..., x_{i-4}), which measure the correlation strength between the current noise sample and the L most recent noise samples conditional on the data pattern; here L=4 is the Markov length;

2) ln σ, where σ expresses the strength of the white noise component;

3) 1/σ².

These statistics are preferably pre-computed and used in the following calculation of the branch metric.
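A sketch of this pre-computation (step S503) is shown below; sigma_of and b_of are placeholder model functions, since the noise model itself is assumed given at step S502.

```python
import itertools
import math

def precompute_statistics(sigma_of, b_of, pattern_len=5):
    """Pre-compute, per data pattern d, the three statistics used by the branch metric."""
    stats = {}
    for d in itertools.product((-1, 1), repeat=pattern_len):  # 2^5 = 32 patterns
        s = sigma_of(d)
        stats[d] = {
            "b": b_of(d),                   # correlation coefficients b_1, ..., b_L
            "ln_sigma": math.log(s),        # ln sigma: white noise strength term
            "inv_sigma_sq": 1.0 / (s * s),  # 1 / sigma^2
        }
    return stats

# Usage with trivial stand-in model functions
table = precompute_statistics(sigma_of=lambda d: 0.1,
                              b_of=lambda d: [0.3, 0.2, 0.1, 0.05])
print(len(table))  # 32
```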
The next step of the flow diagram is step S504, in which received data at time i is used to calculate the corresponding branch metrics. At time step i, a final state x_i x_{i-1} x_{i-2} x_{i-3} has two incoming branches labelled by x_i. The start states of these branches are x_{i-1} x_{i-2} x_{i-3} 1 and x_{i-1} x_{i-2} x_{i-3} -1. To compute the path metric to state x_i x_{i-1} x_{i-2} x_{i-3} we use the branch metric given by Kavcic and Moura:

BM = ln σ(d) + (1/σ²(d)) · ((r_i - y_i, ..., r_{i-4} - y_{i-4}) · (1, -b(d)))²

For the branch with initial state x_{i-1} x_{i-2} x_{i-3} 1 and final state x_i x_{i-1} x_{i-2} x_{i-3}, the data pattern is d = (x_i, x_{i-1}, x_{i-2}, x_{i-3}, 1). We have already computed ln σ(d), 1/σ²(d) and b(d).
Also, y_i is only a function of x_i, x_{i-1}, x_{i-2}, x_{i-3} and x_{i-4}, and these are completely determined by the initial state and the branch label. But we do not have perfect information on the x values needed to determine y_{i-1}, ..., y_{i-4}; for example, y_{i-4} requires x_{i-8}. Further, there are 16 possible paths of length 4 leading to the initial state x_{i-1} x_{i-2} x_{i-3} 1.
Thus, we now perform trace-back for 4 time steps to choose the most likely path of length 4 leading to this state. In this manner we determine the most likely sequence of x values, which we use to determine y_{i-1}, ..., y_{i-4}. We repeat this process for the state x_{i-1} x_{i-2} x_{i-3} -1. At step S505, we now compute the path metric to state x_i x_{i-1} x_{i-2} x_{i-3}, being the minimum of the path metric to each of the two initial states summed with the respective branch metric, as in the usual Viterbi algorithm, and store the decision on which of the two branches was chosen; this will be used in trace-back at the next time step.
The new stored decision may be output to a trace-back block, to be used for future branch metric calculations.
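The L-step local trace-back used in step S504 can be sketched as follows; the layout of the stored decisions (a per-time, per-state table holding the oldest bit of the surviving predecessor, as stored in step S505) is an assumption for illustration, not the patent's specified hardware structure.

```python
def local_traceback(decisions, state, t, steps):
    """Recover the `steps` bits preceding `state` on its survivor path.

    `state` is the tuple (x_t, ..., x_{t-3}) of ISI-length recent bits, newest
    first; decisions[t][state] holds the oldest bit of the chosen predecessor.
    """
    older_bits = []
    for k in range(steps):
        bit = decisions[t - k][state]  # the survivor's choice entering `state`
        older_bits.append(bit)
        state = state[1:] + (bit,)     # step back to the predecessor state
    return older_bits[::-1]            # oldest first
```

The recovered bits, together with the branch's own data pattern, determine the past ideal values y_{i-1}, ..., y_{i-4} needed by the branch metric.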
A drawback of the above disclosed method, and of Eleftheriou and Hirt's method, is that it introduces a feedback loop at every time step. A current estimated noise sample is used to estimate noise samples in the future. It is known that this feedback leads to an increased probability of long error bursts, and that error correction circuitry following the Viterbi detector is susceptible to long error bursts.
A further embodiment of the invention is now described, which overcomes the deficiencies of prior-art methods for Viterbi detectors in the presence of data-dependent correlated noise and reduces the probability of long error bursts.
This embodiment of the invention comprises performing block-wise processing of noisy data and using noise prediction to estimate correlations between blocks instead of neglecting them. In particular, the noise prediction is performed over blocks comprising a number of time steps, rather than at every time step. This results in a Viterbi detector with a small number of states, improved error rate and reduced susceptibility to error bursts.
The proposed scheme, based on block-wise processing supplemented with local noise prediction, is free from the deficiencies of block processing and local noise prediction used separately. It is also very natural for read channel applications, which already use high radix Viterbi detectors with a moderate value of radix (e.g. 4) due to high throughput requirements.
This embodiment of the present invention also has theoretical foundations. It is well known that after a number of steps B in the past, all survivors converge with probability close to 1. For example, Fettweis uses this fact to find a most likely trellis path in US 5,042,036. B is called the survivor length. While the precise expression for the survivor length depends on the particular ISI and noise model, the empirical rule for choosing B is B=5K, where K is the constraint length of the trellis. The number of non-converged survivors monotonically decreases as one traces back. Correspondingly, the reliability of noise estimates using these survivors increases.
In the present embodiment of the invention, a high radix Viterbi detector with local noise prediction, the branch metrics of the most recent branches within the block rely less on local trace-back, as there are no assumptions on the most significant bits of the data pattern. Moreover, the least significant bits result from a trace-back which is more reliable due to the convergence of survivors. Branch metrics of the oldest branches within the block rely on local noise prediction, but their relative contribution to the path metric of paths of length equal to the block length is small. Therefore, the solution is more resistant to error propagation.
The fact that inter-block correlations are estimated in the present scheme, rather than neglected as in Kavcic and Moura and in Altekar and Wolf, means a higher rate of convergence of the high radix Viterbi detector with local noise prediction to the optimal one. As a result, one can have a nearly optimal data-dependent Viterbi detector with a value of radix less than that of a high radix Viterbi detector without noise prediction.
The trellis diagram of figure 6 illustrates a second embodiment of the invention, for radix 4.
Figure 6 shows a trellis diagram similar to that in figure 4. However, in this case, the traceback is over blocks of two time intervals, instead of being over single time intervals.
Again, for each of the 32 data patterns we compute:

1) the noise correlation coefficients b_1(x_i, ..., x_{i-4}), ..., b_4(x_i, ..., x_{i-4});

2) ln σ;

3) 1/σ².

A radix-4 time step consists of two radix-2 time steps, and a final state x_i x_{i-1} x_{i-2} x_{i-3} has four incoming branches labelled by x_i x_{i-1}. The start states of these branches are x_{i-2} x_{i-3} 1 1, x_{i-2} x_{i-3} 1 -1, x_{i-2} x_{i-3} -1 1 and x_{i-2} x_{i-3} -1 -1. To compute the path metric for state x_i x_{i-1} x_{i-2} x_{i-3} we use the branch metric given by Kavcic and Moura extended to 2 radix-2 time steps, which is simply the sum of two radix-2 branch metrics:

BM = ln σ(d_1) + (1/σ²(d_1)) · ((r_i - y_i, ..., r_{i-4} - y_{i-4}) · (1, -b(d_1)))² + ln σ(d_2) + (1/σ²(d_2)) · ((r_{i-1} - y_{i-1}, ..., r_{i-5} - y_{i-5}) · (1, -b(d_2)))²

For the branch with initial state x_{i-2} x_{i-3} 1 1 and final state x_i x_{i-1} x_{i-2} x_{i-3}, the required data patterns are d_1 = (x_i, x_{i-1}, x_{i-2}, x_{i-3}, 1) and d_2 = (x_{i-1}, x_{i-2}, x_{i-3}, 1, 1). We have already computed ln σ(d), 1/σ²(d) and b(d).
Also, y_i and y_{i-1} are only functions of x_i, x_{i-1}, x_{i-2}, x_{i-3}, x_{i-4} and x_{i-5}, and these are completely determined by the initial state and the branch labels. But we do not have perfect information on the x values needed to determine y_{i-2}, ..., y_{i-5}; for example, y_{i-5} requires x_{i-9}.
Further, there are 16 possible paths of length 2 radix-4 time steps leading to the initial state x_{i-2} x_{i-3} 1 1. We now perform trace-back for 2 radix-4 time steps to choose the most likely path leading to this state. In this manner we determine the most likely sequence of x values, which we use to determine y_{i-2}, ..., y_{i-5}.
We repeat this process for the states x_{i-2} x_{i-3} 1 -1, x_{i-2} x_{i-3} -1 1 and x_{i-2} x_{i-3} -1 -1.
We now compute the path metric to state x_i x_{i-1} x_{i-2} x_{i-3}, being the minimum of the path metric to each of the four initial states summed with the respective branch metric, as in the usual Viterbi algorithm, and store the decision on which of the four branches was chosen. This will be used in trace-back at the next time step.
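Assembled in software, the radix-4 block metric is just the sum of two radix-2 metrics, as the sketch below shows; it reuses the hypothetical branch_metric() and statistics table from the earlier sketches, and the index bookkeeping is illustrative only.

```python
def radix4_branch_metric(r, y, stats, d1, d2, i, L=4):
    """Two-time-step block metric: radix-2 metric at time i plus one at time i-1.

    r, y are full arrays of received and ideal values; d1, d2 are the two data
    patterns of the block; stats is the pre-computed table sketched earlier.
    """
    s1, s2 = stats[d1], stats[d2]
    bm1 = branch_metric(r[i - L:i + 1][::-1], y[i - L:i + 1][::-1],
                        s1["ln_sigma"], s1["inv_sigma_sq"], s1["b"])
    bm2 = branch_metric(r[i - 1 - L:i][::-1], y[i - 1 - L:i][::-1],
                        s2["ln_sigma"], s2["inv_sigma_sq"], s2["b"])
    return bm1 + bm2
```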
Figure 7 is a block diagram showing the structure of an apparatus used in one embodiment of the invention. The apparatus has a noise statistic calculating block 701, which uses the noise model to calculate the noise statistics, e.g. as described with reference to step S503 of figure 5.
The output of the noise statistic calculating block 701 is sent to a branch metric block 702, which performs calculations of branch metrics, as described with reference to step S504 of figure 5. The branch metric block uses received data and information about most likely paths for past times, as well as the noise statistics, to calculate branch metrics.
The output of the branch metric block 702 is sent to an add-compare-select block 703, which uses the Viterbi algorithm to determine the path metrics to each state, using the newly calculated branch metrics.
The add-compare-select block 703 output is sent to a trace-back block 704, which reconstructs the most likely paths in the trellis. This information is made available to the branch metric block 702, for future branch metric calculations.
The traceback process in the above examples is performed using the Viterbi algorithm.
However, it is alternatively possible to use modifications of the Viterbi algorithm to perform the traceback.
The present invention is not limited to radix-2, and may also include radix 4, radix 8, radix 16, radix 32 and other values of the radix.
Although the embodiments described have all used hard inputs and hard outputs, it is also possible to use embodiments of the invention with soft inputs and/or soft outputs, e.g. by retaining multiple paths where the path metric difference falls below a threshold value.
The present invention may be implemented as a dedicated semiconductor chip.
Embodiments of the invention may be constructed using at least one standard cell. A standard cell is a logic unit which may be used as a building block for building more complex circuits. Standard cells may be made available as selections from a standard cell library. A customised selection of logic units from the library may be provided on a single chip to allow simplification of a particular implementation of the logic units. In addition, embodiments of the invention may be provided as standard cells, and made available within a standard cell library. However, the present invention is not limited to such a technology or design. A further embodiment of the invention is an integrated circuit including any detector according to the invention. The invention also encompasses circuit boards including any detector according to the invention, and digital electronic devices including any detector according to the invention.
The present invention can be implemented by software or programmable computing apparatus. This includes any computer, including PDAs (personal digital assistants), mobile phones, etc. Thus the present invention encompasses a carrier medium carrying computer readable code for configuring a computer or number of computers as the apparatus of the invention. The carrier medium can comprise a transient medium, e.g. an electrical, optical, microwave, RF, electromagnetic, acoustic or magnetic signal (e.g. a TCP/IP signal over an IP network such as the internet), or a carrier medium such as a floppy disk, CD ROM, hard disk, or programmable memory device.
The code for each process in the methods according to the invention may be modular, or may be arranged in an alternative way to perform the same function. The methods and apparatus according to the invention are applicable to any computer.
The present invention can be used in a wide range of communications technology, including 3G cellular technology (e.g. CDMA2000, W-CDMA, TD-SCDMA), digital video broadcasting (DVB), digital audio broadcasting (DAB), broadband wireless (e.g. LMDS - local multipoint distribution service), multipoint multichannel distribution service (MMDS), wireless LAN (local area network) such as WLAN 802.11a, digital subscriber line technology (xDSL), cable modem and satellite communications.
The present invention may also be applied in other fields of technology where Viterbi detectors are used.
While the invention has been described in terms of what are at present its preferred embodiments, it will be apparent to those skilled in the art that various changes can be made to the preferred embodiments without departing from the scope of the invention, which is defined by the claims.
Claims (41)
- CLAIMS: 1. A likelihood detector for receiving a stream of data values from a data medium, wherein the received data values correspond to ideal values but may include added noise that is dependent on previous noise and dependent on data on the data medium, said ideal values being determined by possible values of data on the medium, and for outputting information specifying a sequence of states corresponding to the stream of received data values, said sequence of states corresponding to possible data values on the medium, the detector comprising: means for obtaining or calculating noise statistics for possible sequences of data values on the medium using a noise model; means for calculating weighting values indicating likelihoods that a data value received at a particular time corresponds to a particular state transition, using knowledge of a previous most likely sequence of data values on the medium over previous time steps, the received data value and said calculated noise statistics; means for extending said most likely sequence of data values using said calculated weighting values; and output means for outputting information specifying said determined sequence of states.
- 2. A likelihood detector as claimed in claim 1, wherein the weighting values are calculated over intervals each corresponding to a plurality of received data values.
- 3. A likelihood detector as claimed in claim 1, wherein the weighting values are calculated over intervals each corresponding to one received data value.
- 4. A likelihood detector as claimed in any previous claim, wherein the noise statistics comprise white noise statistics and correlated noise statistics.
- 5. A detector as claimed in claim 1 or claim 4, wherein said noise statistics comprise noise correlation coefficients which measure the correlation strength between the current noise sample and L most recent noise samples conditional on a sequence of data on the media, where L is the Markov length.
- 6. A likelihood detector as claimed in any one of claims 1, 4 or 5, wherein at least one of the noise statistics is calculated by determining a white noise strength and using a logarithm of said white noise strength.
- 7. A likelihood detector as claimed in any one of claims 1, 4, 5 or 6, wherein at least one of the noise statistics is calculated by determining a white noise strength and using an inverse square of said white noise strength.
- 8. A detector as claimed in any one of the previous claims, wherein the noise statistics are pre-calculated before processing the received data.
- 9. A detector as claimed in any previous claim, wherein weighting values are computed using the following formula: $BM = \ln\sigma(d) + \frac{1}{\sigma^2(d)}\left((r_i - y_i, \ldots, r_{i-4} - y_{i-4}) \cdot (1, -b(d))\right)^2$ where $\ln\sigma(d)$, $1/\sigma^2(d)$ and $b(d)$ are white noise, white noise and coloured noise components respectively, $y_i$ is only a function of a group of state values, and $r_i$ is the received data.
- 10. A detector as claimed in any previous claim, wherein weighting values are computed using the following formula: $BM = \ln\sigma(d_1) + \frac{1}{\sigma^2(d_1)}\left((r_i - y_i, \ldots, r_{i-4} - y_{i-4}) \cdot (1, -b(d_1))\right)^2 + \ln\sigma(d_2) + \frac{1}{\sigma^2(d_2)}\left((r_{i-1} - y_{i-1}, \ldots, r_{i-5} - y_{i-5}) \cdot (1, -b(d_2))\right)^2$ where $\ln\sigma(d_1)$, $1/\sigma^2(d_1)$ and $b(d_1)$ are white noise, white noise and coloured noise components respectively for a first data value on the medium, $\ln\sigma(d_2)$, $1/\sigma^2(d_2)$ and $b(d_2)$ are white noise, white noise and coloured noise components respectively for a second data value on the medium, $y_i$ is only a function of a group of state values, and $r_i$ is the received data.
- 11. A detector as claimed in any previous claim, wherein said means for extending the most likely sequence is configured to use the Viterbi algorithm.
- 12. A sequence detector as claimed in any previous claim, wherein the length of the previous most likely sequence of data values which is used for calculation of weighting values is determined according to the amount of inter-symbol interference between successive data values on the medium.
- 13. A sequence detector as claimed in any previous claim, wherein the number of data values on the medium which are used to determine said noise statistics is determined according to a noise correlation strength of the noise model.
- 14. A likelihood detector for receiving a stream of data values from a data medium, wherein the received data values correspond to ideal values but may include added noise that is dependent on previous noise and dependent on data on the data medium, said ideal values being determined by possible values of data on the medium, and for outputting information specifying a sequence of states corresponding to the stream of received data values, said sequence of states corresponding to possible data values on the medium, the detector comprising: a noise statistic provider for obtaining or calculating noise statistics for possible sequences of data values on the medium using a noise model; a weighting value calculator for calculating weighting values indicating likelihoods that a data value received at a particular time corresponds to a particular state transition, using knowledge of a previous most likely sequence of data values on the medium over previous time steps, the received data value and said calculated noise statistics; a processor for extending said most likely sequence of data values using said calculated weighting values; and an output for outputting information specifying said determined sequence of states.
- 15. A likelihood detector for receiving a stream of data values, wherein the received data values correspond to ideal values but may include added noise that is data dependent and dependent on previous noise, and for outputting information specifying a sequence of states corresponding to the stream of received data values, said sequence of states corresponding to possible ideal data values, the detector comprising: means for obtaining or calculating noise statistics for the received data values using a noise model; means for calculating weighting values indicating likelihoods that a data value received at a particular time corresponds to a particular state transition, using knowledge of a previous most likely sequence of ideal data values over previous time steps, the received data value and said calculated noise statistics; means for extending said most likely sequence of data values using said calculated weighting values; and output means for outputting information specifying said determined sequence of states.
- 16. A likelihood detector for receiving a stream of data values, wherein the received data values correspond to ideal values but may include added noise that is data dependent and dependent on previous noise, and for outputting information specifying a sequence of states corresponding to the stream of received data values, said sequence of states corresponding to possible ideal data values, the detector comprising: a noise statistic provider for obtaining or calculating noise statistics for the received data values using a noise model; a weighting value calculator for calculating weighting values indicating likelihoods that a data value received at a particular time corresponds to a particular state transition, using knowledge of a previous most likely sequence of ideal data values over previous time steps, the received data value and said calculated noise statistics; a processor for extending said most likely sequence of data values using said calculated weighting values; and an output for outputting information specifying said determined sequence of states.
- 17. A method of receiving a stream of data values from a data medium, wherein the received data values correspond to ideal values but may include added noise that is dependent on previous noise and dependent on data on the data medium, said ideal values being determined by possible values of data on the medium, and outputting information specifying a sequence of states corresponding to the stream of received data values, said sequence of states corresponding to possible data values on the medium, the method comprising: obtaining or calculating noise statistics for possible sequences of data values on the medium using a noise model; calculating weighting values indicating likelihoods that a data value received at a particular time corresponds to a particular state transition, using knowledge of a previous most likely sequence of data values on the medium over previous time steps, the received data value and said calculated noise statistics; extending said most likely sequence of data values using said calculated weighting values; and outputting information specifying said determined sequence of states.
- 18. A method as claimed in claim 17, wherein the weighting values are calculated over intervals each corresponding to a plurality of received data values.
- 19. A method as claimed in claim 17, wherein the weighting values are calculated over intervals each corresponding to one received data value.
- 20. A method as claimed in any of claims 17 to 19, wherein the noise statistics comprise white noise statistics and correlated noise statistics.
- 21. A method as claimed in claim 17 or claim 20, wherein said noise statistics comprise noise correlation coefficients which measure the correlation strength between the current noise sample and the L most recent noise samples conditional on a sequence of data on the medium, where L is the Markov length.
- 22. A method as claimed in any one of claims 17, 20 or 21, wherein at least one of the noise statistics is calculated by determining a white noise strength and using a logarithm of said white noise strength.
- 23. A method as claimed in any one of claims 17, 20, 21 or 22, wherein at least one of the noise statistics is calculated by determining a white noise strength and using an inverse square of said white noise strength.
- 24. A method as claimed in any one of claims 17 to 23, wherein the noise statistics are pre-calculated before processing the received data.
- 25. A method as claimed in any one of claims 17 to 24, wherein weighting values are computed using the following formula: $BM = \ln\sigma(d) + \frac{1}{\sigma^2(d)}\left((r_i - y_i, \ldots, r_{i-4} - y_{i-4}) \cdot (1, -b(d))\right)^2$ where $\ln\sigma(d)$, $1/\sigma^2(d)$ and $b(d)$ are white noise, white noise and coloured noise components respectively, $y_i$ is only a function of a group of state values, and $r_i$ is the received data.
- 26. A method as claimed in any one of claims 17 to 25, wherein weighting values are computed using the following formula: $BM = \ln\sigma(d_1) + \frac{1}{\sigma^2(d_1)}\left((r_i - y_i, \ldots, r_{i-4} - y_{i-4}) \cdot (1, -b(d_1))\right)^2 + \ln\sigma(d_2) + \frac{1}{\sigma^2(d_2)}\left((r_{i-1} - y_{i-1}, \ldots, r_{i-5} - y_{i-5}) \cdot (1, -b(d_2))\right)^2$ where $\ln\sigma(d_1)$, $1/\sigma^2(d_1)$ and $b(d_1)$ are white noise, white noise and coloured noise components respectively for a first data value on the medium, $\ln\sigma(d_2)$, $1/\sigma^2(d_2)$ and $b(d_2)$ are white noise, white noise and coloured noise components respectively for a second data value on the medium, $y_i$ is only a function of a group of state values, and $r_i$ is the received data (a sketch of this two-term computation appears after the claims).
- 27. A method as claimed in any one of claims 17 to 26, wherein said extending of the most likely sequence uses the Viterbi algorithm.
- 28. A method as claimed in any one of claims 17 to 27, wherein the length of the previous most likely sequence of data values which is used for calculation of weighting values is determined according to the amount of inter-symbol interference between successive data values on the medium.
- 29. A method as claimed in any one of claims 17 to 28, wherein the number of data values on the medium which are used to determine said noise statistics is determined according to a noise correlation strength of the noise model.
- 30. A method of state sequence likelihood detection, for receiving a stream of data values, wherein the received data values correspond to ideal values but may include added noise that is data dependent and dependent on previous noise, and outputting information specifying a sequence of states corresponding to the stream of received data values, said sequence of states corresponding to possible ideal data values, the method comprising: obtaining or calculating noise statistics for the received data values using a noise model; calculating weighting values indicating likelihoods that a data value received at a particular time corresponds to a particular state transition, using knowledge of a previous most likely sequence of ideal data values over previous time steps, the received data value and said calculated noise statistics; extending said most likely sequence of data values using said calculated weighting values; and outputting information specifying said determined sequence of states.
- 31. A data traffic flow controller including the likelihood detector of any one of claims 1 to 16.
- 32. A magnetic data storage device including the likelihood detector of any one of claims 1 to 16.
- 33. A hard disk read head including the likelihood detector of any one of claims 1 to 16.
- 34. A hard disk read head decoding unit comprising the likelihood detector of any one of claims 1 to 16.
- 35. A hard disk drive including the likelihood detector of any one of claims 1 to 16.
- 36. A computer apparatus containing the hard disk drive of claim 35.
- 37. An optical data storage device including the likelihood detector of any one of claims 1 to 16.
- 38. A communications receiver including the likelihood detector of any one of claims 1 to 16.
- 39. A computer configured as the likelihood detector of any one of claims 1 to 16.
- 40. A carrier medium carrying computer readable code for configuring a computer as the apparatus of any one of claims 1 to 16.
- 41. A carrier medium carrying computer readable code for controlling a computer to carry out the method of any one of claims 17 to 30.
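The noise statistics recited in claims 5 to 8 (and 21 to 24), a white noise strength sigma(d) and Markov correlation coefficients b(d) per data pattern d, can be pre-calculated before any received data are processed. The sketch below shows one plausible way to estimate them from training noise samples by least-squares fitting of an order-L predictor per pattern; the estimator choice, the function name and the NumPy dependency are assumptions of this illustration, not the patent's prescription.

```python
import numpy as np

def estimate_noise_stats(noise, patterns, L):
    """For each data pattern d, fit coefficients b so that n_i ~ b . (n_{i-1}, ..., n_{i-L}),
    and take sigma as the standard deviation of the prediction residual.
    `noise[i]` is a training noise sample; `patterns[i]` is the (hashable) data
    pattern it occurred under. Each pattern needs enough samples for the fit."""
    stats = {}
    for d in set(patterns):
        idx = [i for i in range(L, len(noise)) if patterns[i] == d]
        X = np.array([[noise[i - k] for k in range(1, L + 1)] for i in idx])
        t = np.array([noise[i] for i in idx])
        b, *_ = np.linalg.lstsq(X, t, rcond=None)  # least-squares noise predictor
        sigma = np.std(t - X @ b)                  # residual (white) noise strength
        stats[d] = (sigma, b)
    return stats
```

With such a table in hand, the logarithm and inverse square of the white noise strength referred to in claims 6, 7, 22 and 23 can be tabulated once, which is the pre-calculation of claims 8 and 24.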
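The two-term weighting value of claims 10 and 26 can be read as the claim-9 form evaluated at two adjacent sample positions and summed, so that one weighting value spans an interval of two received samples (claims 2 and 18). The sketch below is a hedged illustration, assuming a Markov length of 4 as the claims' indices suggest; the function names are hypothetical.

```python
import numpy as np

def metric_one_term(r, y, i, sigma, b):
    """ln(sigma) + (1/sigma^2) * ((r_i - y_i, ..., r_{i-4} - y_{i-4}) . (1, -b))^2."""
    n = np.array([r[i - k] - y[i - k] for k in range(5)])  # newest sample first
    taps = np.concatenate(([1.0], -np.asarray(b)))         # the (1, -b) vector
    return np.log(sigma) + (taps @ n / sigma) ** 2

def metric_two_values(r, y, i, sigma1, b1, sigma2, b2):
    """Claims 10/26: the claim-9 form at position i (statistics for d1) plus the
    same form at position i-1 (statistics for d2)."""
    return (metric_one_term(r, y, i, sigma1, b1)
            + metric_one_term(r, y, i - 1, sigma2, b2))
```

For example, with r and y as equal-length sequences of floats and i >= 5, `metric_two_values(r, y, i, 0.5, [0.3, 0.2, 0.1, 0.05], 0.5, [0.3, 0.2, 0.1, 0.05])` returns the combined weighting value for the two hypothesised data values.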
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US66942905P | 2005-04-08 | 2005-04-08 |
Publications (3)
Publication Number | Publication Date |
---|---|
GB0607061D0 GB0607061D0 (en) | 2006-05-17 |
GB2425027A true GB2425027A (en) | 2006-10-11 |
GB2425027B GB2425027B (en) | 2007-08-15 |
Family
ID=36539562
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0607061A Expired - Fee Related GB2425027B (en) | 2005-04-08 | 2006-04-07 | Likelihood detector for data-dependant correlated noise |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2425027B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7953187B2 (en) | 2006-06-21 | 2011-05-31 | Forte Design Systems Limited | Likelihood detector apparatus and method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6438180B1 (en) * | 1997-05-09 | 2002-08-20 | Carnegie Mellon University | Soft and hard sequence detection in ISI memory channels |
US20040268208A1 (en) * | 2003-06-27 | 2004-12-30 | Seagate Technology Llc | Computation of branch metric values in a data detector |
US20050180288A1 (en) * | 2002-04-18 | 2005-08-18 | Infineon Technologies North America Corp. | Method and apparatus for calibrating data-dependent noise prediction |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7653868B2 (en) | Method and apparatus for precomputation and pipelined selection of branch metrics in a reduced state Viterbi detector | |
US7702991B2 (en) | Method and apparatus for reduced-state viterbi detection in a read channel of a magnetic recording system | |
US6597742B1 (en) | Implementing reduced-state viterbi detectors | |
US8699557B2 (en) | Pipelined decision-feedback unit in a reduced-state Viterbi detector with local feedback | |
US7953187B2 (en) | Likelihood detector apparatus and method | |
US6148431A (en) | Add compare select circuit and method implementing a viterbi algorithm | |
US7263652B2 (en) | Maximum likelihood detector and/or decoder | |
US5454014A (en) | Digital signal processor | |
US7487432B2 (en) | Method and apparatus for multiple step Viterbi detection with local feedback | |
EP2339757B1 (en) | Power-reduced preliminary decoded bits in viterbi decoder | |
US20080098288A1 (en) | Forward decision aided nonlinear viterbi detector | |
US20070113161A1 (en) | Cascaded radix architecture for high-speed viterbi decoder | |
US8009773B1 (en) | Low complexity implementation of a Viterbi decoder with near optimal performance | |
US7822138B2 (en) | Calculating apparatus and method for use in a maximum likelihood detector and/or decoder | |
JP3683501B2 (en) | End of coded or uncoded modulation by path-oriented decoder | |
US7653154B2 (en) | Method and apparatus for precomputation and pipelined selection of intersymbol interference estimates in a reduced-state Viterbi detector | |
US7673224B2 (en) | Low power viterbi decoder using a novel register-exchange architecture | |
GB2425027A (en) | Viterbi detector using Markov noise model | |
EP1542370A1 (en) | Method and system for branch label calculation in a Viterbi decoder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
732E | Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977) |
Free format text: REGISTERED BETWEEN 20091105 AND 20091111 |
PCNP | Patent ceased through non-payment of renewal fee |
Effective date: 20150407 |