IMPROVEMENTS RELATING TO DATA PACKET ERROR DETECTION
FIELD OF THE INVENTION
The present invention relates to an improved method of data packet error detection. More particularly, though not exclusively, the present invention concerns an improved method of detecting errors in the communication of data under various error rate conditions, increasing the effective data throughput.
BACKGROUND OF THE INVENTION
For most known data communication systems a checksum is appended to individual data packets to detect the presence of transmission errors. As the received data error rate increases, the probability that the checksum is able to detect corrupted packets actually decreases. In order to maintain a specified undetected packet error performance at higher error rates, the checksum size has to be increased, which reduces the overall data throughput efficiency.
A. GENERAL FRAME STRUCTURE
Figure 1 shows the structure for a typical data frame used to communicate packets of data information including a checksum to detect transmission errors.
The total data frame length of n bits includes the actual data packet of k bits plus the overhead associated with the error detection checksum of p bits. Since the checksum conveys no additional user information the overall data throughput efficiency is:
k/n = k/(k+p)
In practice there are other frame overhead components, such as frame synchronisation, but these are ignored for the purposes of this discussion.
B. STANDARD CHECKSUM PERFORMANCE
The checksum is derived from the data packet, prior to transmission, by an algorithm which conventionally compresses the number of data bits down to a smaller number of parity or checksum bits. Upon reception, the receiver repeats this algorithm on the received data packet to see if the result agrees with the received checksum bits.
A popular checksum approach uses the Cyclic Redundancy Check (CRC) technique. For example, the following 12 bit CRC checksum polynomial can detect all single, double and triple bit errors.
12 bit CRC polynomial: P(X) = 1 + X + X^2 + X^3 + X^5 + X^7 + X^11 + X^12
This factors as P(X) = (1 + X)(1 + X^2 + X^5 + X^6 + X^11)
The (1 + X) factor ensures that the CRC detects all odd numbered error patterns. The second factor, of degree 11, is primitive and enables all double bit error combinations to be detected for data frames of up to 2047 bits. The CRC also detects any single error burst of up to 12 bits.
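By way of illustration only, the following Python sketch shows how a checksum based on the above polynomial could be computed and checked. It is a minimal sketch, not part of the original disclosure: the function name crc12, the constant POLY and the example packet are assumptions made purely for this example.

    POLY = 0x8AF  # coefficients of X^0..X^11 of P(X); the X^12 term is
                  # implicit in the 12 bit register width

    def crc12(bits):
        """Return the 12 bit CRC of a sequence of data bits (0 or 1)."""
        reg = 0
        for bit in bits:
            feedback = ((reg >> 11) & 1) ^ bit  # register MSB XOR next data bit
            reg = (reg << 1) & 0xFFF            # shift left, keep 12 bits
            if feedback:
                reg ^= POLY                     # divide (XOR) by the generator
        return reg

    # Transmitter: append the checksum to the data packet.
    packet = [1, 0, 1, 1, 0, 0, 1, 0]
    checksum = crc12(packet)  # this LFSR form implicitly multiplies by X^12
    frame = packet + [(checksum >> i) & 1 for i in range(11, -1, -1)]

    # Receiver: a zero remainder over the whole frame indicates no detected error.
    assert crc12(frame) == 0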
Figure 2 shows the undetected error performance for this typical 12 bit CRC checksum, for several sizes of data packet, against Bit Error Rates (BERs) from 0 to 50%.
For BERs below 0.01 (1%) there will only be a few data errors per frame, depending on the frame length. Since the CRC checksum used is able to detect all error combinations of up to 3 bit errors, the probability of an undetected frame error is very low, especially for the shorter frame lengths. However, as the BER increases towards a maximum figure of 50%, the undetected error performance degrades and approaches a final value which is independent of the data packet length. This limiting value is to be expected since, under random data conditions, the probability of the checksum being correct just by chance is simply 2^-p, which for this 12 bit checksum is 2.44x10^-4. Note that many CRC checksum polynomials have the less desirable attribute that they produce undetected error rates that exceed the final 50% BER value for intermediate BERs.
This can pose a particular problem where the undetected error performance is critical, such as for high integrity data applications. The problem is exacerbated where the communication link is subject to wide fluctuations in quality, for example due to multipath fading, blocking and local interference effects. This is especially the case for mobile applications.
To improve the checksum performance at higher error rates, the size of the checksum could be increased. Table 1 shows the undetected error probability performance at 50% BER and typical data throughput efficiencies (for a 100 bit data frame) for different checksum sizes.
Table 1: Checksum Performance at 50% BER

    Checksum size    Undetected error probability (2^-p)    Throughput efficiency (100 bit frame)
    12 bits          2.44x10^-4                              88%
    16 bits          1.53x10^-5                              84%
    20 bits          9.54x10^-7                              80%
    24 bits          5.96x10^-8                              76%
For example, providing a low undetected error probability of better than 1x10^-6 at a 50% BER requires a checksum of at least 20 bits. This results in a corresponding reduction in data throughput efficiency when compared to the use of a shorter checksum. The use of a larger data packet size helps to offset the impact of the increased checksum size, but reduces the checksum performance at lower BERs and leads to increased frame error rates and greater throughput delays.
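The figures in Table 1 follow directly from the 2^-p limiting value and the throughput efficiency expression given earlier; as a sketch (assuming a 100 bit total frame length, so that k = 100 - p), the calculation can be reproduced in a few lines of Python.

    # Undetected error probability at 50% BER (2^-p) and throughput
    # efficiency k/n for a 100 bit data frame carrying a p bit checksum.
    n = 100
    for p in (12, 16, 20, 24):
        k = n - p
        print(f"p = {p:2d} bits: undetected error prob = {2**-p:.2e}, "
              f"efficiency = {k / n:.0%}")
    # e.g. p = 20 bits gives 9.54e-07, comfortably better than 1x10^-6.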
Another way to detect badly corrupted data frames is to use information from the data demodulator such as any loss of lock indication. However this information is typically subject to delays and is deliberately designed not to respond to short error bursts in order to maintain lock during poor signal conditions.
OBJECTS AND SUMMARY OF THE INVENTION
The principal object of the present invention is to overcome or at least substantially reduce some of the abovementioned drawbacks.
It is a principal object of the present invention also to provide an efficient and reliable method of deriving quality information from a received data packet and to permit a reduction in the overall size of the checksum so as to improve the resultant data integrity and throughput efficiency.
It is another principal object of the present invention to provide the use of a quality value derived from a frame of data in combination with a checksum enabling an improved error detection performance to be achieved under all data error conditions.
It is another principal object of the present invention to provide a method of error detection which can be used with different types of checksum (for example, CRC or parity schemes) and which provides the use of a quality value enabling an accurate estimate of the signal to noise ratio for the received signal to be achieved.
In broad terms, the present invention resides in the concept of using data quality information derived from the whole data frame to assist the checksum at high Bit Error Rates, thereby obviating the need to increase the checksum size.
Thus, according to the present invention there is provided a method of detecting databit/symbol errors, the method comprising:
(a) receiving and demodulating a data frame, which data frame comprises a plurality of databits/symbols, and providing a soft decision output for each of the demodulated databits/symbols;
(b) deriving a quality value for each demodulated data frame in dependence upon each said soft decision output and comparing the quality value with a predetermined threshold quality value;
(c) providing a first confidence decision on the likelihood of the integrity of the demodulated data in dependence upon the comparison of said derived quality value with the threshold quality value;
(d) either rejecting the demodulated data when the first confidence decision output is indicative of a quality value below the threshold value associated with a poor integrity of the data, or alternatively, processing the demodulated data when the first confidence decision output is indicative of a quality value exceeding the threshold value associated with a satisfactory integrity of the data, said processing further comprising calculating a checksum value associated with the demodulated data and comparing the calculated checksum value with the actual received checksum value; and
(e) providing a second confidence decision on the likelihood of the integrity of the demodulated data in dependence upon the processing step of (d), the data being rejected when an error is identified by means of the checksum comparison in (d) or the data being accepted when no error is identified by means of the checksum comparison in (d).
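Purely by way of illustration of steps (a) to (e), the following Python sketch combines the frame quality test with the checksum comparison. It assumes 3 bit soft decisions in the range 0 to 7, a quality value convention in which a low value indicates high confidence (so a frame is rejected when its quality value exceeds the threshold), the crc12 function sketched earlier, and an arbitrary threshold value; none of these particulars is mandated by the method.

    FQV_THRESHOLD = 1.0  # assumed example value, chosen per link conditions

    def detect_frame(soft_decisions):
        """Return (accepted, data_bits) for one demodulated frame."""
        # (b) derive a quality value for the frame from the soft decisions
        fqv = sum(min(sd, 7 - sd) for sd in soft_decisions) / len(soft_decisions)
        # (c), (d) first confidence decision: reject poor quality frames outright
        if fqv > FQV_THRESHOLD:
            return False, None
        # hard decisions: soft values 4..7 map to bit 1, values 0..3 to bit 0
        bits = [1 if sd >= 4 else 0 for sd in soft_decisions]
        # (d), (e) second confidence decision: the checksum comparison
        if crc12(bits) != 0:  # nonzero remainder indicates a detected error
            return False, None
        return True, bits[:-12]  # accept, stripping the 12 checksum bits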
In accordance with an exemplary embodiment of the invention which will be described hereinafter in detail, the processing stage comprises performing forward error correction (FEC) decoding on the demodulated data so as to convert any received data symbols into databits. The forward error correction is conveniently effected on the data prior to the checksum comparison.
Advantageously, different soft decision output criteria can be applied to different data in the method, enabling different quality values to be derived. The soft decision output criteria can be based on angle and/or voltage level criteria for example.
Advantageously, different discrete soft decision levels (for example, designated quantisation levels: 4 or 8 level and so forth) can be applied to the data to derive different quality values.
Conveniently, different linear/non-linear techniques and/or different data modulation techniques, phase shift keying and amplitude shift keying for example, can be applied to the method enabling different soft decision outputs to be provided and different quality values to be derived.
Conveniently, different threshold quality values can be selectively applied to the data in accordance with different levels of quality value as derived for different data frames.
Further, automatic repeat request (ARQ) techniques can be applied to the method enabling the various steps of the method, as described hereinabove, to be automatically repeated until data frames with detected errors have been received correctly.
The present invention also extends to a data processing system which is adapted and arranged to carry out the method described hereinabove.
It is to be appreciated that the present invention can be conveniently implemented in hardware or software. Also, the present invention finds utility for various space-borne or terrestrial data communication applications.
The above and further features of the present invention are set forth with particularity in the appended claims and will be described hereinafter with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows a known data frame structure;
Figure 2 shows the undetected error performance of a typical 12 bit CRC checksum;
Figures 3 and 4 show certain performance characteristics of the present invention;
Figure 5 is a flowchart of the error detection method of the present invention; and
Figures 6 to 9 show the error performance characteristics of the present invention compared to known techniques.
DETAILED DESCRIPTION OF THE EMBODIMENT
In this specification, it is to be understood that the term "soft decision" means or covers any simple decision as to whether a particular measurement, typically corrupted by noise, is to be assigned to one of two or more possible expected states and includes an indication of the actual measurement value with respect to the expected values, thereby providing an indication of confidence in the decision. It is also to be understood that the term "quality value" means or covers any measure of confidence in a decision, this being independent of the actual decision state. A "quality value" can be extracted from a "soft decision".
As previously mentioned, rather than increase the checksum size the present invention proposes the use of quality information derived from the whole data frame to assist the checksum for higher BERs. The actual data modulation technique is not critical but an important feature is that the receiver's demodulator provides a soft decision output for each databit. A typical 8 level (3 bit) soft decision for a binary data output from such a demodulator is shown in Table 2.
Table 2: Typical 3 Bit Soft Decision Values

    Soft Decision    Data Bit Decision    Confidence
    0                0                    Highest
    1                0                    High
    2                0                    Medium
    3                0                    Lowest
    4                1                    Lowest
    5                1                    Medium
    6                1                    High
    7                1                    Highest
For a quality estimate the actual data polarity can be discarded, since the transmitted polarity is not known, and the soft decision can be reduced to a 4 level (2 bit) Quality Value (QV) as shown in Table 3.
Table 3: Derived 2 Bit Quality Values

    Soft Decision    Quality Value (QV)
    0 or 7           0 (highest confidence)
    1 or 6           1
    2 or 5           2
    3 or 4           3 (lowest confidence)
A perfect data bit with no noise added has a QV of 0, whereas data bits demodulated from just random noise generally have a mean QV of 1.5 depending on the exact type of soft decision employed.
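Expressed in code, the reduction of a Table 2 soft decision to a Table 3 quality value is a one line mapping; the short sketch below (illustrative only) also confirms the mean QV of 1.5 quoted above for noise-only soft decisions that are uniformly distributed over the 8 levels.

    # Map a 3 bit soft decision (0..7) to a 2 bit quality value (0..3):
    # the polarity is discarded and only the distance from a fully
    # confident decision (0 or 7) is retained.
    def quality_value(sd):
        return min(sd, 7 - sd)

    # Noise-only outputs are roughly uniform over 0..7, giving a mean QV of 1.5.
    assert sum(quality_value(sd) for sd in range(8)) / 8 == 1.5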
The mean QV for a signal plus Additive White Gaussian Noise (AWGN), for a Phase Shift Keyed (PSK) demodulator providing 8 level soft decisions, at different energy per bit to noise power density (Eb/No) ratios is shown in Figure 3.
This shows that the QV can be advantageously used as a measure of the received signal to noise ratio, from which it is then possible to estimate the BER.
Each QV is subject to statistical variation and so by combining the QVs for all n frame data bits an overall Frame Quality Value (FQV) can be formed as below:
FQV = (1/n) Σ QV_b   (summed over b = 1 to n)
The mean FQV is the same as the individual QVs but the variance reduces as the frame size is increased. The standard deviation of the FQV against Eb/No for different frame lengths (n) is shown in Figure 4.
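This variance reduction can be illustrated with a simple Monte Carlo sketch (again an assumption-laden illustration, not the simulation used for Figure 4), here for the noise-only case where the soft decisions are uniformly distributed:

    import random
    import statistics

    def fqv(n):
        """Frame Quality Value for a frame of n noise-only soft decisions."""
        return sum(min(sd, 7 - sd) for sd in
                   (random.randrange(8) for _ in range(n))) / n

    for n in (100, 400, 1600):
        samples = [fqv(n) for _ in range(2000)]
        print(n, round(statistics.stdev(samples), 3))
    # The printed standard deviation shrinks roughly as 1/sqrt(n), so
    # longer frames give a more reliable quality estimate.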
By applying a suitable threshold, the FQV in the invention can therefore be advantageously used to reject data frames with low signal to noise ratios, which carry a greater risk of the checksum failing to detect bit errors. Due to the statistical variation in FQVs, the threshold must not be set so low that it also rejects acceptable data frames. Note that the longer frame lengths, which are more prone to CRC checksum failure, have FQVs with less variance and provide a more accurate estimate of the BER.
USE OF FORWARD ERROR CORRECTION (FEC)
For the general reduction of data bit errors over the communication link the data frame preferably incorporates some form of Forward Error Correction (FEC). The proposed error detection method of the invention is compatible with the use of FEC. In this case the Frame Quality Value (FQV) is generated from the demodulated soft decision symbols rather than data bits. The symbol error rate, prior to FEC decoding, is generally much higher than the corresponding data bit error rate. Therefore, the FQV detection threshold is set at a different level to the non-FEC case to maintain a desired optimum performance.
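In implementation terms this can amount to no more than selecting between two pre-tuned thresholds; the values below are purely assumed for illustration.

    # Assumed example thresholds, tuned by simulation for each mode.
    FQV_THRESHOLD_NO_FEC = 1.0
    FQV_THRESHOLD_FEC = 1.8  # higher: the pre-decoder symbol error rate
                             # exceeds the corresponding data bit error rate

    def fqv_threshold(fec_in_use):
        return FQV_THRESHOLD_FEC if fec_in_use else FQV_THRESHOLD_NO_FEC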
Referring now to Figure 5, this shows in flowchart form how the various decision levels and steps of the method of the invention are taken in order to provide improved error detection of incoming data packets. In operation of the method, therefore, a frame of incoming databits/symbols is first received and demodulated to provide a number of soft decisions 1. A frame quality value (FQV) is produced using the soft decisions and this frame quality value is compared with a predetermined threshold value 2. Selection of the threshold quality value is determined by the operator, taking account of, inter alia, the nature of the data received and the type of data decoding/processing required.
A confidence decision 3, 4 is then made as to whether or not the data has a sufficiently high integrity based on comparison of the frame quality value with the threshold quality value. When the frame quality value is found to be below the threshold quality value 3, the data is rejected. When the frame quality value exceeds the threshold quality value 4, the data is maintained for further processing. The processing stage preferably comprises (a) a first step performing a forward error correction (FEC) decode 5 on the data to convert, if required, any number of data symbols into databits and (b) a second step performing a checksum calculation and comparing this calculated checksum with the actual received checksum value associated with the received data 6. The FEC decoding step 5 is optional. The checksum calculation and comparison 6 conveniently identifies the presence of errors in the data which may remain after poor quality frames have already been rejected by the FQV. When an error is identified 7, a command signal is transmitted to cause the data to be rejected. When no such error is found 8, a command signal is transmitted to cause the data to be accepted. The error detection method of the invention is thereby completed.
The above described steps of the method of the invention can be controllably repeated if requested/required.
Note in Figure 5 the extra key steps which generate and check the Frame Quality Value (FQV) against the threshold quality value prior to the normal checksum procedure. This improves the rejection of poor quality frames which could otherwise cause undetected error problems using the checksum. Note also that the extra processing involved with the generation of the Frame Quality Value (FQV) is minimal, compared to a typical CRC checksum calculation or FEC decoding, and could easily be implemented in hardware or software.
PERFORMANCE COMPARISON
Figure 6 shows a typical performance comparison of the undetected frame error rates for the CRC checksum, the FQV and the proposed combined error detection procedure of the invention. These simulation results are based on a simple BPSK demodulator, with no FEC, providing 3 bit soft decisions with 100 bit data frames, which include a 12 bit CRC checksum as previously described. The result for each Eb/No value is based on the analysis of 10^6 data frames. The simulation actually showed no undetected frame errors for the combined scheme, so the graph shows the predicted performance obtained by using the product of the two independent detectors.
This indicates that the proposed error detection method of the invention has a worst case performance of <10^-8 at an Eb/No of 4dB, where the undetected frame error rates of the two individual detectors overlap slightly. This is a significant improvement with respect to the simple CRC checksum method, which has a worst case performance of over 10^-4 for all Eb/No values below 3dB.
The degree of overlap between the individual detectors, which determines the combined error detection performance, can be adjusted by changing the FQV threshold. Other simulations with an increased threshold value show a greater
overlap and consequently a higher undetected frame error rate for the combined scheme, which agrees statistically with the product of the two independent detector performance values.
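The combined figure is simply the product of the two individual undetected error rates. As a worked illustration (the FQV miss rate below is an assumed value, not taken from the simulations):

    # Combined undetected frame error rate as the product of two
    # independent detectors at the point where their curves overlap.
    crc_undetected = 2.44e-4  # limiting value of the 12 bit CRC (2^-12)
    fqv_undetected = 4.0e-5   # assumed FQV miss rate at the same Eb/No
    print(crc_undetected * fqv_undetected)  # ~9.8e-09, i.e. below 10^-8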
Alternatively, decreasing the FQV threshold moves the rising edge of the FQV detector to the right and therefore reduces the number of undetected frame errors for the combined detection process. However, this increases the probability of good data frames being rejected, as shown in Figure 7, and therefore a trade-off according to the required performance is necessary.
Figure 7 shows the frame error rates for the standard CRC scheme and the combined approach using the same FQV threshold as for Figure 6. As can be seen, the proposed error detection method causes the rejection of extra frames in the Eb/No region from 3 to 8 dB. In practice this is not a significant problem since, under normal operating conditions, it is expected that an Eb/No figure of better than 10dB can be used to guarantee a nominal frame error rate of <10^-3. In the proposed invention, when the signal to noise ratio degrades, due to fading or blocking effects for example, although some good data frames may be rejected, there is significantly less likelihood of erroneously accepting corrupted data frames. This is an important advantage associated with the proposed method of the invention.
The graph in Figure 8 shows the performance for an application using Forward Error Correction (FEC). This simulation was done for a 400 bit frame length employing rate 1/2 convolutional coding and using a 3 bit soft decision Viterbi decoder. The FQV makes use of the same soft decision values as used by the Viterbi decoder. Due to the application of FEC, there is a relatively higher symbol error rate present at the demodulator output, and therefore an increased FQV threshold has been used to provide optimum overall performance.
With this threshold setting the combined detector performance shows a maximum undetected frame error rate of just below 10^-6 at 3dB. As for the non-FEC case, there is a slightly increased frame error rate, as shown in Figure 9.
Having thus described the present invention by reference to a preferred embodiment, it is to be appreciated that the embodiment is in all respects exemplary and that modifications and variations are possible without departure from the spirit and scope of the invention. For example, it is possible that the order of the steps of the method of the invention can be altered and the same technical effect would be obtained. Further, whilst in the described embodiment the processing includes a forward error correction of the data, the processing could alternatively be performed without use of forward error correction so as to realise the technical effect of the invention.
The invention finds utility for various applications, for example space-borne or terrestrial data communication applications.