GB2363484A - Adaptive pre/post processing of data for compression/decompression - Google Patents


Info

Publication number
GB2363484A
GB2363484A (application GB0014890A)
Authority
GB
United Kingdom
Prior art keywords
data
processor
symbols
data symbol
weighting factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0014890A
Other versions
GB0014890D0 (en)
Inventor
Jason Charles Pelly
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Europe Ltd
Original Assignee
Sony United Kingdom Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony United Kingdom Ltd filed Critical Sony United Kingdom Ltd
Priority to GB0014890A priority Critical patent/GB2363484A/en
Publication of GB0014890D0 publication Critical patent/GB0014890D0/en
Publication of GB2363484A publication Critical patent/GB2363484A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Image Processing (AREA)

Abstract

A data processor has a pre-processor which is arranged to receive source data symbols and to generate modelled data symbols having substantially reduced redundancy. The pre-processor generates the modelled data symbols by forming a prediction metric, 102, for each original source data symbol from a comparison between each original source data symbol and at least one preceding source data symbol weighted by a corresponding weighting factor, adapting the corresponding weighting factor to the effect of reducing the prediction metric, and forming each of the modelled data symbols, 106, from a difference between each original data symbol and the at least one preceding data symbol weighted by the adapted corresponding weighting factor. The pre-processor may adapt the corresponding weighting factor values by selecting values which produce a minimum value of the prediction metric formed from a calculated sum of squared error predictions. The data processor may further include a compression encoding processor coupled to the pre-processor, which is arranged to generate compression encoded data. A complementary decompression arrangement is described. The data may be colour pixel data.

Description

Data Processor and Method of Processing Data
Field of Invention
The present invention relates to data processors and methods of processing data. In preferred embodiments, the data processors operate to data compression encode data, and to data compression decode data.
Background of Invention
A process in which data symbols from a source are converted into data symbols having a different probability of occurrence is known as modelling. Typically the modelled data symbols have a reduced redundancy in comparison to the original source data symbols. An example application of the modelling process is the field of data compression encoding.
Data compression encoders operate in accordance with a compression encoding algorithm which, if successful, has the effect of compressing an amount of source data into a substantially reduced amount of compression encoded data. For some forms of compression encoding there is no loss of information when the source data is compression encoded. Such forms of data compression encoding are known as 'loss-less' because, although the amount of compression encoded data is less than the original source data, the compression encoded data retains all the information which the source data represents. Therefore, for example, 'loss-less' data compression is distinguished from data compression encoding techniques such as the Motion Picture Experts Group (MPEG) II, in which encoding includes the step of quantising a Discrete Cosine Transform representation of the pixels of a source data image.
Although this improves the efficiency with which source data images can be compressed, to the effect that a compression ratio of the source data amount with respect to the compression encoded data amount is increased, this is done by discarding information. This information cannot be recovered and reproduced at a corresponding data compression decoder. However, the application of the quantisation step to the representation of the data is such that the effect on a compression encoded and decoded image is not visually disturbing to the human eye.
An example of a loss-less coding process is the 'WINZIP' software application which is commonly used on Personal Computers (PCs) to reduce an amount of data required to represent a source data file. The process of encoding and decoding is reversible, so that all the information in the original source data file can be recovered when decoding is performed. A further example of a loss-less data compression encoding process is the Joint Photographic Experts Group (JPEG) encoding process, which is typically applied to digital representations of still images generated by, for example, digital video cameras.
In order to compress an amount of data required to represent the digital images, it is known to pre-process the source data. The pre-processing step, which is known as data modelling, serves to pre-process the source data symbols to produce modelled data symbols, before the modelled symbols are data compression encoded. An effect of data modelling is to transform the source data stream into a new data stream having lower entropy. As a result, the new data stream can be compression encoded by a greater amount.
Huffman coding is an example of a data compression encoding algorithm which benefits from the step of pre-processing the source data. In Huffman coding, data compression encoding is effected by assigning the characters of the source data to the nodes of a tree in accordance with a probability of occurrence of the source data symbols. For the binary case, the bits which comprise the compression encoded data symbols are assigned to the branches of the tree, so that the path from the root of the tree to a leaf node representing a source data symbol provides the compression code word for that source data symbol. In general, in common with other compression encoding algorithms, Huffman coding provides greatest data compression for data sources with low entropy. Data sources with low entropy can be characterised as having a probability distribution concentrated around certain data symbols, because there is a significant variation in the probability of occurrence of the source data symbols. In contrast, data sources with high entropy are characterised as having a substantially flat probability distribution, in that the symbols of the source occur with substantially equal probability. The purpose of the pre-processor is to convert the symbols of the data source into modelled data symbols having lower entropy. The compression encoder, which may for example perform compression encoding in accordance with Huffman coding, can therefore provide greater data compression when operating on the modelled data symbols.
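As an illustration of the Huffman principle described above, the following sketch (not part of the patent; the function name and symbol alphabet are illustrative) builds the code tree by repeatedly merging the two least probable nodes, so that frequent symbols receive short code words:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build Huffman code words: frequent symbols get short codes.

    Minimal sketch -- a real encoder would also handle the single-symbol
    case and emit a canonical code table for the decoder.
    """
    # Each heap entry: (frequency, tie_breaker, {symbol: code_so_far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)   # the two least probable nodes...
        f1, _, c1 = heapq.heappop(heap)
        # ...are merged; the two branches are labelled 0 and 1
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (f0 + f1, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# 'a' is most frequent, so its code word is shortest
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
```

As the document notes, the gain over fixed-length coding grows as the source distribution becomes more 'peaky'.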
For the example of loss-less JPEG, the modelling step is known as Differential Pulse Code Modulation (DPCM) modelling, and the compression encoding is performed in accordance with either Huffman coding or arithmetic coding. In DPCM modelling, the entropy of the source data is reduced by generating an estimate of each of the source data symbols from a plurality of preceding source data symbols weighted respectively by a corresponding weighting factor, and forming a new data stream with modelled data symbols representative of a prediction error formed by a difference between each prediction of the data symbols and the data symbols themselves.
A general characteristic of loss-less data compression encoding is that the compression ratio provided is not as high as can be achieved with compression encoding schemes which discard information.
Summary of Invention
According to the present invention there is provided a data processor which is arranged in operation to generate modelled data from source data, the data processor comprising a pre-processor which is arranged to receive source data symbols and to generate modelled data symbols representing the source data symbols by generating a prediction metric for each original source data symbol from a comparison between at least one preceding source data symbol and at least one other preceding source data symbol weighted by a corresponding weighting factor, adapting the corresponding weighting factor to the effect of reducing the prediction metric, and forming each of the modelled data symbols from a difference between each original data symbol and the at least one other preceding data symbol weighted by the adapted corresponding weighting factor.
An improvement in data modelling finds application in any field where it is beneficial to reduce the entropy of the source data. For example, in the field of compression encoding, an improvement in the amount by which source data can be compressed can be provided by an improvement in the compression encoding process, or by an improvement in the modelling pre-process. The present invention therefore finds application in improving the amount of data compression through an improvement in the modelling pre-process. This has been made by identifying that the performance of the modelling pre-process, and therefore the data compression encoder, is dependent on the local statistics of the data source. Each of the original source data symbols is modelled by generating a prediction of the original source data symbol. For each original source data symbol the preceding source data symbol is selected, and a prediction metric is formed from a comparison between the preceding source data symbol and at least one other preceding source data symbol weighted by a corresponding weighting factor. The prediction is made of the original source data symbol from the weighted value of the at least one other preceding source data symbol. The prediction will most closely match the original source data symbol if the preceding source data symbol and the original source data symbol differ only by a small amount or not at all. By adapting the weighting factor or factors used in modelling the data symbols performed by the pre-processor in accordance with the difference between the preceding data symbols and the prediction of the preceding data symbols, the modelling is adapted to the statistics of the data source, with an effect that, for example, the data compression provided is increased.
Although DPCM modelling has been used as an example to explain data modelling, it will be understood that the present invention is neither limited to nor limited by the term DPCM modelling, but finds application in any pre-process which is performed to reduce the entropy of a data source. Similarly, it will be appreciated that Huffman coding has been used as an example of a data compression encoding algorithm. The invention is, therefore, not limited to any particular form of data compression encoding. More particularly, the invention finds application in both loss-less data compression encoding and compression encoding in which information is discarded.
In preferred embodiments, the pre-processor may adapt the corresponding weighting factor values by selecting, from a range of weighting factor values, a value for the corresponding weighting factor which produces a minimum value of the prediction metric formed from a calculated sum of squared error predictions. Thus, by searching a range of values and evaluating the prediction metric for values within the range, the weighting factor values which minimise the prediction metric can be found.
To improve the likelihood of generating a prediction for a data symbol by minimising the prediction metric, it is preferable to use a plurality of preceding data symbols in generating the prediction metric. As such, in preferred embodiments the at least one preceding data symbol may be a plurality of preceding data symbols, the corresponding weighting factor being a plurality of corresponding weighting factors, each weighting factor being adapted independently by selecting from a range of weighting factor values.
Although the data compression encoder may provide data compression for any data source, the present invention finds particular application where the data source may represent images, the data symbols being representative of row and column components of at least one part of an image. For applications in which the data source may be representative of an image, the plurality of preceding source data symbols may comprise at least one data symbol in the same row and at least one data symbol in the same column, each of which may be weighted by a corresponding weighting factor. In this case, the data symbols may be pixel values of at least part of the image.
As already explained, the predictions of the original data symbols are generated from preceding data symbols, and it is with respect to the difference between them that the weighting factors are adapted. However, at the start of a new row there are no preceding symbols with respect to which an adaptation of the weighting factor can be made. Therefore, the weighting factors for generating the prediction of a data symbol at the start of a new row in the image may be generated from an average of a plurality of the weighting factors from the previous row.
In order to provide a further improvement in matching the modelling process to the statistics of the source data, the number of the preceding symbols which are weighted by a respective corresponding weighting factor may be varied. This may be varied in accordance with, for example, the prediction metric. This provides an advantage because the likelihood that the predictions of the data symbols will match substantially the original data symbols depends upon the number of preceding symbols which are used to form the prediction metric and used to generate the predictions of the data symbols, and upon the local statistics of the data source. As such, adapting the number of the preceding symbols which are used to generate the prediction metric provides an improvement in the likelihood that the data symbol predictions will match the original data symbols, because the modelling process will be better matched to the statistics of the particular data source.
A further improvement in the likelihood that the predictions of the data symbols match the original source data symbols can be provided in embodiments of the present invention in which the weighting factors may be made more important for preceding symbols closer to the position of the original data symbol being predicted.
As mentioned above, the present invention finds application with compression encoding and decoding to improve an amount of data compression which can be achieved. As such, the data processor may comprise a compression encoding processor coupled to the pre-processor, which is arranged in operation to generate compression encoded data by representing the modelled data symbols as compression encoded symbols.
According to a further aspect of the present invention there is provided a data processor according to patent claim 11. This aspect of the present invention provides a data processor which operates to effect a reverse modelling process.
According to a further aspect of the present invention there is provided a data processor according to patent claim 12. This aspect of the present invention provides a data processor which operates to effect a data compression decoding operation.
Various further aspects and features of the present invention are defined in the appended claims.
Brief Description of Drawings
Embodiments of the present invention will now be described by way of example only with reference to the accompanying drawings wherein:
Figure 1 is a schematic block diagram of a general data compression encoder and decoder arrangement,
Figure 2 is a schematic block diagram of the data compression encoder which appears in figure 1,
Figure 3 is a representation of the operation of one of the pre-processors appearing in the data compression encoder shown in figure 2,
Figure 4 is a schematic illustration of the DPCM modelling process,
Figure 5 is a graphical representation of the frequency distribution of pixel values within a sample test image,
Figure 6 is a graphical representation of the probability of occurrence of data symbols after DPCM modelling,
Figure 7 is a graphical illustration of the modelled data symbols reduced by the modulus of the alphabet size of the data symbols,
Figure 8 is a schematic illustration of part of a Variable Weight DPCM modelling process,
Figure 9 is a schematic illustration of a further part of the Variable Weight DPCM modelling process,
Figure 10 is a graphical representation of an amount of data required to compression encode sample test images compared to the same compression encoded images after Variable Weight DPCM modelling,
Figure 11 is a graphical representation of a percentage improvement provided by the Variable Weight DPCM modelling with respect to DPCM modelling,
Figure 12 is a flow diagram representing a part of the Variable Weight modelling process,
Figure 13 is a flow diagram of a process of updating the weights in the Variable Weight DPCM modelling process, and
Figure 14 is a schematic block diagram of the data compression decoder shown in figure 1.
Description of Preferred Embodiments
Figure 1 provides a block diagram of a general arrangement in which source data is compression encoded and data compression decoded and supplied to a sink for the data. In figure 1 a source of data 1 is arranged to feed source data symbols to a data compression encoder 2. Within the data compression encoder 2 the source data symbols are received by a pre-processor 4 via a connecting channel 6. Also forming part of the compression encoder 2 is a data compression encoding processor 8, to which pre-processed data symbols provided at the output of the pre-processor 4 are fed. The encoding processor 8 feeds compression encoded data corresponding to the source data to an output channel 10. The compression encoding algorithm performed by the encoding processor 8, if successful, has an effect of compressing the source data into compression encoded data having a considerably reduced volume of data. The compression encoded data is then fed to a channel or storage medium shown generally as a box 12. To illustrate the process of data compression decoding, figure 1 also shows the compression encoded data being recovered and communicated to an input of a data compression decoder 14, fed from the channel or storage medium 12 via a connecting channel 16. The data compression decoder comprises a compression decoding processor 18 connected to a post-processor 20. The post-processor 20 receives compression decoded data symbols from an output of the decoding processor 18 via a connecting channel 22. The post-processor operates to reverse the operation of the pre-processor 4 of the encoder 2 and to provide at an output 24 an estimate of the source data, which is fed to a sink 26.
The data compression encoding process illustrated by the block diagram shown in figure 1 could be applied to any form of source data. The compression encoded data, if successfully compressed, will have a considerably reduced volume, and therefore may be either communicated in a smaller bandwidth via the channel or storage medium 12, or stored in a considerably reduced volume as required, for example, for archive purposes. Although the example embodiment of the present invention will be illustrated with reference to encoding digital images, it will be appreciated that the invention finds application with other types of data.
The compression encoder 2 which appears in figure 1 is shown in more detail in figure 2, where parts also appearing in figure 1 have the same numerical designation.
For the example in which the source is representative of a colour digital image, the data compression encoder 2 must effectively encode three different source data streams, which are representative of the red, green and blue components of a colour image. Accordingly, the compression encoder 2 shown in figure 2 operates to pre-process and compression encode each of the three components of the colour image separately. As such, the pre-processor 4 is shown in figure 2 to have three data processors 30, 32, 34 which are arranged respectively to receive data symbols corresponding to the red, green and blue components of the colour image. Correspondingly, the compression encoding processor 8 is shown to include three further processors 36, 38, 40, each of which is arranged respectively to compression encode the pre-processed data symbols provided respectively at the output of the three data processors 30, 32, 34. For brevity, the operation of the compression encoder 2 will be explained with reference to one of the three components of the colour image only. However, it will be appreciated that the operation for the other two components is effected in a corresponding way.
The function and purpose of the pre-processor 30, which may also be described as a "modeller", is to transform the source data symbols received via the connecting channel 6 into a new stream of modelled data symbols having a lower entropy. As is known to those skilled in the art, the term entropy as applied to an information source is a measure of the relative amount of information provided by the symbols of that source. If the symbols of the data source 1 occur with a probability of pi, where i = 1 to N, then the entropy of the data source is calculated in accordance with equation (1).
H(p1, ..., pN) = −Σ(i=1 to N) pi · log2(pi)   (1)

In effect, the operation of the pre-processor is to model the data source to the effect of reducing the entropy of the data source, so that the data compression encoder following the pre-processor can encode the data with greater efficiency. This is because data compression encoding algorithms are able to increase the compression ratio for data sources having symbols which occur with a range of probabilities producing a concentrated 'peaky' distribution, rather than a flat distribution in which data symbols occur with a more equal probability. This will be further illustrated in subsequent paragraphs.
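Equation (1) can be evaluated directly from symbol frequency counts. The following sketch (illustrative, not from the patent) confirms the point made above: a 'peaky' distribution has lower entropy than a flat one, which is why the modelled symbols compress better:

```python
import math
from collections import Counter

def entropy(symbols):
    """Entropy H = -sum(p_i * log2(p_i)) of a symbol stream, per equation (1)."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Four equiprobable symbols give the maximal 2 bits/symbol;
# a peaky distribution over the same alphabet gives less.
flat = entropy([0, 1, 2, 3])
peaky = entropy([0, 0, 0, 0, 0, 0, 1, 2])
assert abs(flat - 2.0) < 1e-9
assert peaky < flat
```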
For the present example embodiment the data source is producing digital images, so the pre-processor is arranged to convert the symbols of each of the components of the colour image into modelled data symbols. Figure 3 provides an illustration of the operation of one of the data processors 30, 32, 34 of the pre-processor 4. As shown in figure 3, the pre-processor 4 is arranged to receive data symbols representative of a digital image 46. As shown in figure 3, a part of the image 48 falling within a part of the image 46 is shown in expanded form, as represented by lines 50, 52, by a group of pixels within a box 54. The box 54 is comprised of squares 56, each of which is representative of a pixel of the image 48. As shown in the box 54, a line 58 forms part of an object within the image 46. In this example this line 58 is part of a tree 60. As will be observed from the expanded form of the part of the image 48 shown in figure 3, most of the pixels within the part of the image are representative of the same relative magnitude and therefore the same luminance (or colour component) value, apart from those pixels which make up the line 58. It is this feature, that typical images have large areas which correspond to the same luminance pixel values, which is utilised by the pre-processor 4 to reduce the entropy of the source. One example of a pre-processing or modelling technique which is applied to the image 46 shown in figure 3 is illustrated in figure 4, where parts also appearing in figure 3 have the same reference numerals. In the present example embodiment the pre-processor 4 operates to generate the modelled data symbols by performing a differential pulse code modulation (DPCM) modelling process. DPCM modelling is a known modelling pre-process which is used, for example, in loss-less JPEG compression.
This modelling process utilises the feature of typical images that the value of a given pixel is normally closely related to the values of surrounding pixels.
As illustrated by the line 58, edges of objects give rise to discontinuities, but such edges are usually less common than uniform background areas. Consequently, it is possible to obtain a prediction of the value of a given pixel from the values of neighbouring pixels.
Consider the situation shown in figure 4, where the pre-processor progresses through the image, from a top-left pixel 60 to a bottom-right pixel 62, row by row.
The pre-processor 30 is considered to have processed pixels in positions 64, is about to process the pixel whose value is a at position 66, and has still to process pixels in positions 68. Since the values x, y and z are known, it is possible to obtain from them a prediction pa of the value of a. For example:
pa = x + y − z   (2)

or

pa = (x + y) / 2   (3)

or, a general linear predictor is given by

pa = (wx·x + wy·y + wz·z) / (wx + wy + wz)   (4)

for some weights wx, wy and wz, where wx + wy + wz ≠ 0.
Predictions are also constrained to lie within the range of the values of the pixels. Naturally, other pixels could be used to form the prediction. Indeed, a simple one-dimensional predictor can be formed by merely looking at a single previous pixel value. However, two-dimensional predictors are usually far superior to the one-dimensional version. Predictors in the form of equation (4) are of the simplest form for a two-dimensional predictor.
There are special cases to consider:
• For the pixel at the very top left corner of the image, no prediction can be made, as this is the first pixel to be processed. In such a case, the prediction value is taken to be 0 (as will be appreciated, other pixel values may be used).
• For pixels on the very top row, only a one-dimensional predictor is possible; for example, pa = x would suffice.
• For pixels on the very left column, there are no pixels further to the left, so a prediction such as pa = y is often used.
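The predictor of equation (4), together with the three special cases above, can be sketched as follows. This is an illustration, not the patent's implementation: the integer division and the 0–255 clamp are assumptions (the patent states only that predictions are constrained to the pixel value range), and x, y, z are taken as the left, above and diagonal neighbours respectively:

```python
def predict(img, r, c, wx=1, wy=1, wz=-1):
    """General linear predictor of equation (4) with the special cases:
    top-left pixel -> 0, top row -> p_a = x, left column -> p_a = y.
    Assumes wx + wy + wz != 0 and 8-bit pixel values.
    """
    if r == 0 and c == 0:
        return 0                      # first pixel: no prediction possible
    if r == 0:
        return img[r][c - 1]          # top row: one-dimensional, p_a = x
    if c == 0:
        return img[r - 1][c]          # left column: p_a = y
    x = img[r][c - 1]                 # left neighbour
    y = img[r - 1][c]                 # neighbour above
    z = img[r - 1][c - 1]             # diagonal neighbour
    p = (wx * x + wy * y + wz * z) // (wx + wy + wz)
    return max(0, min(255, p))        # constrain to the pixel value range

img = [[10, 10, 10],
       [10, 10, 50]]
assert predict(img, 0, 0) == 0
assert predict(img, 1, 2) == 10       # x + y - z = 10 + 10 - 10
```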
Once a prediction has been formed, the prediction error, ea, can be calculated from equation (5).

ea = a − pa   (5)

It is then this error ea which is used to form the modelled data symbols as output by the modelling pre-processor 30 and input to the compression encoder 36. If good predictions are made by the pre-processor 30, then the frequency counts for values of ea close to 0 will be very large. This assists the encoder, as a data stream having only a few symbols with high frequency lends itself well to compression, as opposed to all the symbols having a similar probability. This is illustrated graphically in figure 5. In figure 5 the frequency distribution of the pixel values of a test luminance image for the data source is represented by a line 70. Figure 6 provides a representation of the frequency distribution of the prediction error values 72 after DPCM modelling in which the weighting factors wx = wy = 1 and wz = −1.
As can be seen, the pixel value distribution in figure 5 is very random and unpredictable, whereas the DPCM prediction error distribution in figure 6 is centred around 0 and decays rapidly in either direction. However, the number of permissible symbols has increased. If there are N possible pixel values (ranging from 0 to N − 1), then this form of DPCM modelling increases the alphabet size to 2N − 1 possible values. For the test image this increases from 256 values to 511 values. This can be avoided, though, by observing that, for a given prediction pa, the prediction error can only take N possible values. Therefore, the prediction error can be taken to a modulus of N, as expressed by equation (6).

ea = (a − pa) mod N   (6)

This means that performing DPCM modelling does not require an increase in the alphabet size. Figure 7 shows the frequency distribution of the luminance test image when modelled with the same DPCM modeller as used for figure 6, but for which the prediction errors are taken modulo 256, which is the alphabet size of the source.
As will be explained later, the post-processor 20 operates to perform a reverse modelling process. The reverse modelling is effected by generating, from a prediction pa of the pixel value a, the pixel value itself, by reversing the operation performed by the pre-processor. The reverse modelling is performed by a post-processor which receives the value v = (a − pa) mod N. The pixel value can therefore be obtained from

a = (v + pa) mod N   (7)

It has been discovered that no one choice of weighting factors is appropriate for every image. This is because there is a high degree to which an optimum choice of weights is image-dependent. To this end, DPCM modelling is made adaptive by allowing the pre-processing to change the set of weighting values as it progresses through an image. This should make the weights better suited not only to the image as a whole, but also to individual areas within an image. The reverse modelling performed by the post-processor is also able to keep track with the pre-processor modelling, so that the transmission of additional information is not required. This is known as the Variable-Weight (VW) DPCM modelling algorithm, and is described in the following paragraphs; it essentially uses a selection of neighbouring pixels to generate a set of weights which is, in some sense, best for that selection of pixels. This generated set of weights is then used for the required prediction.
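Equations (6) and (7) form a reversible pair, which a short sketch can confirm (N = 256 is assumed here for 8-bit pixel values; the function names are illustrative):

```python
N = 256  # alphabet size for 8-bit pixel values

def model(a, p):
    """Pre-processor: modelled symbol e = (a - p) mod N, equation (6)."""
    return (a - p) % N

def unmodel(v, p):
    """Post-processor: recover pixel a = (v + p) mod N, equation (7)."""
    return (v + p) % N

# Round trip: the mod-N error stays in 0..N-1 yet the process is reversible,
# because for a fixed prediction p the error can only take N distinct values.
for a, p in [(7, 5), (5, 7), (0, 255), (255, 0)]:
    e = model(a, p)
    assert 0 <= e < N
    assert unmodel(e, p) == a
```

This is what lets the pre-processor avoid growing the alphabet from N to 2N − 1 values.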
Suppose that the pixels in the un-shaded positions 76 in figure 8 are known to the pre-processor 4 and post-processor 20, so that they can be made use of, and that a prediction of the value y0 is to be made. For any given set of weights, wx, wy and wz, a prediction of y1 can be made from pixel values y2, x1 and x2. Similarly, a prediction of y2 can be made from pixel values y3, x2 and x3. VW-DPCM modelling finds a set of weights which makes the errors in these predictions minimal. It then uses this generated set to produce a prediction of y0 from y1, x0 and x1. The aim is that, due to local statistics, this set of weights should produce a very good prediction. As a matter of terminology, the above situation is defined to have a window-size of two, since two predictions (for values y1 and y2) can be made by both the pre-processor 4 and the post-processor 20, and these two predictions are involved in generating the set of weights.
In general, suppose that we have a window-size of M, with pixels as in figure 20 9. There are M predictions which can be made (for values yj yjj). For a given set of weights, w,, wy and w, we calculate a prediction metric from the sum of squared prediction errors provided by equation (8).
f(wx, wy, wz) = Σ_{i=1}^{M} [ (wx·y_{i+1} + wy·x_i + wz·x_{i+1}) / (wx + wy + wz) − y_i ]²    (8)

which becomes:
f(wx, wy, wz) = Σ_{i=1}^{M} [ (y_{i+1} − y_i)·wx + (x_i − y_i)·wy + (x_{i+1} − y_i)·wz ]² / (wx + wy + wz)²    (9)

Expanding the brackets, we find that:
f(wx, wy, wz) = [ A·wx² + B·wy² + C·wz² + 2(D·wx·wy + E·wx·wz + F·wy·wz) ] / (wx + wy + wz)²    (10)

where

A = Σ_{i=1}^{M} (y_{i+1} − y_i)²,  B = Σ_{i=1}^{M} (x_i − y_i)²,
C = Σ_{i=1}^{M} (x_{i+1} − y_i)²,  D = Σ_{i=1}^{M} (y_{i+1} − y_i)(x_i − y_i),
E = Σ_{i=1}^{M} (y_{i+1} − y_i)(x_{i+1} − y_i)  and  F = Σ_{i=1}^{M} (x_i − y_i)(x_{i+1} − y_i)    (11)

It is the prediction metric formed from equation (10) which is reduced to a minimum by adapting the weighting factors wx, wy, wz. Once the prediction metric has been minimised, the modelled data symbols are formed for these adapted weighting factors from the prediction error. The prediction error is formed from a difference between the predicted data symbol and the original data symbol according to equation (6).
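The equivalence between the direct sum of squared errors and the coefficient form can be checked numerically with the sketch below. All names and the sample data are illustrative; the two functions follow equations (8) and (10)–(11) respectively and should agree to rounding error.

```python
def metric_direct(wx, wy, wz, xs, ys):
    """Direct sum of squared prediction errors, per equation (8)."""
    M = len(ys) - 1
    s = wx + wy + wz
    return sum(((wx * ys[i + 1] + wy * xs[i] + wz * xs[i + 1]) / s
                - ys[i]) ** 2 for i in range(M))

def metric_coeffs(wx, wy, wz, xs, ys):
    """Same metric via the six coefficients A..F of equation (11)."""
    M = len(ys) - 1
    a = [ys[i + 1] - ys[i] for i in range(M)]   # y_{i+1} - y_i
    b = [xs[i] - ys[i] for i in range(M)]       # x_i - y_i
    c = [xs[i + 1] - ys[i] for i in range(M)]   # x_{i+1} - y_i
    A = sum(t * t for t in a)
    B = sum(t * t for t in b)
    C = sum(t * t for t in c)
    D = sum(p * q for p, q in zip(a, b))
    E = sum(p * q for p, q in zip(a, c))
    F = sum(p * q for p, q in zip(b, c))
    num = (A * wx ** 2 + B * wy ** 2 + C * wz ** 2
           + 2 * (D * wx * wy + E * wx * wz + F * wy * wz))
    return num / (wx + wy + wz) ** 2

xs = [3, 5, 2, 8, 7]   # hypothetical row pixels
ys = [4, 6, 1, 9, 5]   # hypothetical column pixels
d = metric_direct(2, 1, 1, xs, ys)
c = metric_coeffs(2, 1, 1, xs, ys)
```

The coefficient form is what makes the incremental update of the next paragraph possible.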
From the above equations it can be observed that, to calculate the sum of squared errors, the pre-processor only needs to know the values of the coefficients A ... F given in equation (11). These coefficients can be maintained as the pre-processor progresses through the image, from pixel to pixel. When the pre-processor shifts from one pixel to an adjacent pixel, only one summation term according to the above equations is added, and one summation term is subtracted, to produce the new coefficients. This means that large summations do not need to be performed, and that the complexity and speed of the algorithm are independent of window-size. At the beginning of each row, these coefficients are reset to 0, and for the first M + 1 pixels it is not necessary to subtract terms from the coefficients.
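This sliding-window update can be sketched as follows: when the window advances by one pixel, each coefficient gains the entering term and loses the leaving term, so the cost per pixel is independent of M. Function names and the sample data are illustrative assumptions.

```python
def terms(xs, ys, i):
    """The (a, b, c) differences for index i, per equation (11)."""
    return (ys[i + 1] - ys[i], xs[i] - ys[i], xs[i + 1] - ys[i])

def coeffs_full(xs, ys, start, M):
    """Direct O(M) computation of [A, B, C, D, E, F] over a window."""
    A = B = C = D = E = F = 0
    for i in range(start, start + M):
        a, b, c = terms(xs, ys, i)
        A += a * a; B += b * b; C += c * c
        D += a * b; E += a * c; F += b * c
    return [A, B, C, D, E, F]

def coeffs_step(coeffs, xs, ys, start, M):
    """O(1) update: add the term entering the window at start + M,
    subtract the term leaving it at start."""
    a, b, c = terms(xs, ys, start + M)    # entering term
    a0, b0, c0 = terms(xs, ys, start)     # leaving term
    add = [a * a, b * b, c * c, a * b, a * c, b * c]
    sub = [a0 * a0, b0 * b0, c0 * c0, a0 * b0, a0 * c0, b0 * c0]
    return [v + p - q for v, p, q in zip(coeffs, add, sub)]

xs = [3, 5, 2, 8, 7, 6, 4, 9]   # hypothetical row pixels
ys = [4, 6, 1, 9, 5, 7, 3, 8]   # hypothetical column pixels
c0 = coeffs_full(xs, ys, 0, 3)
c1 = coeffs_step(c0, xs, ys, 0, 3)   # should equal the window at start=1
```
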
Having explained how the sum of squared errors is calculated, the following paragraphs provide an explanation of how the values of wx, wy and wz which minimise equation (8) are generated. In the example embodiment of the invention,
there are three weighting factors, and so the value of these weighting factors will be known as the best-weight-triplet. It should be noted that integer precision was used when calculating these summations. In doing so, there can be more than one set of weights which minimises the summations, and hence a choice needs to be made. From the experiments performed, a good method of choosing between these sets of best-weight-triplets is first to find the one(s) of minimal variance and, should this yield more than one triplet, to choose the one(s) of minimal mean. If there is still more than one triplet remaining, choose arbitrarily. This produces better results than if (i) no selection criteria are used, or (ii) selection is performed by mean first and then by variance.
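The tie-break just described can be sketched as a lexicographic minimum over (variance, mean), choosing the first candidate if a tie still remains. The function name and the candidate triplets are illustrative.

```python
def choose_triplet(candidates):
    """Among equally good weight triplets, prefer minimal variance,
    then minimal mean; Python's min() keeps the first on a full tie,
    standing in for the 'choose arbitrarily' step."""
    def variance(t):
        m = sum(t) / 3.0
        return sum((w - m) ** 2 for w in t) / 3.0

    def mean(t):
        return sum(t) / 3.0

    return min(candidates, key=lambda t: (variance(t), mean(t)))

# (4, 0, 2) has non-zero variance; (2, 2, 2) and (1, 1, 1) tie on
# variance 0, so the smaller mean wins.
best = choose_triplet([(4, 0, 2), (2, 2, 2), (1, 1, 1)])
```
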
In preferred embodiments, an exhaustive search is performed in order to minimise the prediction metric according to equation (8). However, to limit this search, certain restrictions are placed on the weights wx, wy and wz, which are explained as four assumptions. The first assumption is that the weights take integer values only. This is not too restrictive, due to the observation represented by equation (12):

f(kα, kβ, kγ) = f(α, β, γ)    (12)

for any values k ≠ 0, α, β and γ, and hence, for example, f(0.2, 1.3, −0.11) = f(20, 130, −11).
The second assumption is that the ranges over which wx, wy and wz can vary are limited to those represented by equation (13):

wx ∈ [x_min, x_max],  wy ∈ [y_min, y_max],  wz ∈ [z_min, z_max]    (13)

These ranges can be changed in dependence upon the amount of time available to find the best-weight-triplet from every integer triplet of weights which can be considered in order to minimise the prediction metric according to equation (8).
However, if processing time is important, in preferred embodiments it is assumed that both wx and wy are positive, since these are the weights for the pixels which lie closest to the current pixel. Experiments show that restricting wz to non-negative values is detrimental to the resulting compression. The size of these ranges is also important if the search is to be performed over the entire range of values, bearing in mind that simply doubling the length of each range yields eight times as many triplets to consider. A third assumption is that, as the pre-processor 4 progresses through the image, it generates a sequence of best-weight-triplets, each of which minimises equation (8) at a given pixel. If this sequence is (w¹x, w¹y, w¹z), (w²x, w²y, w²z), ..., then an assumption is made that wⁿx does not vary much from wⁿ⁻¹x; the same assumption is made for wⁿy and wⁿz. Consequently, the number of calculations is reduced by assuming that the ranges over which wx, wy and wz can vary are restricted as expressed by equation (14):
wx ∈ [x_min, x_max] ∩ [w_xprev − 1, w_xprev + 1]
wy ∈ [y_min, y_max] ∩ [w_yprev − 1, w_yprev + 1]    (14)
wz ∈ [z_min, z_max] ∩ [w_zprev − 1, w_zprev + 1]

where w_xprev, w_yprev and w_zprev are the most recent values of wx, wy and wz respectively.
This reduces the search to at most twenty-seven triplet weight values, but it should be noted that, as the ranges become more and more restricted, the final minimum prediction metric value produced by equation (8) increases. Note also that these intersecting ranges do not have to be simply ±1 from the previous values.
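The restricted candidate set of equation (14) can be sketched as below: each weight varies by at most ±1 around its previous value, clipped to its allowed range, giving at most 3 × 3 × 3 = 27 triplets. Names and the example ranges are illustrative.

```python
from itertools import product

def candidate_triplets(prev, ranges):
    """prev = (w_xprev, w_yprev, w_zprev);
    ranges = ((x_min, x_max), (y_min, y_max), (z_min, z_max)).
    Returns every triplet allowed by equation (14)."""
    axes = []
    for p, (lo, hi) in zip(prev, ranges):
        # intersect [p - 1, p + 1] with the weight's permitted range
        axes.append([w for w in (p - 1, p, p + 1) if lo <= w <= hi])
    return list(product(*axes))

# Interior previous triplet: full 27 candidates.
cands = candidate_triplets((1, 1, 0), ((0, 4), (0, 4), (-2, 2)))
# Previous wx at the edge of its range: that axis is clipped to 2 values.
clipped = candidate_triplets((0, 1, 2), ((0, 4), (0, 4), (-2, 2)))
```
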
A fourth assumption made is that the pixels which are weighted by wx and wy lie closer to the current pixel than the pixel which is weighted by wz. For this reason the search range can be restricted as expressed in equation (15).
|wz| ≤ |wx|  and  |wz| ≤ |wy|    (15)

Finally, for given values of wy and wz, equation (10) becomes

g(wx) = (A·wx² + P·wx + Q) / (wx + R)²    (16)

for some P, Q and R.
Since R = wy + wz, under the assumption that wy is non-negative and also using equation (15), it is assumed that R ≥ 0. If we assume that wx takes positive values, then the singularity at wx = −R does not prevent a solution from being found for a value of wx which is a minimum. Hence, during an exhaustive search in which wx is being varied and wy and wz are fixed, if the calculation of equation (8) starts to become larger, then the search is stopped. Using this fact generates the best value of wx when values of wy and wz are specified. Since the best weights at the start of one row will not necessarily be related to the best weights at the end of the previous row, the weights at the end of a row are not carried over to the start of the next row. Since some form of initialisation must therefore take place at the beginning of each row, a small number of best-weight-triplets from the beginning of the previous row can be averaged, and this average is taken as the new starting triplet. The row-start-window-size is taken to be the number of best-weight-triplets used to create this average triplet.
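The early-stopping scan over wx described above can be sketched as follows: with wy and wz fixed (so R = wy + wz ≥ 0), g(wx) is evaluated for increasing positive integer wx and the scan stops as soon as the metric rises. The coefficient values P, Q, R below are hypothetical, chosen only to give g an interior minimum.

```python
def g(wx, A, P, Q, R):
    """Metric as a function of wx alone, per equation (16)."""
    return (A * wx * wx + P * wx + Q) / float((wx + R) ** 2)

def best_wx(A, P, Q, R, wx_max):
    """Scan wx = 1, 2, ... and stop once the metric starts to rise."""
    best, best_val = 1, g(1, A, P, Q, R)
    for wx in range(2, wx_max + 1):
        val = g(wx, A, P, Q, R)
        if val > best_val:
            break            # metric started to increase: stop searching
        best, best_val = wx, val
    return best

# Illustrative coefficients: g(1)=1.25, g(2)~0.444, g(3)=0.3125,
# g(4)=0.32, so the scan stops at wx=4 and returns 3.
w = best_wx(A=1, P=-4, Q=8, R=1, wx_max=10)
```
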
The complexity of the VW-DPCM algorithm is independent of the window-size. Therefore it is appropriate to discover what values of window-size give the best compression results. Intuitively, the window-size should not be too large, as otherwise it becomes difficult to truly minimise the prediction metric according to equation (8), since different regions with different local statistics will form part of the equation. On the other hand, the window-size should not be too small, because anomalies, such as edges, attain too great a significance and produce best-weight-triplets which are not truly best for the more usual regions. Experiments were performed using a variety of window-sizes, giving the greatest compression for the VW-DPCM algorithm. It has been discovered from these experiments that the results are image-dependent. In addition, the results also depend on the range of values over which wx, wy and wz are allowed to vary. As such, in other embodiments, the pre-processor is arranged to make VW-DPCM modelling further adaptive, by arranging for the window-size to vary in order to reach an optimum value for a given image. This is achieved by providing three separate VW-DPCM pre-processors per colour component, and using a different and sufficiently spaced window-size in each pre-processor. This provides a facility for optimising the window-size, which is optimised by evaluating the output bit rates achieved by the encoder connected to each of the three pre-processors, and selecting one of the three pre-processors in dependence upon which of the three output bit rates is lowest. After a certain amount of data has been processed, the output bit-rates are compared and an optimised set of window-sizes chosen.
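The window-size selection just described can be sketched as follows: one residual stream per candidate window-size, compress each, and keep the window-size whose encoder output is smallest. Here zlib merely stands in for the patent's entropy encoder, and the residual streams are made-up examples; all names are assumptions.

```python
import zlib

def pick_window_size(residual_streams):
    """residual_streams: dict mapping window-size -> residual bytes.
    Returns the window-size whose compressed output is smallest,
    mirroring the 'lowest output bit rate' selection."""
    sizes = {M: len(zlib.compress(data))
             for M, data in residual_streams.items()}
    return min(sizes, key=sizes.get)

streams = {
    2: bytes([0, 1, 0, 255, 0, 1] * 50),       # small, repetitive residuals
    8: bytes(range(256)) + bytes(range(256)),  # spread-out residuals
}
best_M = pick_window_size(streams)
```

In the patent's arrangement the comparison would be made after a certain amount of data has passed through all three pre-processor/encoder chains, rather than on whole precomputed streams.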
For a fixed encoder, the VW-DPCM modelling technique can be compared against a more standard DPCM method. A graphical representation of the results is presented in figure 10. Figure 11 summarises the comparisons of VW-DPCM modelling against the DPCM modelling technique where the weighting values are fixed.
The improvement is produced by the VW-DPCM modelling process, which generates a larger number of predictions which have a small error and a smaller number of predictions which have a large error.
The variable-weight DPCM modelling process is summarised in the form of a flow diagram in figures 12 and 13. In figure 12, the pixel values for each component of the image are analysed in process step 100 and separated into each of the three components of the colour image. The components are processed separately, and so the following steps are performed on each of the components. At process step 102, for an original pixel, a prediction metric is formed from equations (9) and (10), and the weighting factors are each updated by estimating a best-weight-triplet for the pixel values processed so far, which minimises the prediction metric. In process step 104, a prediction of the pixel is generated from the window of preceding pixels weighted respectively by the best-weight-triplet weighting factors. At process step 106, the modelled data symbols are determined from the prediction error, calculated from the difference between the pixel prediction generated at process step 104, with the weighting factors optimised at step 102, and the original pixel value. Processing then proceeds, as indicated by connecting arrow 108, to the beginning of step 102, so that the next pixel is analysed and a modelled symbol generated.
Process step 102, in which the best-weight-triplet is estimated, is summarised by the flow diagram shown in figure 13. At process step 110, which connects to process step 100, the coefficient values which are used in generating the squared prediction error are updated by adding the new summation term to each coefficient A to F and subtracting a summation term, as represented in equations (11). At process step 112, the new squared prediction error is calculated in accordance with equation (10). Then, at process step 114, the weight wx within the triplet is adjusted within a search range to a first value. The search range is set by the assumptions on the range within which wx will vary, as explained above. The squared prediction error is then recalculated at decision step 116 and compared with the previous prediction metric to see whether it has decreased. If the squared prediction error has decreased, then decision path 118 is followed and process step 114 is repeated to adjust wx to a new value. If, on the other hand, the squared prediction error does not decrease, then the process proceeds to process step 120. If the preceding steps represent the first estimate of the three weighting factors, then processing proceeds to step 122, where the current minimum metric calculated at step 114 is stored as the minimum global metric value, and the current weight triplet wx, wy and wz is stored as the best-weight-triplet. If the preceding steps are not the first iteration, then processing proceeds to decision step 124, where the current minimum metric evaluated at the end of decision step 116 is compared with the global metric value. If the current minimum metric is less than the global metric, then processing again passes to step 122, where the current weight triplet is stored as the best-weight-triplet wx, wy and wz, and the global metric value is assigned the value of the current minimum metric.
Processing from step 122, and from decision step 124, then proceeds to step 126, where new values of the other two weighting factors wy and wz within the predefined search range are selected, to search for a new minimum metric by varying wx within the corresponding predefined search range provided in steps 114 and 116. The weighting factors wx, wy and wz which result in the minimum global metric are representative of the best-weight-triplet, which is fed to process step 104.
As will be appreciated, the data compression decoder 14 which is shown in figure 1 will operate to perform the reverse of the data compression encoding processing, and the post-processor 20 will operate to perform the reverse Variable-Weight DPCM modelling. The data compression decoder 14 is shown in more detail in figure 14, where parts also appearing in figure 1 have the same numerical designations. As with the data compression encoder, the decoder divides the colour image signal into each of the three components red, green and blue. Each of the encoded parts of the image signal is fed respectively to a data compression decoding processor 130, 132, 134, which form part of the decoding processor 18. The decoder processor 130, for example, operates to effect the reverse operation to the compression encoding algorithm, and therefore generates at an output channel 136 an estimate of the modelled data symbols as were present at the input of the data compression encoder.
Correspondingly, the data compression decoding processors for the green and blue components generate corresponding modelled data symbols at the outputs 138, 140. The estimates of the modelled data symbols are then received at corresponding post-processing units 142, 144, 146, forming part of the post-processor 20. Each of the three post-processing units 142, 144, 146 operates to effect a reverse modelling process of the variable-weight DPCM modelling process, according to equation (7). Accordingly, the data symbols are derived from the modelled data symbols, which represent the prediction error, to which the prediction for the data symbol is added. The prediction is generated from the weighted sum of the already decoded data symbols. As with the pre-processing units 30, 32, 34, each of the post-processing units generates predictions of the data symbols from previous estimated data symbols, weighted respectively by a weighting factor which is adapted in accordance with previous decisions. Finally, therefore, the estimates of the data symbols for the red, green and blue components are output on the connecting channel 24.
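The lockstep between pre-processor and post-processor can be sketched with the simplified row codec below: both sides form the same prediction from already-decoded pixels, so only the mod-N residuals need to be transmitted. A fixed weight triplet and clamped row edges are used for brevity; all names and the neighbour choice are illustrative assumptions, not the patent's exact pixel geometry.

```python
N = 256
WX, WY, WZ = 1, 1, 1  # hypothetical fixed weight triplet

def predict(prev_same_row, above, above_next):
    """Integer weighted prediction from three decoded neighbours."""
    return (WX * prev_same_row + WY * above + WZ * above_next) // (WX + WY + WZ)

def encode_row(row, above):
    """Pre-processor side: emit mod-N residuals for one row, given the
    (already decoded) row above."""
    residuals, decoded = [], []
    for j, a in enumerate(row):
        p = predict(decoded[j - 1] if j else above[0],
                    above[j], above[min(j + 1, len(above) - 1)])
        residuals.append((a - p) % N)
        decoded.append(a)  # lossless: encoder tracks the decoder's state
    return residuals

def decode_row(residuals, above):
    """Post-processor side: same predictions, inverted mapping (eq. (7))."""
    decoded = []
    for j, v in enumerate(residuals):
        p = predict(decoded[j - 1] if j else above[0],
                    above[j], above[min(j + 1, len(above) - 1)])
        decoded.append((v + p) % N)
    return decoded

above = [10, 12, 14, 16]
row = [11, 13, 15, 17]
out = decode_row(encode_row(row, above), above)
```

Because the prediction state is rebuilt identically on both sides, no weights or side information travel with the residuals.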
As will be appreciated by those skilled in the art, various modifications may be made to the embodiments hereinbefore described without departing from the scope of the present invention. In particular, it will be appreciated that the present invention finds application to any form of data, and that data representing images is merely an example.

Claims (1)

  1. A data processor which is arranged in operation to generate modelled data from source data, said data processor comprising - a pre-processor which is arranged to receive source data symbols and to generate modelled data symbols representing said source data symbols by - generating a prediction metric for each original source data symbol from a comparison between at least one preceding source data symbol and at least one other preceding source data symbol weighted by a corresponding weighting factor, - adapting said corresponding weighting factor to the effect of reducing said prediction metric, and - forming each of said modelled data symbols from a difference between said each original data symbol and said at least one other preceding data symbol weighted by said adapted corresponding weighting factor.
    2. A data processor as claimed in Claim 1, wherein said pre-processor adapts said corresponding weighting factor values by selecting, from a range of weighting factor values, a value for said corresponding weighting factor which produces a minimum value of said prediction metric formed from a calculated sum of squared error predictions.
    3. A data processor as claimed in Claim 1 or 2, wherein said at least one other preceding data symbol is a plurality of other preceding data symbols, said corresponding weighting factor being a plurality of corresponding weighting factors, each of which weighting factors is adapted independently by selecting from a range of weighting factor values.
    4. A data processor as claimed in any preceding Claim, wherein said data source is representative of images, said data symbols being representative of row and column components of at least one part of an image.
    5. A data processor as claimed in Claim 4 when dependent on Claim 3, wherein said plurality of other preceding source data symbols comprises at least one data symbol in the same row and at least one data symbol in the same column as said original data symbol, each of which is weighted by a corresponding weighting factor, each of said weighting factors being independently adapted.
    6. A data processor as claimed in any preceding Claim, wherein said data symbols are pixel values of said at least part of the image.
    7. A data processor as claimed in any of Claims 4 to 6, wherein said weighting factors for generating the prediction of a data symbol at the start of a new row in said image are generated from an average of a plurality of weighting factors from the preceding row.
    8. A data processor as claimed in any of Claims 2 to 7, wherein the number of said other preceding symbols which are weighted by a respective weighting factor is varied.
    9. A data processor as claimed in any of Claims 2 to 8, wherein said weighting factors are made more important for said other preceding symbols closer to the current data symbol.
    10. A data processor as claimed in any preceding Claim, comprising - a compression encoding processor coupled to said pre-processor, which is arranged in operation to generate compression encoded data by representing said modelled data symbols as compression encoded symbols.
    11. A data processor which is arranged in operation to estimate source data from modelled data symbols generated by the data processor according to any of Claims 1 to 9, said data processor comprising - a post-processor arranged in operation to generate an estimate of said source data symbols from said modelled data symbols, wherein said post-processor is arranged in operation - to generate a prediction metric for each of said data symbol estimates from a comparison between at least one preceding estimated data symbol and at least one other preceding data symbol estimate weighted by a corresponding weighting factor, - to adapt said at least one corresponding weighting factor to the effect of reducing said prediction metric, and - to derive each of said data symbol estimates from said at least one other preceding estimated data symbol weighted by said adapted corresponding weighting factor in combination with the corresponding modelled data symbol.
    12. A data processor which is arranged in operation to generate an estimate of source data symbols from data compression encoded data symbols generated by the data processor claimed in Claim 10, said data processor comprising - a data compression decoding processor arranged to receive said compression encoded data symbols, and to generate, in accordance with the compression encoding algorithm used by the encoder, modelled data symbols, and - a post-processor coupled to the data compression decoding processor and arranged to generate an estimate of said source data symbols from the modelled data symbols generated by the decoding processor, wherein said post-processor is arranged in operation - to generate a prediction metric for each of said data symbol estimates from a comparison between at least one preceding estimated data symbol and at least one other preceding data symbol estimate weighted by a corresponding weighting factor, - to adapt said at least one corresponding weighting factor to the effect of reducing said prediction metric, and - to derive each of said data symbol estimates from said at least one other preceding estimated data symbol weighted by said adapted corresponding weighting factor in combination with the corresponding modelled data symbol.
    13. A method of processing source data, said method comprising the steps of - generating a prediction metric for each original source data symbol from a comparison between at least one preceding source data symbol and at least one other preceding source data symbol weighted by a corresponding weighting factor, - adapting said corresponding weighting factor to the effect of reducing said prediction metric, and - forming each of said modelled data symbols from a difference between said each original data symbol and said at least one other preceding data symbol weighted by said adapted corresponding weighting factor.
    14. A method of processing source data as claimed in Claim 13, wherein the step of adapting said corresponding weighting factor comprises the step of - selecting, from a range of weighting factor values, a value for said corresponding weighting factor which produces a minimum value of said prediction metric formed from a calculated sum of squared error predictions.
    15. A method of processing source data as claimed in Claim 13 or 14, wherein said at least one other preceding data symbol is a plurality of other preceding data symbols, said corresponding weighting factor being a plurality of corresponding weighting factors, each of which weighting factors is adapted independently by selecting from a range of weighting factor values.
    16. A method of processing source data as claimed in any of Claims 13 to 15, wherein said data source is representative of images, said data symbols being representative of row and column components of at least one part of an image.
    17. A method of processing source data as claimed in Claim 16 when dependent on Claim 15, wherein said plurality of other preceding source data symbols comprises at least one data symbol in the same row and at least one data symbol in the same column, each of which is weighted by a corresponding weighting factor, each of said weighting factors being independently adapted.
    18. A method of processing source data as claimed in Claim 16 or 17, wherein said data symbols are pixel values of said at least part of the image.
    19. A method of processing source data as claimed in Claim 16, 17 or 18, wherein the step of adapting said weighting factors comprises the step of - adapting the weighting factor for generating the prediction of a data symbol at the start of a new row in said image from an average of a plurality of weighting factors for the preceding row.
    20. A method of processing source data as claimed in any of Claims 15 to 19, wherein the step of generating a prediction of each of said source data symbols comprises the step of - varying the number of said other preceding data symbols which are weighted by a respective weighting factor.
    21. A method of processing source data as claimed in any of Claims 15 to 20, wherein the step of generating a prediction of each of said source data symbols comprises the step of - making said weighting factors more important for said other preceding symbols closer to the current data symbol than for those other preceding data symbols remote from the current data symbol.
    22. A method of processing source data as claimed in any of Claims 13 to 21, comprising the step of - compression encoding said modelled data symbols to generate compression encoded symbols.
    23. A method of estimating source data from modelled data symbols generated by the data processing method according to any of Claims 13 to 22, said method comprising the steps of - generating a prediction metric for each of said data symbol estimates from a comparison between at least one preceding estimated data symbol and at least one other preceding data symbol estimate weighted by a corresponding weighting factor, - adapting said at least one corresponding weighting factor to the effect of reducing said prediction metric, and - deriving each of said data symbol estimates from said at least one other preceding estimated data symbol weighted by said adapted corresponding weighting factor in combination with the corresponding modelled data symbol.
    24. A method of estimating source data from compression encoded data representative of the source data generated in accordance with the data processing method according to Claim 22, said method comprising the steps of - decoding said encoded data symbols in accordance with the compression encoding algorithm used by the encoder to generate modelled data symbols, and - post-processing said modelled data symbols to generate an estimate of the source data symbols from the modelled data symbols, the step of post-processing comprising the steps of - generating a prediction metric for each of said data symbol estimates from a comparison between at least one preceding estimated data symbol and at least one other preceding data symbol estimate weighted by a corresponding weighting factor, - adapting said at least one corresponding weighting factor to the effect of reducing said prediction metric, and - deriving each of said data symbol estimates from said at least one other preceding data symbol weighted by said adapted corresponding weighting factor in combination with the corresponding modelled data symbol.
    25. A computer program providing computer executable instructions, which when loaded onto a data processor configures the data processor to operate as the data processor claimed in any of Claims 1 to 12.
    26. A computer program providing computer executable instructions, which when loaded onto a data processor causes the data processor to perform the method according to any of Claims 13 to 24.
    27. A computer program product having a computer readable medium recorded thereon information signals representative of the computer program claimed in Claim 25 or 26.
    28. A signal representative of data which has been processed in accordance with the compression encoding method of Claims 13 to 22, or the compression encoder according to any of Claims 1 to 10.
    29. A data processor substantially as hereinbefore described with reference to the accompanying drawings.
    30. A method of processing substantially as hereinbefore described with reference to the accompanying drawings.
GB0014890A 2000-06-16 2000-06-16 Adaptive pre/post processing of data for compression/decompression Withdrawn GB2363484A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0014890A GB2363484A (en) 2000-06-16 2000-06-16 Adaptive pre/post processing of data for compression/decompression

Publications (2)

Publication Number Publication Date
GB0014890D0 GB0014890D0 (en) 2000-08-09
GB2363484A true GB2363484A (en) 2001-12-19

Family

ID=9893891

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0014890A Withdrawn GB2363484A (en) 2000-06-16 2000-06-16 Adaptive pre/post processing of data for compression/decompression

Country Status (1)

Country Link
GB (1) GB2363484A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4562468A (en) * 1982-05-14 1985-12-31 Nec Corporation Adaptive predictive coding apparatus of television signal
EP0185095A1 (en) * 1984-05-28 1986-06-25 Sony Corporation Digital signal transmission device
GB2214750A (en) * 1988-01-08 1989-09-06 British Broadcasting Corp DPCM picture coding
JPH1068023A (en) * 1996-08-16 1998-03-10 Caterpillar Inc Heat treatment of bush and device therefor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Adaptive Filtering", Drumright T, 1998, www.ee.calpoly.edu/~fdepiero/curr/dsp/adaptive/adapt.html *
"Adaptive PCM Coding", 4/9/94, http://pino.dhs.org:8080/~jose/cv/..odegen_masters/www/masters_13.html *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966571A (en) * 2020-08-12 2020-11-20 Chongqing University of Posts and Telecommunications Time estimation cooperative processing method based on ARM-FPGA coprocessor heterogeneous platform
CN111966571B (en) * 2020-08-12 2023-05-12 Chongqing University of Posts and Telecommunications Time estimation cooperative processing method based on ARM-FPGA coprocessor heterogeneous platform


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)