WO2013068732A1 - Context adaptive data encoding


Info

Publication number
WO2013068732A1
WO2013068732A1 (application PCT/GB2012/052759)
Authority
WO
WIPO (PCT)
Prior art keywords
range
sub
code values
data bit
data
Prior art date
Application number
PCT/GB2012/052759
Other languages
French (fr)
Inventor
James Alexander GAMEI
Karl SHARMAN
Paul James SILCOCK
Original Assignee
Sony Corporation
Sony Europe Limited
Priority date
Filing date
Publication date
Application filed by Sony Corporation and Sony Europe Limited
Publication of WO2013068732A1

Links

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/40Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • H03M7/4006Conversion to or from arithmetic code
    • H03M7/4012Binary arithmetic codes
    • H03M7/4018Context adapative binary arithmetic codes [CABAC]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/192Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive

Definitions

  • This invention relates to context adaptive data encoding.
  • There are video data compression and decompression systems which involve transforming video data into a frequency domain representation, quantising the frequency domain coefficients and then applying some form of entropy encoding to the quantised coefficients.
  • Entropy, in the present context, can be considered as representing the information content of a data symbol or series of symbols.
  • The aim of entropy encoding is to encode a series of data symbols in a lossless manner using (ideally) the smallest number of encoded data bits which are necessary to represent the information content of that series of data symbols.
  • In practice, entropy encoding is used to encode the quantised coefficients such that the encoded data is smaller (in terms of its number of bits) than the data size of the original quantised coefficients.
  • A more efficient entropy encoding process gives a smaller output data size for the same input data size.
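The relationship between information content and achievable compressed size can be illustrated with a short calculation. This is an illustrative sketch, not part of the patent text: a heavily skewed symbol stream (as quantised coefficients tend to be) has lower empirical entropy, and so can in principle be encoded using fewer bits per symbol.

```python
import math
from collections import Counter

def shannon_entropy_bits(symbols):
    """Lower bound, in bits per symbol, on the average code length
    for an i.i.d. source with these empirical probabilities."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A skewed stream needs far fewer bits per symbol than a uniform one.
skewed = [0] * 90 + [1] * 10
uniform = list(range(10)) * 10
assert shannon_entropy_bits(skewed) < shannon_entropy_bits(uniform)
```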
  • One such technique is CABAC (context adaptive binary arithmetic coding).
  • the quantised coefficients are divided into data indicating positions, relative to an array of the coefficients, of coefficient values of certain magnitudes and their signs.
  • a so-called “significance map” may indicate positions in an array of coefficients where the coefficient at that position has a non-zero value.
  • Other maps may indicate where the data has a value of one or more (and its sign); or where the data has a value of two or more.
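The maps described above can be sketched as follows. The function names and the 4x4 example block are illustrative, not taken from the patent:

```python
def significance_map(coeffs):
    """1 wherever the quantised coefficient is non-zero."""
    return [[1 if v != 0 else 0 for v in row] for row in coeffs]

def two_or_more_map(coeffs):
    """1 wherever the coefficient magnitude is two or more."""
    return [[1 if abs(v) >= 2 else 0 for v in row] for row in coeffs]

block = [[5, -1, 0, 0],
         [1,  0, 0, 0],
         [0,  0, 0, 0],
         [0,  0, 0, 0]]
assert significance_map(block)[0] == [1, 1, 0, 0]
assert two_or_more_map(block)[0] == [1, 0, 0, 0]
```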
  • a bit of data is encoded with respect to a probability model, or context, representing an expectation or prediction of how likely it is that the data bit will be a one or a zero.
  • an input data bit is assigned a code value within one of two complementary sub-ranges of a range of code values, with the respective sizes of the sub-ranges being defined by the context.
  • a next step is to modify the overall range (for use in respect of a next input data bit) in response to the assigned code value and the current size of the selected sub-range. If the modified range is then smaller than a threshold (for example, one half of an original range size) then it is increased in size, for example by doubling (shifting left) the modified range.
  • an output encoded data bit is generated to indicate that a doubling operation took place.
  • a further step is to modify the context for use with the next input data bit. In currently proposed systems this is carried out by using the current context and the identity of the current "most probable symbol" (either one or zero, whichever is indicated by the context to currently have a greater than 0.5 probability) as an index into a look-up table of new context values.
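The range sub-division, renormalising "doubling" and output-bit steps described above can be sketched as a toy coder. This is a deliberately simplified illustration: it uses a fixed probability in place of the adaptive context and omits the carry handling a complete arithmetic coder needs, so its output is illustrative rather than decodable.

```python
def encode(bits, prob_zero=0.75, precision_bits=9):
    """Toy sketch of the range sub-division and renormalisation steps.
    prob_zero stands in for the context; a real CABAC coder adapts it
    per bit via a state-transition look-up table."""
    full = 1 << precision_bits      # overall range of code values, e.g. 512
    half = full >> 1                # renormalisation threshold: half the range
    low, rng = 0, full
    out = []
    for b in bits:
        split = max(1, int(rng * prob_zero))  # size of the 'zero' sub-range
        if b == 0:
            rng = split             # keep the lower sub-range
        else:
            low += split            # move into the complementary sub-range
            rng -= split
        # Renormalise: while the modified range is below the threshold,
        # double it (shift left), emitting one output bit per doubling.
        while rng < half:
            out.append(1 if low >= half else 0)
            if low >= half:
                low -= half
            low <<= 1
            rng <<= 1
    return out
```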
  • This invention provides a data encoding method for encoding successive input data bits, the method comprising the steps of:
  • FIG 1 schematically illustrates an audio/video (A/V) data transmission and reception system using video data compression and decompression
  • Figure 2 schematically illustrates a video display system using video data decompression
  • FIG. 3 schematically illustrates an audio/video storage system using video data compression and decompression
  • Figure 4 schematically illustrates a video camera using video data compression
  • Figure 5 provides a schematic overview of a video data compression and decompression apparatus
  • Figure 6 schematically illustrates the generation of predicted images
  • FIG. 7 schematically illustrates a largest coding unit (LCU).
  • Figure 8 schematically illustrates a set of four coding units (CU);
  • Figures 9 and 10 schematically illustrate the coding units of Figure 8 sub-divided into smaller coding units
  • Figure 11 schematically illustrates an array of prediction units (PU);
  • FIG 12 schematically illustrates an array of transform units (TU);
  • Figure 13 schematically illustrates a partially-encoded image
  • Figure 14 schematically illustrates a set of possible prediction directions
  • Figure 15 schematically illustrates a set of prediction modes
  • Figure 16 schematically illustrates a zigzag scan
  • Figure 17 schematically illustrates a CABAC entropy encoder
  • Figure 18 schematically illustrates a CAVLC entropy encoding process
  • FIGS. 19A to 19D schematically illustrate aspects of a CABAC encoding and decoding operation
  • Figure 20 schematically illustrates a CABAC encoder
  • Figure 21 schematically illustrates a CABAC decoder
  • Figure 22 is a schematic graph illustrating probability values for context variable values.

Description of the Embodiments
  • Figures 1-4 are provided to give schematic illustrations of apparatus or systems making use of the compression and/or decompression apparatus to be described below in connection with embodiments of the invention. All of the data compression and/or decompression apparatus to be described below may be implemented in hardware, in software running on a general-purpose data processing apparatus such as a general-purpose computer, in programmable hardware such as an application specific integrated circuit (ASIC) or field programmable gate array (FPGA), or as combinations of these.
  • Figure 1 schematically illustrates an audio/video data transmission and reception system using video data compression and decompression.
  • An input audio/video signal 10 is supplied to a video data compression apparatus 20 which compresses at least the video component of the audio/video signal 10 for transmission along a transmission route 30 such as a cable, an optical fibre, a wireless link or the like.
  • the compressed signal is processed by a decompression apparatus 40 to provide an output audio/video signal 50.
  • a compression apparatus 60 compresses an audio/video signal for transmission along the transmission route 30 to a decompression apparatus 70.
  • the compression apparatus 20 and decompression apparatus 70 can therefore form one node of a transmission link.
  • the compression apparatus 60 and decompression apparatus 40 can form another node of the transmission link.
  • If the transmission link is uni-directional, only one of the nodes would require a compression apparatus and the other node would only require a decompression apparatus.
  • FIG. 2 schematically illustrates a video display system using video data decompression.
  • a compressed audio/video signal 100 is processed by a decompression apparatus 110 to provide a decompressed signal which can be displayed on a display 120.
  • the decompression apparatus 110 could be implemented as an integral part of the display 120, for example being provided within the same casing as the display device.
  • the decompression apparatus 110 might be provided as (for example) a so-called set-top box (STB), noting that the expression "set-top" does not imply a requirement for the box to be sited in any particular orientation or position with respect to the display 120; it is simply a term used in the art to indicate a device which is connectable to a display as a peripheral device.
  • Figure 3 schematically illustrates an audio/video storage system using video data compression and decompression.
  • An input audio/video signal 130 is supplied to a compression apparatus 140 which generates a compressed signal for storing by a store device 150 such as a magnetic disk device, an optical disk device, a magnetic tape device, a solid state storage device such as a semiconductor memory or other storage device.
  • compressed data is read from the store device 150 and passed to a decompression apparatus 160 for decompression to provide an output audio/video signal 170.
  • Figure 4 schematically illustrates a video camera using video data compression.
  • An image capture device 180, such as a charge coupled device (CCD) image sensor with associated control and read-out electronics, generates a video signal which is passed to a compression apparatus 190.
  • a microphone (or plural microphones) 200 generates an audio signal to be passed to the compression apparatus 190.
  • the compression apparatus 190 generates a compressed audio/video signal 210 to be stored and/or transmitted (shown generically as a schematic stage 220).
  • the techniques to be described below relate primarily to video data compression. It will be appreciated that many existing techniques may be used for audio data compression in conjunction with the video data compression techniques which will be described, to generate a compressed audio/video signal. Accordingly, a separate discussion of audio data compression will not be provided. It will also be appreciated that the data rate associated with video data, in particular broadcast quality video data, is generally very much higher than the data rate associated with audio data (whether compressed or uncompressed). It will therefore be appreciated that uncompressed audio data could accompany compressed video data to form a compressed audio/video signal.
  • Figure 5 provides a schematic overview of a video data compression and decompression apparatus.
  • Successive images of an input video signal 300 are supplied to an adder 310 and to an image predictor 320.
  • the image predictor 320 will be described below in more detail with reference to Figure 6.
  • the adder 310 in fact performs a subtraction (negative addition) operation, in that it receives the input video signal 300 on a "+" input and the output of the image predictor 320 on a "-" input, so that the predicted image is subtracted from the input image. The result is to generate a so-called residual image signal 330 representing the difference between the actual and projected images.
  • The data coding techniques to be described, that is to say the techniques which will be applied to the residual image signal, tend to work more efficiently when there is less "energy" in the image to be encoded.
  • the term “efficiently” refers to the generation of a small amount of encoded data; for a particular image quality level, it is desirable (and considered “efficient") to generate as little data as is practicably possible.
  • the reference to "energy” in the residual image relates to the amount of information contained in the residual image. If the predicted image were to be identical to the real image, the difference between the two (that is to say, the residual image) would contain zero information (zero energy) and would be very easy to encode into a small amount of encoded data. In general, if the prediction process can be made to work reasonably well, the expectation is that the residual image data will contain less information (less energy) than the input image and so will be easier to encode into a small amount of encoded data.
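The notion of residual "energy" can be made concrete with a small example. This is an illustrative sketch; the sum-of-squares measure used here is one common proxy for information content:

```python
def residual(actual, predicted):
    """Difference image: what remains to be encoded after prediction."""
    return [[a - p for a, p in zip(row_a, row_p)]
            for row_a, row_p in zip(actual, predicted)]

def energy(image):
    """Sum of squared sample values, a proxy for information content."""
    return sum(v * v for row in image for v in row)

frame = [[10, 12], [11, 13]]
good_prediction = [[10, 12], [11, 13]]   # perfect: residual energy is zero
poor_prediction = [[0, 0], [0, 0]]
assert energy(residual(frame, good_prediction)) == 0
assert energy(residual(frame, poor_prediction)) > 0
```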
  • the residual image data 330 is supplied to a transform unit 340 which generates a discrete cosine transform (DCT) representation of the residual image data.
  • the output of the transform unit 340, which is to say a set of DCT coefficients for each transformed block of image data, is supplied to a quantiser 350.
  • quantisation techniques are known in the field of video data compression, ranging from a simple multiplication by a quantisation scaling factor through to the application of complicated lookup tables under the control of a quantisation parameter. The general aim is twofold. Firstly, the quantisation process reduces the number of possible values of the transformed data. Secondly, the quantisation process can increase the likelihood that values of the transformed data are zero. Both of these can make the entropy encoding process, to be described below, work more efficiently in generating small amounts of compressed video data.
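The simplest end of the quantisation spectrum described above, scaling by a quantisation step, can be sketched as follows. This is illustrative only; real codecs add rounding offsets and per-frequency scaling lists:

```python
def quantise(coeffs, step):
    """Simplest quantisation: divide by a step size, truncating toward zero.
    This both reduces the number of possible values and drives small
    transform coefficients to zero, as described above."""
    return [c // step if c >= 0 else -((-c) // step) for c in coeffs]

def dequantise(levels, step):
    """Inverse quantisation: the truncated remainder is lost for good."""
    return [level * step for level in levels]

coeffs = [97, -3, 14, 2, -1, 0]
levels = quantise(coeffs, 8)
assert levels == [12, 0, 1, 0, 0, 0]        # small values collapse to zero
assert dequantise(levels, 8) == [96, 0, 8, 0, 0, 0]
```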
  • a data scanning process is applied by a scan unit 360.
  • the purpose of the scanning process is to reorder the quantised transformed data so as to gather as many as possible of the non-zero quantised transformed coefficients together, and of course therefore to gather as many as possible of the zero-valued coefficients together.
  • These features can allow so-called run-length coding or similar techniques to be applied efficiently.
  • the scanning process involves selecting coefficients from the quantised transformed data, and in particular from a block of coefficients corresponding to a block of image data which has been transformed and quantised, according to a "scanning order" so that (a) all of the coefficients are selected once as part of the scan, and (b) the scan tends to provide the desired reordering.
  • a scanning order which can tend to give useful results is a so-called zigzag scanning order.
  • The entropy encoding may be carried out using, for example, CABAC (Context Adaptive Binary Arithmetic Coding) or CAVLC (Context Adaptive Variable-Length Coding).
  • the scanning process and the entropy encoding process are shown as separate processes, but in fact can be combined or treated together. That is to say, the reading of data into the entropy encoder can take place in the scan order.
  • Corresponding considerations apply to the respective inverse processes to be described below.
  • the output of the entropy encoder 370 along with additional data (mentioned above and/or discussed below), for example defining the manner in which the predictor 320 generated the predicted image, provides a compressed output video signal 380.
  • a return path is also provided because the operation of the predictor 320 itself depends upon a decompressed version of the compressed output data.
  • the reason for this feature is as follows. At the appropriate stage in the decompression process (to be described below) a decompressed version of the residual data is generated. This decompressed residual data has to be added to a predicted image to generate an output image (because the original residual data was the difference between the input image and a predicted image). In order that this process is compatible, as between the compression side and the decompression side, the predicted images generated by the predictor 320 should be the same during the compression process and during the decompression process. Of course, at decompression, the apparatus does not have access to the original input images, but only to the decompressed images. Therefore, at compression, the predictor 320 bases its prediction (at least, for inter-image encoding) on decompressed versions of the compressed images.
  • the entropy encoding process carried out by the entropy encoder 370 is considered to be "lossless", which is to say that it can be reversed to arrive at exactly the same data which was first supplied to the entropy encoder 370. So, the return path can be implemented before the entropy encoding stage. Indeed, the scanning process carried out by the scan unit 360 is also considered lossless, but in the present embodiment the return path 390 is from the output of the quantiser 350 to the input of a complementary inverse quantiser 420.
  • an entropy decoder 410, the reverse scan unit 400, an inverse quantiser 420 and an inverse transform unit 430 provide the respective inverse functions of the entropy encoder 370, the scan unit 360, the quantiser 350 and the transform unit 340.
  • the discussion will continue through the compression process; the process to decompress an input compressed video signal will be discussed separately below.
  • the scanned coefficients are passed by the return path 390 from the quantiser 350 to the inverse quantiser 420, which carries out the inverse operation of the quantiser 350.
  • An inverse quantisation and inverse transformation process are carried out by the units 420, 430 to generate a compressed-decompressed residual image signal 440.
  • the image signal 440 is added, at an adder 450, to the output of the predictor 320 to generate a reconstructed output image 460. This forms one input to the image predictor 320, as will be described below.
  • the signal is supplied to the entropy decoder 410 and from there to the chain of the reverse scan unit 400, the inverse quantiser 420 and the inverse transform unit 430 before being added to the output of the image predictor 320 by the adder 450.
  • the output 460 of the adder 450 forms the output decompressed video signal 480.
  • further filtering may be applied before the signal is output.
  • Figure 6 schematically illustrates the generation of predicted images, and in particular the operation of the image predictor 320.
  • Intra-image prediction bases a prediction of the content of a block of the image on data from within the same image. This corresponds to so-called I-frame encoding in other video compression techniques, where the whole image is intra-encoded.
  • the choice between intra- and inter- encoding can be made on a block-by-block basis, though in other embodiments of the invention the choice is still made on an image-by-image basis.
  • Motion-compensated prediction makes use of motion information which attempts to define the source, in another adjacent or nearby image, of image detail to be encoded in the current image. Accordingly, in an ideal example, the contents of a block of image data in the predicted image can be encoded very simply as a reference (a motion vector) pointing to a corresponding block at the same or a slightly different position in an adjacent image.
  • two image prediction arrangements (corresponding to intra- and inter-image prediction) are shown, the results of which are selected by a multiplexer 500 under the control of a mode signal 510 so as to provide blocks of the predicted image for supply to the adders 310 and 450.
  • the choice is made in dependence upon which selection gives the lowest "energy" (which, as discussed above, may be considered as information content requiring encoding), and the choice is signalled to the decoder within the encoded output datastream.
  • Image energy, in this context, can be detected, for example, by carrying out a trial subtraction of an area of the two versions of the predicted image from the input image, squaring each pixel value of the difference image, summing the squared values, and identifying which of the two versions gives rise to the lower mean squared value of the difference image relating to that image area.
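The trial-subtraction energy measure just described might be sketched like this. The function names are illustrative, not from the patent:

```python
def mean_squared_error(block_a, block_b):
    """Mean of squared differences over corresponding samples."""
    diffs = [(a - b) ** 2
             for row_a, row_b in zip(block_a, block_b)
             for a, b in zip(row_a, row_b)]
    return sum(diffs) / len(diffs)

def select_prediction(input_block, intra_pred, inter_pred):
    """Pick whichever prediction leaves the lower-energy residual."""
    intra_cost = mean_squared_error(input_block, intra_pred)
    inter_cost = mean_squared_error(input_block, inter_pred)
    return "intra" if intra_cost <= inter_cost else "inter"
```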
  • the actual prediction, in the intra-encoding system, is made on the basis of image blocks received as part of the signal 460, which is to say, the prediction is based upon encoded-decoded image blocks in order that exactly the same prediction can be made at a decompression apparatus.
  • data can be derived from the input video signal 300 by an intra-mode selector 520 to control the operation of the intra-image predictor 530.
  • a motion compensated (MC) predictor 540 uses motion information such as motion vectors derived by a motion estimator 550 from the input video signal 300. Those motion vectors are applied to a processed version of the reconstructed image 460 by the motion compensated predictor 540 to generate blocks of the inter-image prediction.
  • the signal is filtered by a filter unit 560.
  • an adaptive loop filter is applied using coefficients derived by processing the reconstructed signal 460 and the input video signal 300.
  • the adaptive loop filter is a type of filter which, using known techniques, applies adaptive filter coefficients to the data to be filtered. That is to say, the filter coefficients can vary in dependence upon various factors. Data defining which filter coefficients to use is included as part of the encoded output datastream.
  • the filtered output from the filter unit 560 in fact forms the output video signal 480. It is also buffered in one or more image stores 570; the storage of successive images is a requirement of motion compensated prediction processing, and in particular the generation of motion vectors. To save on storage requirements, the stored images in the image stores 570 may be held in a compressed form and then decompressed for use in generating motion vectors. For this particular purpose, any known compression / decompression system may be used.
  • the stored images are passed to an interpolation filter 580 which generates a higher resolution version of the stored images; in this example, intermediate samples (sub-samples) are generated such that the resolution of the interpolated image output by the interpolation filter 580 is 8 times (in each dimension) that of the images stored in the image stores 570.
  • the interpolated images are passed as an input to the motion estimator 550 and also to the motion compensated predictor 540.
  • a further optional stage is provided, which is to multiply the data values of the input video signal by a factor of four using a multiplier 600 (effectively just shifting the data values left by two bits), and to apply a corresponding divide operation (shift right by two bits) at the output of the apparatus using a divider or right-shifter 610. So, the shifting left and shifting right changes the data purely for the internal operation of the apparatus. This measure can provide for higher calculation accuracy within the apparatus, as the effect of any data rounding errors is reduced.
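The shift-left/shift-right measure can be illustrated directly. This is a sketch; the factor of four matches the description above:

```python
def to_internal(sample):
    """Scale up on input: multiply by four, i.e. shift left by two bits."""
    return sample << 2

def to_output(value):
    """Scale back down on output: shift right by two bits."""
    return value >> 2

# Rounding errors introduced by internal processing are divided by four
# when the data is shifted back down, improving effective accuracy.
assert to_output(to_internal(200) + 3) == 200   # a small internal error vanishes
```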
  • A largest coding unit (LCU) 700 (Figure 7) represents a square array of 64 x 64 samples.
  • The discussion relates to luminance samples; similar considerations apply to the chrominance samples, depending on the chrominance mode in use, such as 4:4:4, 4:2:2, 4:2:0 or 4:4:4:4 (GBR plus key data).
  • Three basic types of blocks will be described: coding units, prediction units and transform units.
  • the recursive subdividing of the LCUs allows an input picture to be partitioned in such a way that both the block sizes and the block coding parameters (such as prediction or residual coding modes) can be set according to the specific characteristics of the image to be encoded.
  • the LCU may be subdivided into so-called coding units (CU). Coding units are always square and have a size between 8x8 samples and the full size of the LCU 700.
  • the coding units can be arranged as a kind of tree structure, so that a first subdivision may take place as shown in Figure 8, giving coding units 710 of 32x32 samples; subsequent subdivisions may then take place on a selective basis so as to give some coding units 720 of 16x16 samples ( Figure 9) and potentially some coding units 730 of 8x8 samples ( Figure 10). Overall, this process can provide a content-adapting coding tree structure of CU blocks, each of which may be as large as the LCU or as small as 8x8 samples. Encoding of the output video data takes place on the basis of the coding unit structure.
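The recursive, content-adaptive subdivision described above might be sketched as a quadtree split. This is illustrative: the `should_split` callback is a stand-in for the encoder's real content-based (rate-distortion-style) decision:

```python
def split_into_coding_units(x, y, size, should_split, min_size=8):
    """Recursively quadtree-split a square region, starting from the LCU.
    Returns (x, y, size) leaves, each a coding unit between min_size
    and the starting (LCU) size."""
    if size == min_size or not should_split(x, y, size):
        return [(x, y, size)]
    half = size // 2
    units = []
    for dy in (0, half):
        for dx in (0, half):
            units += split_into_coding_units(x + dx, y + dy, half,
                                             should_split, min_size)
    return units

# Split a 64x64 LCU into four 32x32 CUs, then split only the
# top-left 32x32 CU one level further into four 16x16 CUs.
cus = split_into_coding_units(
    0, 0, 64,
    lambda x, y, s: s == 64 or (s == 32 and x == 0 and y == 0))
assert (0, 0, 16) in cus and (32, 0, 32) in cus and len(cus) == 7
```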
  • Figure 11 schematically illustrates an array of prediction units (PU).
  • a prediction unit is a basic unit for carrying information relating to the image prediction processes, or in other words the additional data added to the entropy encoded residual image data to form the output video signal from the apparatus of Figure 5.
  • prediction units are not restricted to being square in shape. They can take other shapes, in particular rectangular shapes forming half of one of the square coding units, as long as the coding unit is greater than the minimum (8x8) size.
  • the aim is to allow the boundary of adjacent prediction units to match (as closely as possible) the boundary of real objects in the picture, so that different prediction parameters can be applied to different real objects.
  • Each coding unit may contain one or more prediction units.
  • FIG. 12 schematically illustrates an array of transform units (TU).
  • a transform unit is a basic unit of the transform and quantisation process. Transform units are always square and can take a size from 4x4 up to 32x32 samples. Each coding unit can contain one or more transform units.
  • the acronym SDIP-P in Figure 12 signifies a so-called short distance intra-prediction partition. In this arrangement only one-dimensional transforms are used, so a 4xN block is passed through N transforms, with input data to the transforms being based upon the previously decoded neighbouring blocks and the previously decoded neighbouring lines within the current SDIP-P.
  • intra-prediction involves generating a prediction of a current block (a prediction unit) of samples from previously-encoded and decoded samples in the same image.
  • Figure 13 schematically illustrates a partially encoded image 800. Here, the image is being encoded from top-left to bottom-right on an LCU basis.
  • An example LCU encoded partway through the handling of the whole image is shown as a block 810.
  • a shaded region 820 above and to the left of the block 810 has already been encoded.
  • the intra-image prediction of the contents of the block 810 can make use of any of the shaded area 820 but cannot make use of the unshaded area below that.
  • the block 810 represents an LCU; as discussed above, for the purposes of intra-image prediction processing, this may be subdivided into a set of smaller prediction units.
  • An example of a prediction unit 830 is shown within the LCU 810.
  • the intra-image prediction takes into account samples above and/or to the left of the current LCU 810.
  • Source samples, from which the required samples are predicted, may be located at different positions or directions relative to a current prediction unit within the LCU 810.
  • To decide which direction is appropriate for a current prediction unit the results of a trial prediction based upon each candidate direction are compared in order to see which candidate direction gives an outcome which is closest to the corresponding block of the input image.
  • the candidate direction giving the closest outcome is selected as the prediction direction for that prediction unit.
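The trial-and-compare selection described in the two points above can be sketched as follows. This is an illustrative Python sketch, not the patent's own method: the cost metric (sum of absolute differences) and all names are assumptions for illustration.

```python
# Hedged sketch: pick the candidate intra-prediction direction whose trial
# prediction is closest to the corresponding block of the input image.
# The SAD cost metric is an assumed choice; encoders may use other metrics.
def best_direction(block, predict, candidates):
    def sad(pred):
        # sum of absolute differences between input block and trial prediction
        return sum(abs(a - b) for a, b in zip(block, pred))
    # the candidate giving the closest outcome becomes the prediction direction
    return min(candidates, key=lambda d: sad(predict(d)))
```

For example, with a flat input block of value 10, a candidate whose trial prediction is close to 10 everywhere is chosen over candidates predicting 0 or 20.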
  • the picture may also be encoded on a "slice" basis.
  • a slice is a horizontally adjacent group of LCUs. But in more general terms, the entire residual image could form a slice, or a slice could be a single LCU, or a slice could be a row of LCUs, and so on. Slices can give some resilience to errors as they are encoded as independent units.
  • the encoder and decoder states are completely reset at a slice boundary. For example, intra-prediction is not carried out across slice boundaries; slice boundaries are treated as image boundaries for this purpose.
  • Figure 14 schematically illustrates a set of possible (candidate) prediction directions.
  • the full set of 34 candidate directions is available to a prediction unit of 8x8, 16x16 or 32x32 samples.
  • the special cases of prediction unit sizes of 4x4 and 64x64 samples have a reduced set of candidate directions available to them (17 candidate directions and 5 candidate directions respectively).
  • the directions are determined by horizontal and vertical displacement relative to a current block position, but are encoded as prediction "modes", a set of which is shown in Figure 15. Note that the so-called DC mode represents a simple arithmetic mean of the surrounding upper and left-hand samples.
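As a small illustration of the DC mode mentioned above, a prediction value can be formed as the rounded arithmetic mean of the neighbouring samples. This is a hedged sketch with assumed names and an assumed rounding rule, not the standard's exact derivation of the DC value:

```python
# Illustrative sketch: DC-mode prediction as the rounded arithmetic mean
# of the previously decoded samples above and to the left of the block.
def dc_predict(above, left):
    neighbours = list(above) + list(left)
    # integer mean with rounding to nearest (rounding rule is an assumption)
    return (sum(neighbours) + len(neighbours) // 2) // len(neighbours)
```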
  • Figure 16 schematically illustrates a zigzag scan, being a scan pattern which may be applied by the scan unit 360.
  • the pattern is shown for an example block of 8x8 DCT coefficients, with the DC coefficient being positioned at the top left position 840 of the block, and increasing horizontal and vertical spatial frequencies being represented by coefficients at increasing distances downwards and to the right of the top-left position 840.
  • the coefficients may be scanned in a reverse order (bottom right to top left using the ordering notation of Figure 16). Also it should be noted that in some embodiments, the scan may pass from left to right across a few (for example between one and three) uppermost horizontal rows, before carrying out a zig-zag of the remaining coefficients.
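The zigzag pattern of Figure 16 can be generated programmatically; the following is an illustrative Python sketch (the function name is ours, not from any reference implementation):

```python
# Illustrative sketch: generate the zigzag scan order for an n x n block,
# starting at the DC position (0, 0) and visiting anti-diagonals in turn.
def zigzag_order(n=8):
    order = []
    for s in range(2 * n - 1):                      # s indexes each anti-diagonal
        diagonal = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        if s % 2 == 0:
            diagonal.reverse()                      # alternate traversal direction
        order.extend(diagonal)
    return order
```

A reverse scan of the kind mentioned above is then simply `reversed(zigzag_order(8))`.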
  • Figure 17 schematically illustrates the operation of a CABAC entropy encoder.
  • the CABAC encoder operates in respect of binary data, that is to say, data represented by only the two symbols 0 and 1.
  • the encoder makes use of a so-called context modelling process which selects a "context" or probability model for subsequent data on the basis of previously encoded data.
  • the selection of the context is carried out in a deterministic way so that the same determination, on the basis of previously decoded data, can be performed at the decoder without the need for further data (specifying the context) to be added to the encoded datastream passed to the decoder.
  • input data to be encoded may be passed to a binary converter 900 if it is not already in a binary form; if the data is already in binary form, the converter 900 is bypassed (by a schematic switch 910).
  • conversion to a binary form is actually carried out by expressing the quantised DCT coefficient data as a series of binary "maps", which will be described further below.
  • the binary data may then be handled by one of two processing paths, a "regular" and a "bypass" path (which are shown schematically as separate paths but which, in embodiments of the invention discussed below, could in fact be implemented by the same processing stages, just using slightly different parameters).
  • the bypass path employs a so-called bypass coder 920 which does not necessarily make use of context modelling in the same form as the regular path.
  • this bypass path can be selected if there is a need for particularly rapid processing of a batch of data, but in the present embodiments two features of so-called "bypass" data are noted: firstly, the bypass data is handled by the CABAC encoder (950, 960), just using a fixed context model representing a 50% probability; and secondly, the bypass data relates to certain categories of data, one particular example being coefficient sign data. Otherwise, the regular path is selected by schematic switches 930, 940. This involves the data being processed by a context modeller 950 followed by a coding engine 960.
  • the entropy encoder shown in Figure 17 encodes a block of data (that is, for example, data corresponding to a block of coefficients relating to a block of the residual image) as a single value if the block is formed entirely of zero-valued data.
  • a "significance map" is prepared for each block that does not fall into this category.
  • the significance map indicates whether, for each position in a block of data to be encoded, the corresponding coefficient in the block is non-zero.
  • the significance map data, being in binary form, is itself CABAC encoded.
  • the significance map assists with compression because no data needs to be encoded for a coefficient with a magnitude that the significance map indicates to be zero.
  • the significance map can include a special code to indicate the final non-zero coefficient in the block, so that all of the final high frequency / trailing zero coefficients can be omitted from the encoding.
  • the significance map is followed, in the encoded bitstream, by data defining the values of the non-zero coefficients specified by the significance map.
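The significance-map idea in the points above can be sketched as follows. This is an illustrative Python sketch under assumed conventions (one flag per position up to and including the last non-zero coefficient), not the standard's exact syntax:

```python
# Hedged sketch: build a significance map for coefficients in scan order.
# Positions after the last non-zero coefficient need no flags at all, so
# the trailing high-frequency zeros are omitted from the encoding.
def significance_map(scanned_coeffs):
    last = max((i for i, c in enumerate(scanned_coeffs) if c != 0), default=-1)
    if last < 0:
        return None, []            # all-zero block: encoded as a single value
    flags = [1 if c != 0 else 0 for c in scanned_coeffs[:last + 1]]
    return last, flags
```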
  • the significance map and other maps are generated from the quantised DCT coefficients, for example by the scan unit 360, and are subjected to a zigzag scanning process (or a scanning process selected from zigzag, horizontal raster and vertical raster scanning according to the intra-prediction mode) before being subjected to CABAC encoding.
  • CABAC encoding involves predicting a context, or a probability model, for a next bit to be encoded, based upon other previously encoded data. If the next bit is the same as the bit identified as “most likely” by the probability model, then the encoding of the information that "the next bit agrees with the probability model" can be encoded with great efficiency. It is less efficient to encode that "the next bit does not agree with the probability model", so the derivation of the context data is important to good operation of the encoder.
  • adaptive means that the context or probability models are adapted, or varied during encoding, in an attempt to provide a good match to the (as yet uncoded) next data.
  • CABAC encoding is used, in the present arrangements, for at least the significance map and the maps indicating whether the non-zero values are one or two.
  • Bypass processing, which in these embodiments is identical to CABAC encoding but for the fact that the probability model is fixed at an equal (0.5:0.5) probability distribution of 1s and 0s, is used for at least the sign data and the map indicating whether a value is >2.
  • escape data encoding can be used to encode the actual value of the data. This may include a Golomb-Rice encoding technique.
  • WD4: Working Draft 4 of High-Efficiency Video Coding, JCTVC-F803_d5, Draft ISO/IEC 23008-HEVC; 201x(E) 2011-10-28.
  • Figure 18 schematically illustrates a CAVLC entropy encoding process.
  • the entropy encoding process shown in Figure 18 follows the operation of the scan unit 360. It has been noted that the non-zero coefficients in the transformed and scanned residual data are often sequences of ±1.
  • the CAVLC coder indicates the number of high-frequency ±1 coefficients by a variable referred to as "trailing 1s" (T1s). For these non-zero coefficients, the coding efficiency is improved by using different (context-adaptive) variable length coding tables.
  • a first step 1000 generates values "coeff_token" to encode both the total number of non-zero coefficients and the number of trailing ones.
  • the sign bit of each trailing one is encoded in a reverse scanning order.
  • Each remaining non-zero coefficient is encoded as a "level" variable at a step 1020, thus defining the sign and magnitude of those coefficients.
  • a variable total_zeros is used to code the total number of zeros preceding the last nonzero coefficient.
  • a variable run_before is used to code the number of successive zeros preceding each non-zero coefficient in a reverse scanning order.
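The CAVLC statistics described in the steps above can be sketched as follows. This is an illustrative Python model of the symbol values (not the H.264 reference code), assuming coefficients are supplied in scan order and trailing ones are capped at three:

```python
# Hedged sketch: derive CAVLC-style symbols from coefficients in scan order.
def cavlc_symbols(coeffs, max_trailing_ones=3):
    nonzero_pos = [i for i, c in enumerate(coeffs) if c != 0]
    total_coeffs = len(nonzero_pos)
    trailing_ones = 0
    for i in reversed(nonzero_pos):                 # high-frequency end first
        if abs(coeffs[i]) == 1 and trailing_ones < max_trailing_ones:
            trailing_ones += 1
        else:
            break
    # total zeros preceding the last non-zero coefficient
    total_zeros = nonzero_pos[-1] + 1 - total_coeffs if total_coeffs else 0
    runs = []                                       # run_before values
    prev = -1
    for i in nonzero_pos:
        runs.append(i - prev - 1)                   # zeros before this coeff
        prev = i
    return total_coeffs, trailing_ones, total_zeros, list(reversed(runs))
```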
  • a default scanning order for the scanning operation carried out by the scan unit 360 is a zigzag scan, as illustrated schematically in Figure 16.
  • a choice may be made between zigzag scanning, a horizontal raster scan and a vertical raster scan depending on the image prediction direction ( Figure 15) and the transform unit (TU) size.
  • CABAC at least as far as it is used in the proposed HEVC system, involves deriving a "context" or probability model in respect of a next bit to be encoded.
  • the context, defined by a context variable or CV, then influences how the bit is encoded.
  • the encoding process involves mapping a bit to be encoded onto a position within a range of code values.
  • m_range = m_high - m_low.
  • the range of code values, m_range, is divided into two sub-ranges by a boundary 1100 defined with respect to the context variable as: boundary = m_low + (CV * m_range).
  • the context variable divides the total range of a set of code values into two complementary sub-ranges or sub-portions, one sub-range being associated with a value (of a next data bit) of zero, and the other being associated with a value (of the next data bit) of one.
  • the division of the range represents the probabilities assumed by the generation of the CV of the two bit values for the next bit to be encoded. So, if the sub-range associated with the value zero is less than half of the total range, this signifies that a zero is considered less probable, as the next symbol, than a one.
  • a lower region of the range (that is, from m_low to the boundary) is by convention defined as being associated with the data bit value of zero.
  • the encoder and decoder maintain a record of which data bit value is the less probable (often termed the "least probable symbol" or LPS).
  • the CV refers to the LPS, so the CV always represents a value between 0 and 0.5.
  • a next bit (a current input bit) is now mapped or assigned to a code value within an appropriate sub-range within the range m_range, as divided by the boundary. This is carried out deterministically at both the encoder and the decoder using a technique to be described in more detail below. If the next bit is a 0, a particular code value, representing a position within the sub-range from m_low to the boundary, is assigned to that bit. If the next bit is a 1, a particular code value in the sub-range from the boundary 1100 to m_high is assigned to that bit.
  • the lower limit m_low and the range m_range are then redefined so as to modify the set of code values in dependence upon the assigned code value and the size of the selected sub-range. If the just-encoded bit is a zero, then m_low is unchanged but m_range is redefined to equal m_range * CV. If the just-encoded bit is a one then m_low is moved to the boundary position (m_low + (CV * m_range)) and m_range is redefined as the difference between the boundary and m_high (that is, (1 - CV) * m_range). These alternatives are illustrated schematically in Figures 19B and 19C.
  • the data bit was a one and so m_low was moved up to the previous boundary position.
  • This provides a revised set of code values for use in a next bit encoding sequence.
  • the value of CV is changed for the next bit encoding, based at least in part on the value of the just-encoded bit. This is why the technique refers to "adaptive" contexts.
  • the revised value of CV is used to generate a new boundary 1100'.
  • the interval m_range is successively modified and renormalized in dependence upon the adaptation of the CV values (which can be reproduced at the decoder) and the encoded bit stream.
  • the resulting interval and the number of renormalizing stages uniquely define the encoded bitstream.
  • a decoder which knows such a final interval would in principle be able to reconstruct the encoded data.
  • the underlying mathematics demonstrate that it is not actually necessary to define the interval to the decoder, but just to define one position within that interval. This is the purpose of the assigned code value, which is maintained at the encoder and passed to the decoder at the termination of encoding the data.
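One encoding step of the scheme described above can be modelled as follows. This is a deliberately simplified Python sketch: carry propagation, the code-value table and stream termination are omitted, so it models only the range bookkeeping, with cv taken as the probability of a zero expressed as an 8-bit fraction (an assumption consistent with the 8-bit context variables discussed later):

```python
# Hedged sketch of one CABAC-style encoding step: split the range at the
# boundary, keep the sub-range selected by the bit, then renormalise
# (double) the range until it is at least 256, emitting one output bit
# per size-increasing operation.
def encode_bit(m_low, m_range, cv, bit):
    boundary = (m_range * cv) >> 8        # size of the 'zero' sub-range
    if bit == 0:
        m_range = boundary                # keep the lower sub-range
    else:
        m_low += boundary                 # move m_low up to the boundary
        m_range -= boundary               # keep the upper sub-range
    out_bits = 0
    while m_range < 256:                  # renormalisation
        m_range <<= 1
        m_low <<= 1
        out_bits += 1
    return m_low, m_range, out_bits
```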
  • CV is changed from one bit to the next according to various known factors, which may be different depending on the block size of data to be encoded. In some instances, the state of neighbouring and previous image blocks may be taken into account.
  • the assigned code value is generated from a table which defines, for each possible value of CV and each possible value of bits 6 and 7 of m_range (noting that bit 8 of m_range is always 1 because of the constraint on the size of m_range), a position or group of positions at which a newly encoded bit should be allocated a code value in the relevant sub-range.
  • Figure 20 schematically illustrates a CABAC encoder using the techniques described above.
  • the CV is initiated (in the case of the first CV) or modified (in the case of subsequent CVs) by a CV derivation unit 1120 (acting as a context variable modifying unit).
  • a code generator 1130 divides the current m_range according to CV and generates an assigned data code within the appropriate sub-range, using the table mentioned above (and thereby acts as a selector and an assigning unit).
  • a range reset unit 1140 acting as a code value modifying unit resets m_range to that of the selected sub-range.
  • a normaliser 1150 (acting as a detector) renormalises the m_range, outputting an output bit for each such renormalisation operation. As mentioned, at the end of the process, the assigned code value is also output.
  • the CV is initiated (in the case of the first CV) or modified (in the case of subsequent CVs) by a CV derivation unit 1220 which operates in the same way as the unit 1120 in the encoder (and also acts as a context variable modifying unit).
  • a code application unit 1230 divides the current m_range according to CV and detects in which sub-range the data code lies (and thereby acts as a selector and a detector).
  • a range reset unit 1240 acting as a code value modifying unit resets m_range to that of the selected sub-range.
  • a normaliser 1250 acting as a context value modifying unit renormalises the m_range in response to a received data bit.
  • Embodiments of the invention concern a technique to remove the look-up tables within the CABAC engine, replacing them with arithmetic operations. This could potentially reduce hardware size and complexity even if the Context Variables used are all increased from 7 to 8 bits.
  • the Most Probable Symbol (MPS) is either 1 or 0, depending on previous state transitions.
  • the state of the CV does not directly represent the probability/expectation of the value V; rather, it is used as an index to a table that determines the split between Least Probable Symbol (LPS) and the Most Probable Symbol (MPS).
  • the state is updated, to reflect the change in expectation. This is achieved in some systems using tables: if the MPS is encoded/decoded, the table m_aucNextStateMPS is indexed with the current state to give the next state, but if the LPS is encoded/decoded, the table m_aucNextStateLPS is used instead. These tables each have 128 elements, each of 7 bits (in hardware) or 8 bits (in software).
  • the CVs are not transmitted in the bit-stream between encoder and decoder: the encoder and decoder initialize the CVs to the same values, and the CVs are updated in the same manner after each has been used to encode or decode a value.
  • CABAC's coding engine uses an internal variable called m_range that is a 9-bit number, always kept within the range of 256-511. When the number falls below 256, it is 'normalised' (shifted left).
  • the range is an indication of the current fractional scale that is available for the next symbol, i.e., the value m_range should be split into two parts representing the MPS and LPS, with the splitting made according to the probabilities.
  • the split-point is calculated using a 2D table which gives the space allocated to the LPS (uiLPS).
  • the current 6-bit state of a CV is used to access a row in the table; bits {7, 6} of m_range are used to indicate the column (bit 8 is always 1).
  • the MPS takes the split from 0 to "m_range - uiLPS"; the LPS takes the split from "m_range - uiLPS" to m_range.
  • the CV's 8-bit state, state0, directly represents the probability/expectation of the binary value being 0: P(v = 0) = state0 / 256.
  • the context variable is expressed as an n-bit number (8 bits in this example, though other numbers of bits could be used) and the position of the boundary 1100, specified by the probability of a next bit being zero, is defined by a proportion or fraction (of m_range) dependent on the context variable. In this example, that fraction is equal to the context variable divided by 2^n.
  • the adaptation of the CV's state uses the mathematical function "update" on the current state, to change the CV and thereby move the boundary 1100, given the binary value v that was encoded/decoded:
  • nextState0 = update(state0, v)
  • the "update" function may be expressed in the following pseudo-code:
  • This function has the effect of modifying the context variable, for use in respect of a next input data bit, so as to increase the proportion or fraction of the set of code values in the sub-range which was selected for the current data bit (and thereby move the boundary 1100) by the greater of:
  • a predetermined minimum increment (in this example, 1).
  • the increment comprises a predetermined fractional amount of the current proportion (which, in the example above, equates to 7/128 * the current proportion) subtracted from a first constant value (13 in the above example).
  • the update method described above uses the state to model the expectation; therefore, the update equation has to model the updating of the expectation.
  • the update equation was derived from the tables used by CVs in a CABAC encoder, by plotting the current state's fraction of the range for a '0' to be encoded against the fraction of the range given to the next symbol, given that the next symbol is also a '0'.
  • ContextModel::m_aucNextStateMPS[value_CV] or
  • the probability is clipped so that the range will be at least '4' (the state is in the range of 4 to 251), similar to the limits placed in the HM4.0 tables.
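Putting the points above together, the arithmetic state update can be sketched as follows. The constants (minimum increment 1, first constant 13, fraction 7/128, clip to 4-251) are taken from the text; the integer rounding and the handling of a one versus a zero are our assumptions, not the patent's exact pseudo-code:

```python
# Hedged sketch: adapt the 8-bit state (proportion of the range assigned
# to a zero) toward the value just coded, by the greater of a minimum
# increment of 1 and (13 - 7/128 * proportion of the selected sub-range).
def update(state0, v):
    if v == 0:
        state0 += max(1, 13 - (7 * state0) // 128)          # grow zero range
    else:
        state0 -= max(1, 13 - (7 * (256 - state0)) // 128)  # grow one range
    return min(max(state0, 4), 251)       # clip so each sub-range is >= 4
```

Note how the increment shrinks as the selected sub-range's proportion grows, so the state adapts quickly for unexpected symbols and slowly once the model is confident.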
  • oneRange = m_range - zeroRange
  • Utilizing a multiplier on m_range is not ideal when the throughput of the entropy decoder is considered; it is better if m_range is required for a minimal number of simple operations thereby allowing multiple decodes within a single clock cycle.
  • the CV's state is used to index a row in table 'sm_aucLPSTable', giving four potential next range values for the LPS.
  • the choice of the next range value is a simple multiplexor operation utilizing two bits of the current range.
  • the following method pre-calculates the 4 possible next range values, and then the choice is made later utilizing the same two bits of the current range.
  • the method can be thought of as a multiplier operation, but using a two bit operand and rounding.
  • lpsRange = candidateLPSRanges[m_uiRange bits 7, 6]; encode/decode in terms of LPS/MPS.
  • the first of the four values will be utilized when the current range, m_range, is between 256 (0b100000000) and 319 (0b100111111) (inclusive), i.e. on average 288 (0b100100000), where bits 7 and 6 are the two bits used for the choice of result.
  • the second value will be utilized when m_range is between 320 (0b101000000) and 383 (0b101111111) (inclusive), i.e. on average 352 (0b101100000).
  • the third and the fourth values will be used when m_range is, on average, 416 and 480 respectively.
  • the four candidate values are therefore calculated as the current probability (state0) multiplied by the 4 average m_range values:
  • the 4 multipliers can be implemented as 4 adders, and since the 5 LSBs of each of the constant multiplicands are 0, the multiplicands and shifts can be reduced. Once the value of m_range is known, the zero range can be selected from the four candidates, and the process could proceed as with the multiplier-based range method.
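The pre-calculation described above can be sketched as follows; the >> 8 scaling is an assumption consistent with the 8-bit state, and the function names are illustrative rather than taken from any reference implementation:

```python
# Hedged sketch: pre-calculate four candidate zero-range values using the
# average m_range of each quarter of [256, 511], then select one with
# bits 7 and 6 of the actual m_range (bit 8 of m_range is always 1).
AVERAGES = (288, 352, 416, 480)       # midpoints of 256-319, 320-383, ...

def candidate_ranges(state0):
    return [(avg * state0) >> 8 for avg in AVERAGES]

def select_range(m_range, candidates):
    return candidates[(m_range >> 6) & 0b11]      # bits {7, 6} of m_range
```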
  • this system returns to essentially the same operation as HM4.0, except with arithmetic updating instead of tables: an LPS is identified (test on bit 7 of the state), and the next LPS range is selected (using two bits from the m_range) from 4 candidate LPS ranges.
  • the MPS can take the first section of the range, and the LPS can take the second section of the range.
  • the CV update table (RDOQ) has 128 table entries, corresponding to each of the 128 states of a CV.
  • the CVs have been extended to 256 possible states, and therefore the RDOQ table has 256 table entries, by default. It was noticed that this new table can be reduced to 128 entries, and indexed using the 7 MSBs of the state, without having any noticeable effect on the coding efficiency.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A data encoding method for encoding successive input data bits comprises the steps of: selecting one of two complementary sub-ranges of a set of code values according to whether a current input data bit is a one or a zero, the proportions of the two sub-ranges relative to the set of code values being defined by a context variable associated with that input data bit; assigning the current input data bit to a code value within the selected sub-range; modifying the set of code values in dependence upon the assigned code value and the size of the selected sub-range; detecting whether the set of code values is less than a predetermined minimum size and if so: successively increasing the size of the set of code values until it has at least the predetermined minimum size; and outputting an encoded data bit in response to each such size-increasing operation; and modifying the context variable, for use in respect of a next input data bit, so as to increase the proportion of the set of code values in the sub-range which was selected for the current data bit by the greater of: a predetermined minimum increment; and an increment derived in dependence upon a predetermined fractional amount of the current proportion.

Description

CONTEXT ADAPTIVE DATA ENCODING
Cross Reference to Related Application
The present application claims the benefit of the earlier filing date of GB1119176.4 filed in the United Kingdom Intellectual Property Office on 7 November 2011, the entire content of which application is incorporated herein by reference.
Field of the Invention
This invention relates to context adaptive data encoding.
Description of the Related Art
The "background" description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present invention.
There are several video data compression and decompression systems which involve transforming video data into a frequency domain representation, quantising the frequency domain coefficients and then applying some form of entropy encoding to the quantised coefficients.
Entropy, in the present context, can be considered as representing the information content of a data symbol or series of symbols. The aim of entropy encoding is to encode a series of data symbols in a lossless manner using (ideally) the smallest number of encoded data bits which are necessary to represent the information content of that series of data symbols. In practice, entropy encoding is used to encode the quantised coefficients such that the encoded data is smaller (in terms of its number of bits) than the data size of the original quantised coefficients. A more efficient entropy encoding process gives a smaller output data size for the same input data size.
One technique for entropy encoding video data is the so-called CABAC (context adaptive binary arithmetic coding) technique. In an example implementation, the quantised coefficients are divided into data indicating positions, relative to an array of the coefficients, of coefficient values of certain magnitudes and their signs. So, for example, a so-called "significance map" may indicate positions in an array of coefficients where the coefficient at that position has a non-zero value. Other maps may indicate where the data has a value of one or more (and its sign); or where the data has a value of two or more.
In context adaptive encoding, a bit of data is encoded with respect to a probability model, or context, representing an expectation or prediction of how likely it is that the data bit will be a one or a zero. To do this, an input data bit is assigned a code value within one of two complementary sub-ranges of a range of code values, with the respective sizes of the sub- ranges being defined by the context. A next step is to modify the overall range (for use in respect of a next input data bit) in response to the assigned code value and the current size of the selected sub-range. If the modified range is then smaller than a threshold (for example, one half of an original range size) then it is increased in size, for example by doubling (shifting left) the modified range. At this point, an output encoded data bit is generated to indicate that a doubling operation took place. A further step is to modify the context for use with the next input data bit. In currently proposed systems this is carried out by using the current context and the identity of the current "most probable symbol" (either one or zero, whichever is indicated by the context to currently have a greater than 0.5 probability) as an index into a look-up table of new context values.
Summary
This invention provides a data encoding method for encoding successive input data bits, the method comprising the steps of:
selecting one of two complementary sub-ranges of a set of code values according to whether a current input data bit is a one or a zero, the proportions of the two sub-ranges relative to the set of code values being defined by a context variable associated with that input data bit; assigning the current input data bit to a code value within the selected sub-range;
modifying the set of code values in dependence upon the assigned code value and the size of the selected sub-range;
detecting whether the set of code values is less than a predetermined minimum size and if so:
successively increasing the size of the set of code values until it has at least the predetermined minimum size; and
outputting an encoded data bit in response to each such size-increasing operation;
modifying the context variable, for use in respect of a next input data bit, so as to increase the proportion of the set of code values in the sub-range which was selected for the current data bit by the greater of:
a predetermined minimum increment; and
an increment derived in dependence upon a predetermined fractional amount of the current proportion.
Further respective aspects and features of the present invention are defined in the appended claims.
It is to be understood that both the foregoing general description of the invention and the following detailed description are exemplary, but not restrictive of, the invention.
Brief Description of the Drawings
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description of embodiments of the invention, when considered in connection with the accompanying drawings, wherein:
Figure 1 schematically illustrates an audio/video (A/V) data transmission and reception system using video data compression and decompression;
Figure 2 schematically illustrates a video display system using video data decompression;
Figure 3 schematically illustrates an audio/video storage system using video data compression and decompression;
Figure 4 schematically illustrates a video camera using video data compression;
Figure 5 provides a schematic overview of a video data compression and decompression apparatus;
Figure 6 schematically illustrates the generation of predicted images;
Figure 7 schematically illustrates a largest coding unit (LCU);
Figure 8 schematically illustrates a set of four coding units (CU);
Figures 9 and 10 schematically illustrate the coding units of Figure 8 sub-divided into smaller coding units;
Figure 11 schematically illustrates an array of prediction units (PU);
Figure 12 schematically illustrates an array of transform units (TU);
Figure 13 schematically illustrates a partially-encoded image;
Figure 14 schematically illustrates a set of possible prediction directions;
Figure 15 schematically illustrates a set of prediction modes;
Figure 16 schematically illustrates a zigzag scan;
Figure 17 schematically illustrates a CABAC entropy encoder;
Figure 18 schematically illustrates a CAVLC entropy encoding process;
Figures 19A to 19D schematically illustrate aspects of a CABAC encoding and decoding operation;
Figure 20 schematically illustrates a CABAC encoder;
Figure 21 schematically illustrates a CABAC decoder; and
Figure 22 is a schematic graph illustrating probability values for context variable values.
Description of the Embodiments
Referring now to the drawings, Figures 1-4 are provided to give schematic illustrations of apparatus or systems making use of the compression and/or decompression apparatus to be described below in connection with embodiments of the invention. All of the data compression and/or decompression apparatus to be described below may be implemented in hardware, in software running on a general-purpose data processing apparatus such as a general-purpose computer, as programmable hardware such as an application specific integrated circuit (ASIC) or field programmable gate array (FPGA) or as combinations of these. In cases where the embodiments are implemented by software and/or firmware, it will be appreciated that such software and/or firmware, and non-transitory machine-readable data storage media by which such software and/or firmware are stored or otherwise provided, are considered as embodiments of the present invention.
Figure 1 schematically illustrates an audio/video data transmission and reception system using video data compression and decompression.
An input audio/video signal 10 is supplied to a video data compression apparatus 20 which compresses at least the video component of the audio/video signal 10 for transmission along a transmission route 30 such as a cable, an optical fibre, a wireless link or the like. The compressed signal is processed by a decompression apparatus 40 to provide an output audio/video signal 50. For the return path, a compression apparatus 60 compresses an audio/video signal for transmission along the transmission route 30 to a decompression apparatus 70.
The compression apparatus 20 and decompression apparatus 70 can therefore form one node of a transmission link. The decompression apparatus 40 and compression apparatus 60 can form another node of the transmission link. Of course, in instances where the transmission link is uni-directional, only one of the nodes would require a compression apparatus and the other node would only require a decompression apparatus.
Figure 2 schematically illustrates a video display system using video data decompression. In particular, a compressed audio/video signal 100 is processed by a decompression apparatus 110 to provide a decompressed signal which can be displayed on a display 120. The decompression apparatus 110 could be implemented as an integral part of the display 120, for example being provided within the same casing as the display device. Alternatively, the decompression apparatus 110 might be provided as (for example) a so-called set top box (STB), noting that the expression "set-top" does not imply a requirement for the box to be sited in any particular orientation or position with respect to the display 120; it is simply a term used in the art to indicate a device which is connectable to a display as a peripheral device.
Figure 3 schematically illustrates an audio/video storage system using video data compression and decompression. An input audio/video signal 130 is supplied to a compression apparatus 140 which generates a compressed signal for storing by a store device 150 such as a magnetic disk device, an optical disk device, a magnetic tape device, a solid state storage device such as a semiconductor memory or other storage device. For replay, compressed data is read from the store device 150 and passed to a decompression apparatus 160 for decompression to provide an output audio/video signal 170.
It will be appreciated that the compressed or encoded signal, and a storage medium storing that signal, are considered as embodiments of the present invention.
Figure 4 schematically illustrates a video camera using video data compression. In Figure 4, an image capture device 180, such as a charge coupled device (CCD) image sensor and associated control and read-out electronics, generates a video signal which is passed to a compression apparatus 190. A microphone (or plural microphones) 200 generates an audio signal to be passed to the compression apparatus 190. The compression apparatus 190 generates a compressed audio/video signal 210 to be stored and/or transmitted (shown generically as a schematic stage 220).
The techniques to be described below relate primarily to video data compression. It will be appreciated that many existing techniques may be used for audio data compression in conjunction with the video data compression techniques which will be described, to generate a compressed audio/video signal. Accordingly, a separate discussion of audio data compression will not be provided. It will also be appreciated that the data rate associated with video data, in particular broadcast quality video data, is generally very much higher than the data rate associated with audio data (whether compressed or uncompressed). It will therefore be appreciated that uncompressed audio data could accompany compressed video data to form a compressed audio/video signal. It will further be appreciated that although the present examples (shown in Figures 1-4) relate to audio/video data, the techniques to be described below can find use in a system which simply deals with (that is to say, compresses, decompresses, stores, displays and/or transmits) video data. That is to say, the embodiments can apply to video data compression without necessarily having any associated audio data handling at all.
Figure 5 provides a schematic overview of a video data compression and decompression apparatus.
Successive images of an input video signal 300 are supplied to an adder 310 and to an image predictor 320. The image predictor 320 will be described below in more detail with reference to Figure 6. The adder 310 in fact performs a subtraction (negative addition) operation, in that it receives the input video signal 300 on a "+" input and the output of the image predictor 320 on a "-" input, so that the predicted image is subtracted from the input image. The result is to generate a so-called residual image signal 330 representing the difference between the actual and predicted images.
One reason why a residual image signal is generated is as follows. The data coding techniques to be described, that is to say the techniques which will be applied to the residual image signal, tend to work more efficiently when there is less "energy" in the image to be encoded. Here, the term "efficiently" refers to the generation of a small amount of encoded data; for a particular image quality level, it is desirable (and considered "efficient") to generate as little data as is practicably possible. The reference to "energy" in the residual image relates to the amount of information contained in the residual image. If the predicted image were to be identical to the real image, the difference between the two (that is to say, the residual image) would contain zero information (zero energy) and would be very easy to encode into a small amount of encoded data. In general, if the prediction process can be made to work reasonably well, the expectation is that the residual image data will contain less information (less energy) than the input image and so will be easier to encode into a small amount of encoded data.
The residual image data 330 is supplied to a transform unit 340 which generates a discrete cosine transform (DCT) representation of the residual image data. The DCT technique itself is well known and will not be described in detail here. There are however aspects of the techniques used in the present apparatus which will be described in more detail below, in particular relating to the selection of different blocks of data to which the DCT operation is applied. These will be discussed with reference to Figures 7-12 below.
The output of the transform unit 340, which is to say, a set of DCT coefficients for each transformed block of image data, is supplied to a quantiser 350. Various quantisation techniques are known in the field of video data compression, ranging from a simple multiplication by a quantisation scaling factor through to the application of complicated lookup tables under the control of a quantisation parameter. The general aim is twofold. Firstly, the quantisation process reduces the number of possible values of the transformed data. Secondly, the quantisation process can increase the likelihood that values of the transformed data are zero. Both of these can make the entropy encoding process, to be described below, work more efficiently in generating small amounts of compressed video data.
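The simplest form of quantisation mentioned above can be sketched as follows. This is a minimal illustration only, not the apparatus's actual quantisation scheme; the function name and the use of plain truncating division (rather than a lookup table under a quantisation parameter) are assumptions for the sake of the example. It shows the two effects described: fewer possible values, and more zeros.

```python
def quantise(coeffs, qstep):
    """Minimal sketch of quantisation by a simple scaling factor.

    Dividing by qstep (and truncating toward zero) both reduces the
    number of possible coefficient values and pushes small transformed
    values to zero, which helps the entropy encoding stage.
    """
    return [int(c / qstep) for c in coeffs]
```

For example, `quantise([100, 7, -3, 45], 16)` gives `[6, 0, 0, 2]`: the two small coefficients have become zero.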
A data scanning process is applied by a scan unit 360. The purpose of the scanning process is to reorder the quantised transformed data so as to gather as many as possible of the non-zero quantised transformed coefficients together, and of course therefore to gather as many as possible of the zero-valued coefficients together. These features can allow so-called run-length coding or similar techniques to be applied efficiently. So, the scanning process involves selecting coefficients from the quantised transformed data, and in particular from a block of coefficients corresponding to a block of image data which has been transformed and quantised, according to a "scanning order" so that (a) all of the coefficients are selected once as part of the scan, and (b) the scan tends to provide the desired reordering. Techniques for selecting a scanning order will be described below. One example scanning order which can tend to give useful results is a so-called zigzag scanning order.
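The zigzag scanning order mentioned above can be generated as in the following sketch, which visits the anti-diagonals of an N x N coefficient block in alternating directions starting from the DC (top-left) position. The function name is illustrative; the apparatus's actual scan order selection is described later.

```python
def zigzag_order(n):
    """Return the (row, col) positions of an n x n block in zigzag scan
    order, starting from the DC coefficient at the top-left."""
    order = []
    for s in range(2 * n - 1):  # each anti-diagonal satisfies row + col == s
        diag = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        if s % 2 == 0:
            diag.reverse()  # even anti-diagonals run bottom-left to top-right
        order.extend(diag)
    return order

def zigzag_scan(block):
    """Reorder a square block (list of rows) into a 1-D coefficient list."""
    n = len(block)
    return [block[r][c] for r, c in zigzag_order(n)]
```

For an 8x8 block this visits all 64 positions exactly once, tending to group the low-frequency (often non-zero) coefficients at the start and the high-frequency (often zero) coefficients at the end.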
The scanned coefficients are then passed to an entropy encoder (EE) 370. Again, various types of entropy encoding may be used. Two examples which will be described below are variants of the so-called CABAC (Context Adaptive Binary Arithmetic Coding) system and variants of the so-called CAVLC (Context Adaptive Variable-Length Coding) system. In general terms, CABAC is considered to provide a better efficiency, and in some studies has been shown to provide a 10-20% reduction in the quantity of encoded output data for a comparable image quality compared to CAVLC. However, CAVLC is considered to represent a much lower level of complexity (in terms of its implementation) than CABAC. The CABAC technique will be discussed with reference to Figure 17 below, and the CAVLC technique will be discussed with reference to Figures 18 and 19 below.
Note that the scanning process and the entropy encoding process are shown as separate processes, but in fact can be combined or treated together. That is to say, the reading of data into the entropy encoder can take place in the scan order. Corresponding considerations apply to the respective inverse processes to be described below.
The output of the entropy encoder 370, along with additional data (mentioned above and/or discussed below), for example defining the manner in which the predictor 320 generated the predicted image, provides a compressed output video signal 380.
However, a return path is also provided because the operation of the predictor 320 itself depends upon a decompressed version of the compressed output data.
The reason for this feature is as follows. At the appropriate stage in the decompression process (to be described below) a decompressed version of the residual data is generated. This decompressed residual data has to be added to a predicted image to generate an output image (because the original residual data was the difference between the input image and a predicted image). In order that this process is comparable, as between the compression side and the decompression side, the predicted images generated by the predictor 320 should be the same during the compression process and during the decompression process. Of course, at decompression, the apparatus does not have access to the original input images, but only to the decompressed images. Therefore, at compression, the predictor 320 bases its prediction (at least, for inter-image encoding) on decompressed versions of the compressed images.
The entropy encoding process carried out by the entropy encoder 370 is considered to be "lossless", which is to say that it can be reversed to arrive at exactly the same data which was first supplied to the entropy encoder 370. So, the return path can be implemented before the entropy encoding stage. Indeed, the scanning process carried out by the scan unit 360 is also considered lossless, but in the present embodiment the return path 390 is from the output of the quantiser 350 to the input of a complementary inverse quantiser 420.
In general terms, an entropy decoder 410, the reverse scan unit 400, an inverse quantiser 420 and an inverse transform unit 430 provide the respective inverse functions of the entropy encoder 370, the scan unit 360, the quantiser 350 and the transform unit 340. For now, the discussion will continue through the compression process; the process to decompress an input compressed video signal will be discussed separately below. In the compression process, the quantised coefficients are passed by the return path 390 from the quantiser 350 to the inverse quantiser 420. An inverse quantisation and inverse transformation process are then carried out by the units 420, 430 to generate a compressed-decompressed residual image signal 440.
The image signal 440 is added, at an adder 450, to the output of the predictor 320 to generate a reconstructed output image 460. This forms one input to the image predictor 320, as will be described below.
Turning now to the process applied to a received compressed video signal 470, the signal is supplied to the entropy decoder 410 and from there to the chain of the reverse scan unit 400, the inverse quantiser 420 and the inverse transform unit 430 before being added to the output of the image predictor 320 by the adder 450. In straightforward terms, the output 460 of the adder 450 forms the output decompressed video signal 480. In practice, further filtering may be applied before the signal is output.
Figure 6 schematically illustrates the generation of predicted images, and in particular the operation of the image predictor 320.
There are two basic modes of prediction: so-called intra-image prediction and so-called inter-image, or motion-compensated (MC), prediction.
Intra-image prediction bases a prediction of the content of a block of the image on data from within the same image. This corresponds to so-called I-frame encoding in other video compression techniques. In contrast to I-frame encoding, where the whole image is intra-encoded, in the present embodiments the choice between intra- and inter- encoding can be made on a block-by-block basis, though in other embodiments of the invention the choice is still made on an image-by-image basis.
Motion-compensated prediction makes use of motion information which attempts to define the source, in another adjacent or nearby image, of image detail to be encoded in the current image. Accordingly, in an ideal example, the contents of a block of image data in the predicted image can be encoded very simply as a reference (a motion vector) pointing to a corresponding block at the same or a slightly different position in an adjacent image.
Returning to Figure 6, two image prediction arrangements (corresponding to intra- and inter-image prediction) are shown, the results of which are selected by a multiplexer 500 under the control of a mode signal 510 so as to provide blocks of the predicted image for supply to the adders 310 and 450. The choice is made in dependence upon which selection gives the lowest "energy" (which, as discussed above, may be considered as information content requiring encoding), and the choice is signalled to the decoder within the encoded output datastream. Image energy, in this context, can be detected, for example, by carrying out a trial subtraction of an area of the two versions of the predicted image from the input image, squaring each pixel value of the difference image, summing the squared values, and identifying which of the two versions gives rise to the lower mean squared value of the difference image relating to that image area.
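The trial-subtraction energy measure just described can be sketched as follows. The function names are illustrative, and the blocks are represented as flat lists of samples for simplicity; for equal-sized blocks, comparing sums of squared differences gives the same decision as comparing mean squared values.

```python
def residual_energy(block, prediction):
    """Energy left after a trial subtraction: subtract the candidate
    prediction from the input block, square each difference, and sum."""
    return sum((b - p) ** 2 for b, p in zip(block, prediction))

def choose_mode(block, intra_pred, inter_pred):
    """Select whichever candidate prediction leaves less energy
    (less information to encode) in the residual."""
    if residual_energy(block, intra_pred) <= residual_energy(block, inter_pred):
        return "intra"
    return "inter"
```

So a near-perfect intra prediction would be chosen over a poor inter prediction, and vice versa; the selected mode is then conveyed in the output datastream.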
The actual prediction, in the intra-encoding system, is made on the basis of image blocks received as part of the signal 460, which is to say, the prediction is based upon encoded-decoded image blocks in order that exactly the same prediction can be made at a decompression apparatus. However, data can be derived from the input video signal 300 by an intra-mode selector 520 to control the operation of the intra-image predictor 530.
For inter-image prediction, a motion compensated (MC) predictor 540 uses motion information such as motion vectors derived by a motion estimator 550 from the input video signal 300. Those motion vectors are applied to a processed version of the reconstructed image 460 by the motion compensated predictor 540 to generate blocks of the inter-image prediction.
The processing applied to the signal 460 will now be described. Firstly, the signal is filtered by a filter unit 560. This involves applying a "deblocking" filter to remove or at least tend to reduce the effects of the block-based processing carried out by the transform unit 340 and subsequent operations. Also, an adaptive loop filter is applied using coefficients derived by processing the reconstructed signal 460 and the input video signal 300. The adaptive loop filter is a type of filter which, using known techniques, applies adaptive filter coefficients to the data to be filtered. That is to say, the filter coefficients can vary in dependence upon various factors. Data defining which filter coefficients to use is included as part of the encoded output datastream.
The filtered output from the filter unit 560 in fact forms the output video signal 480. It is also buffered in one or more image stores 570; the storage of successive images is a requirement of motion compensated prediction processing, and in particular the generation of motion vectors. To save on storage requirements, the stored images in the image stores 570 may be held in a compressed form and then decompressed for use in generating motion vectors. For this particular purpose, any known compression / decompression system may be used. The stored images are passed to an interpolation filter 580 which generates a higher resolution version of the stored images; in this example, intermediate samples (sub-samples) are generated such that the resolution of the interpolated image output by the interpolation filter 580 is 8 times (in each dimension) that of the images stored in the image stores 570. The interpolated images are passed as an input to the motion estimator 550 and also to the motion compensated predictor 540.
In embodiments of the invention, a further optional stage is provided, which is to multiply the data values of the input video signal by a factor of four using a multiplier 600 (effectively just shifting the data values left by two bits), and to apply a corresponding divide operation (shift right by two bits) at the output of the apparatus using a divider or right-shifter 610. So, the shifting left and shifting right changes the data purely for the internal operation of the apparatus. This measure can provide for higher calculation accuracy within the apparatus, as the effect of any data rounding errors is reduced.
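The optional scaling stage just described amounts to a pair of bit shifts, as in this minimal sketch (function names are illustrative):

```python
def to_internal(sample):
    """Multiply an input sample by four on the way in: a left shift
    by two bits, giving two extra fraction bits of internal precision."""
    return sample << 2

def to_output(value):
    """Corresponding divide by four on the way out: a right shift
    by two bits, restoring the original scale."""
    return value >> 2
```

The round trip is exact (`to_output(to_internal(x)) == x`); the benefit arises in between, where intermediate calculations carry two extra bits so that rounding errors are proportionally smaller.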
The way in which an image is partitioned for compression processing will now be described. At a basic level, an image to be compressed is considered as an array of blocks of samples. For the purposes of the present discussion, the largest such block under consideration is a so-called largest coding unit (LCU) 700 (Figure 7), which represents a square array of 64 x 64 samples. Here, the discussion relates to luminance samples. Depending on the chrominance mode, such as 4:4:4, 4:2:2, 4:2:0 or 4:4:4:4 (GBR plus key data), there will be differing numbers of corresponding chrominance samples corresponding to the luminance block.
Three basic types of blocks will be described: coding units, prediction units and transform units. In general terms, the recursive subdividing of the LCUs allows an input picture to be partitioned in such a way that both the block sizes and the block coding parameters (such as prediction or residual coding modes) can be set according to the specific characteristics of the image to be encoded.
The LCU may be subdivided into so-called coding units (CU). Coding units are always square and have a size between 8x8 samples and the full size of the LCU 700. The coding units can be arranged as a kind of tree structure, so that a first subdivision may take place as shown in Figure 8, giving coding units 710 of 32x32 samples; subsequent subdivisions may then take place on a selective basis so as to give some coding units 720 of 16x16 samples (Figure 9) and potentially some coding units 730 of 8x8 samples (Figure 10). Overall, this process can provide a content-adapting coding tree structure of CU blocks, each of which may be as large as the LCU or as small as 8x8 samples. Encoding of the output video data takes place on the basis of the coding unit structure.
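The tree-structured subdivision of an LCU into coding units can be sketched as a simple recursive quadtree, as follows. The `should_split` predicate is a hypothetical stand-in for the encoder's content-adaptive decision (in practice a rate/distortion choice); the function and parameter names are assumptions for the example.

```python
def split_cu(x, y, size, should_split, min_size=8):
    """Recursively partition a square coding unit at (x, y).

    Each unit either stays whole or splits into four half-size units,
    down to a minimum of min_size x min_size (8x8 here, as in the text).
    Returns a list of (x, y, size) leaf coding units.
    """
    if size > min_size and should_split(x, y, size):
        half = size // 2
        leaves = []
        for dy in (0, half):        # the four quadrants of the square
            for dx in (0, half):
                leaves.extend(split_cu(x + dx, y + dy, half,
                                       should_split, min_size))
        return leaves
    return [(x, y, size)]
```

Splitting a 64x64 LCU once gives four 32x32 coding units (as in Figure 8); splitting selectively further yields the mixture of 32x32, 16x16 and 8x8 units described above.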
Figure 11 schematically illustrates an array of prediction units (PU). A prediction unit is a basic unit for carrying information relating to the image prediction processes, or in other words the additional data added to the entropy encoded residual image data to form the output video signal from the apparatus of Figure 5. In general, prediction units are not restricted to being square in shape. They can take other shapes, in particular rectangular shapes forming half of one of the square coding units, as long as the coding unit is greater than the minimum (8x8) size. The aim is to allow the boundary of adjacent prediction units to match (as closely as possible) the boundary of real objects in the picture, so that different prediction parameters can be applied to different real objects. Each coding unit may contain one or more prediction units.
Figure 12 schematically illustrates an array of transform units (TU). A transform unit is a basic unit of the transform and quantisation process. Transform units are always square and can take a size from 4x4 up to 32x32 samples. Each coding unit can contain one or more transform units. The acronym SDIP-P in Figure 12 signifies a so-called short distance intra- prediction partition. In this arrangement only one dimensional transforms are used, so a 4xN block is passed through N transforms with input data to the transforms being based upon the previously decoded neighbouring blocks and the previously decoded neighbouring lines within the current SDIP-P.
The intra-prediction process will now be discussed. In general terms, intra-prediction involves generating a prediction of a current block (a prediction unit) of samples from previously-encoded and decoded samples in the same image. Figure 13 schematically illustrates a partially encoded image 800. Here, the image is being encoded from top-left to bottom-right on an LCU basis. An example LCU encoded partway through the handling of the whole image is shown as a block 810. A shaded region 820 above and to the left of the block 810 has already been encoded. The intra-image prediction of the contents of the block 810 can make use of any of the shaded area 820 but cannot make use of the unshaded area below that.
The block 810 represents an LCU; as discussed above, for the purposes of intra-image prediction processing, this may be subdivided into a set of smaller prediction units. An example of a prediction unit 830 is shown within the LCU 810.
The intra-image prediction takes into account samples above and/or to the left of the current LCU 810. Source samples, from which the required samples are predicted, may be located at different positions or directions relative to a current prediction unit within the LCU 810. To decide which direction is appropriate for a current prediction unit, the results of a trial prediction based upon each candidate direction are compared in order to see which candidate direction gives an outcome which is closest to the corresponding block of the input image. The candidate direction giving the closest outcome is selected as the prediction direction for that prediction unit.
The picture may also be encoded on a "slice" basis. In one example, a slice is a horizontally adjacent group of LCUs. But in more general terms, the entire residual image could form a slice, or a slice could be a single LCU, or a slice could be a row of LCUs, and so on. Slices can give some resilience to errors as they are encoded as independent units. The encoder and decoder states are completely reset at a slice boundary. For example, intra-prediction is not carried out across slice boundaries; slice boundaries are treated as image boundaries for this purpose.
Figure 14 schematically illustrates a set of possible (candidate) prediction directions. The full set of 34 candidate directions is available to a prediction unit of 8x8, 16x16 or 32x32 samples. The special cases of prediction unit sizes of 4x4 and 64x64 samples have a reduced set of candidate directions available to them (17 candidate directions and 5 candidate directions respectively). The directions are determined by horizontal and vertical displacement relative to a current block position, but are encoded as prediction "modes", a set of which is shown in Figure 15. Note that the so-called DC mode represents a simple arithmetic mean of the surrounding upper and left-hand samples. Figure 16 schematically illustrates a zigzag scan, being a scan pattern which may be applied by the scan unit 360. In Figure 16, the pattern is shown for an example block of 8x8 DCT coefficients, with the DC coefficient being positioned at the top left position 840 of the block, and increasing horizontal and vertical spatial frequencies being represented by coefficients at increasing distances downwards and to the right of the top-left position 840.
Note that in some embodiments, the coefficients may be scanned in a reverse order (bottom right to top left using the ordering notation of Figure 16). Also it should be noted that in some embodiments, the scan may pass from left to right across a few (for example between one and three) uppermost horizontal rows, before carrying out a zig-zag of the remaining coefficients.
Figure 17 schematically illustrates the operation of a CABAC entropy encoder.
The CABAC encoder operates in respect of binary data, that is to say, data represented by only the two symbols 0 and 1. The encoder makes use of a so-called context modelling process which selects a "context" or probability model for subsequent data on the basis of previously encoded data. The selection of the context is carried out in a deterministic way so that the same determination, on the basis of previously decoded data, can be performed at the decoder without the need for further data (specifying the context) to be added to the encoded datastream passed to the decoder.
Referring to Figure 17, input data to be encoded may be passed to a binary converter 900 if it is not already in a binary form; if the data is already in binary form, the converter 900 is bypassed (by a schematic switch 910). In the present embodiments, conversion to a binary form is actually carried out by expressing the quantised DCT coefficient data as a series of binary "maps", which will be described further below.
The binary data may then be handled by one of two processing paths, a "regular" and a "bypass" path (which are shown schematically as separate paths but which, in embodiments of the invention discussed below, could in fact be implemented by the same processing stages, just using slightly different parameters). The bypass path employs a so-called bypass coder 920 which does not necessarily make use of context modelling in the same form as the regular path. In some examples of CABAC coding, this bypass path can be selected if there is a need for particularly rapid processing of a batch of data, but in the present embodiments two features of so-called "bypass" data are noted: firstly, the bypass data is handled by the CABAC encoder (950, 960), just using a fixed context model representing a 50% probability; and secondly, the bypass data relates to certain categories of data, one particular example being coefficient sign data. Otherwise, the regular path is selected by schematic switches 930, 940. This involves the data being processed by a context modeller 950 followed by a coding engine 960.
The entropy encoder shown in Figure 17 encodes a block of data (that is, for example, data corresponding to a block of coefficients relating to a block of the residual image) as a single value if the block is formed entirely of zero-valued data. For each block that does not fall into this category, that is to say a block that contains at least some non-zero data, a "significance map" is prepared. The significance map indicates whether, for each position in a block of data to be encoded, the corresponding coefficient in the block is non-zero. The significance map data, being in binary form, is itself CABAC encoded. The use of the significance map assists with compression because no data needs to be encoded for a coefficient with a magnitude that the significance map indicates to be zero. Also, the significance map can include a special code to indicate the final non-zero coefficient in the block, so that all of the final high frequency / trailing zero coefficients can be omitted from the encoding. The significance map is followed, in the encoded bitstream, by data defining the values of the non-zero coefficients specified by the significance map.
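The significance map construction just described can be sketched as follows. The function name is illustrative, and the "special code" for the last non-zero coefficient is represented here simply by truncating the map at that position, so the trailing zeros never need encoding.

```python
def significance_map(scanned_coeffs):
    """Build a significance map for a scanned coefficient list.

    Returns one binary flag per position, up to and including the last
    non-zero coefficient; an all-zero block returns an empty map (such a
    block is signalled as a single value, as described in the text).
    """
    last = max((i for i, c in enumerate(scanned_coeffs) if c != 0),
               default=-1)
    if last < 0:
        return []  # all-zero block
    return [1 if c != 0 else 0 for c in scanned_coeffs[:last + 1]]
```

For instance, the scanned block `[5, 0, 1, 0, 0, 0]` yields the map `[1, 0, 1]`: the three trailing zero coefficients are omitted entirely.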
Further levels of map data are also prepared and are CABAC encoded. An example is a map which defines, as a binary value (1 = yes, 0 = no) whether the coefficient data at a map position which the significance map has indicated to be "non-zero" actually has the value of "one". Another map specifies whether the coefficient data at a map position which the significance map has indicated to be "non-zero" actually has the value of "two". A further map indicates, for those map positions where the significance map has indicated that the coefficient data is "non-zero", whether the data has a value of "greater than two". Another map indicates, again for data identified as "non-zero", the sign of the data value (using a predetermined binary notation such as 1 for +, 0 for -, or of course the other way around).
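The further levels of map data described above can be sketched like this. The map names (`is_one`, `is_two`, `gt_two`, `sign`) are hypothetical labels introduced for the example, and the sign convention (1 for +, 0 for -) is one of the two alternatives the text permits.

```python
def level_maps(scanned_coeffs):
    """Derive the further binary maps for the non-zero coefficients.

    Each map has one entry per coefficient flagged non-zero by the
    significance map, in scan order.
    """
    nz = [c for c in scanned_coeffs if c != 0]
    return {
        "is_one": [1 if abs(c) == 1 else 0 for c in nz],
        "is_two": [1 if abs(c) == 2 else 0 for c in nz],
        "gt_two": [1 if abs(c) > 2 else 0 for c in nz],
        "sign":   [1 if c > 0 else 0 for c in nz],  # 1 for +, 0 for -
    }
```

For the non-zero coefficients `[3, -1, 2]`, the maps are `is_one = [0, 1, 0]`, `is_two = [0, 0, 1]`, `gt_two = [1, 0, 0]` and `sign = [1, 0, 1]`; only the coefficient flagged `gt_two` needs an escape-coded magnitude.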
In embodiments of the invention, the significance map and other maps are generated from the quantised DCT coefficients, for example by the scan unit 360, and are subjected to a zigzag scanning process (or a scanning process selected from zigzag, horizontal raster and vertical raster scanning according to the intra-prediction mode) before being subjected to CABAC encoding.
In general terms, CABAC encoding involves predicting a context, or a probability model, for a next bit to be encoded, based upon other previously encoded data. If the next bit is the same as the bit identified as "most likely" by the probability model, then the encoding of the information that "the next bit agrees with the probability model" can be encoded with great efficiency. It is less efficient to encode that "the next bit does not agree with the probability model", so the derivation of the context data is important to good operation of the encoder. The term "adaptive" means that the context or probability models are adapted, or varied during encoding, in an attempt to provide a good match to the (as yet uncoded) next data.
Using a simple analogy, in the written English language, the letter "U" is relatively uncommon. But in a letter position immediately after the letter "Q", it is very common indeed. So, a probability model might set the probability of a "U" as a very low value, but if the current letter is a "Q", the probability model for a "U" as the next letter could be set to a very high probability value.
CABAC encoding is used, in the present arrangements, for at least the significance map and the maps indicating whether the non-zero values are one or two. Bypass processing, which in these embodiments is identical to CABAC encoding but for the fact that the probability model is fixed at an equal (0.5:0.5) probability distribution of 1s and 0s, is used for at least the sign data and the map indicating whether a value is >2. For those data positions identified as >2, a separate so-called escape data encoding can be used to encode the actual value of the data. This may include a Golomb-Rice encoding technique.
The CABAC context modelling and encoding process is described in more detail in WD4: Working Draft 4 of High-Efficiency Video Coding, JCTVC-F803_d5, Draft ISO/IEC 23008-2:201x(E), 2011-10-28.
Figure 18 schematically illustrates a CAVLC entropy encoding process.
As with CABAC discussed above, the entropy encoding process shown in Figure 18 follows the operation of the scan unit 360. It has been noted that the non-zero coefficients in the transformed and scanned residual data are often sequences of ±1. The CAVLC coder indicates the number of high-frequency ±1 coefficients by a variable referred to as "trailing 1s" (T1s). For these non-zero coefficients, the coding efficiency is improved by using different (context-adaptive) variable length coding tables.
Referring to Figure 18, a first step 1000 generates values "coeff_token" to encode both the total number of non-zero coefficients and the number of trailing ones. At a step 1010, the sign bit of each trailing one is encoded in a reverse scanning order. Each remaining non-zero coefficient is encoded as a "level" variable at a step 1020, thus defining the sign and magnitude of those coefficients. At a step 1030 a variable total_zeros is used to code the total number of zeros preceding the last nonzero coefficient. Finally, at a step 1040, a variable run_before is used to code the number of successive zeros preceding each non-zero coefficient in a reverse scanning order. The collected output of the variables defined above forms the encoded data.
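The counting behind the variables named in the steps above can be sketched as follows. This covers only the derivation of the counts, not the table-based variable-length coding of them; the function name is illustrative, and the cap of three trailing ones follows the usual H.264-style CAVLC convention, stated here as an assumption.

```python
def cavlc_symbols(scanned):
    """Derive CAVLC counting variables from a scanned (low to high
    frequency) coefficient list.

    Returns (total_coeffs, trailing_ones, total_zeros):
      total_coeffs  - number of non-zero coefficients (for coeff_token)
      trailing_ones - run of +/-1 values at the high-frequency end,
                      capped at 3 (the T1s variable)
      total_zeros   - zeros preceding the last non-zero coefficient
    """
    nz_positions = [i for i, c in enumerate(scanned) if c != 0]
    total_coeffs = len(nz_positions)
    trailing_ones = 0
    for i in reversed(nz_positions):
        if abs(scanned[i]) == 1 and trailing_ones < 3:
            trailing_ones += 1
        else:
            break
    total_zeros = (nz_positions[-1] + 1 - total_coeffs) if nz_positions else 0
    return total_coeffs, trailing_ones, total_zeros
```

For the scanned block `[0, 3, 0, 1, -1, 0, 0, 0]` this gives three non-zero coefficients, two trailing ones, and two zeros interleaved before the last non-zero coefficient.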
As mentioned above, a default scanning order for the scanning operation carried out by the scan unit 360 is a zigzag scan, as illustrated schematically in Figure 16. In other arrangements, for blocks where intra-image encoding is used, a choice may be made between zigzag scanning, a horizontal raster scan and a vertical raster scan depending on the image prediction direction (Figure 15) and the transform unit (TU) size.
The CABAC process, discussed above, will now be described in a little more detail. CABAC, at least as far as it is used in the proposed HEVC system, involves deriving a "context" or probability model in respect of a next bit to be encoded. The context, defined by a context variable or CV, then influences how the bit is encoded. In general terms, if the next bit is the same as the value which the CV defines as the expected more probable value, then there are advantages in terms of reducing the number of output bits needed to define that data bit.
The encoding process involves mapping a bit to be encoded onto a position within a range of code values. The range of code values is shown schematically in Figure 19A as a series of adjacent integer numbers extending from a lower limit, m_low, to an upper limit, m_high. The difference between these two limits is m_range, where m_range = m_high - m_low. By various techniques to be described below, in a basic CABAC system m_range is constrained to lie between 128 and 256. m_low can be any value. It can start at (say) zero, but can vary as part of the encoding process to be described.
The range of code values, m_range, is divided into two sub-ranges, by a boundary 1100 defined with respect to the context variable as:
boundary = m_low + (CV * m_range)
So, the context variable divides the total range of a set of code values into two complementary sub-ranges or sub-portions, one sub-range being associated with a value (of a next data bit) of zero, and the other being associated with a value (of the next data bit) of one. The division of the range represents the probabilities assumed by the generation of the CV of the two bit values for the next bit to be encoded. So, if the sub-range associated with the value zero is less than half of the total range, this signifies that a zero is considered less probable, as the next symbol, than a one.
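The division described above can be sketched in a few lines of Python (an illustrative fragment, not the reference implementation; here the context variable is taken as a fractional probability for clarity, whereas the later sections express it as an 8-bit integer state):

```python
def split_range(m_low, m_range, cv):
    """Split the current set of code values into two complementary sub-ranges.

    cv is the probability assigned to the data-bit value associated with the
    lower sub-range, as a fraction of the total range.
    """
    boundary = m_low + int(cv * m_range)          # the boundary 1100 of Figure 19A
    zero_sub_range = (m_low, boundary)            # associated with bit value 0
    one_sub_range = (boundary, m_low + m_range)   # associated with bit value 1
    return zero_sub_range, one_sub_range

# Example: m_low = 0, m_range = 256, CV = 0.25, so a zero is considered the
# less probable next symbol and receives the smaller sub-range.
zero_sr, one_sr = split_range(0, 256, 0.25)
```

Here the zero sub-range is (0, 64) and the one sub-range is (64, 256), reflecting the 25%:75% probability split.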
Various different possibilities exist for defining which way round the sub-ranges apply to the possible data bit values. In one example, a lower region of the range (that is, from m_low to the boundary) is by convention defined as being associated with the data bit value of zero.
The encoder and decoder maintain a record of which data bit value is the less probable (often termed the "least probable symbol" or LPS). The CV refers to the LPS, so the CV always represents a value of between 0 and 0.5.
A next bit (a current input bit) is now mapped or assigned to a code value within an appropriate sub-range within the range m_range, as divided by the boundary. This is carried out deterministically at both the encoder and the decoder using a technique to be described in more detail below. If the next bit is a 0, a particular code value, representing a position within the sub-range from m_low to the boundary, is assigned to that bit. If the next bit is a 1, a particular code value in the sub-range from the boundary 1100 to m_high is assigned to that bit.
The lower limit m_low and the range m_range are then redefined so as to modify the set of code values in dependence upon the assigned code and the size of the selected sub-range. If the just-encoded bit is a zero, then m_low is unchanged but m_range is redefined to equal m_range * CV. If the just-encoded bit is a one then m_low is moved to the boundary position (m_low + (CV * m_range)) and m_range is redefined as the difference between the boundary and m_high (that is, (1-CV) * m_range). These alternatives are illustrated schematically in Figures 19B and 19C.
In Figure 19B, the data bit was a one and so m_low was moved up to the previous boundary position. This provides a revised set of code values for use in a next bit encoding sequence. Note that in some embodiments, the value of CV is changed for the next bit encoding based, at least in part, on the value of the just-encoded bit. This is why the technique refers to "adaptive" contexts. The revised value of CV is used to generate a new boundary 1100'.
In Figure 19C, a value of zero was encoded, and so m_low remained unchanged but m_high was moved to the previous boundary position. The value m_range is redefined as the new value of m_high - m_low. In this example, this has resulted in m_range falling below its minimum allowable value (such as 128). When this outcome is detected, the value m_range is doubled, that is, shifted left by one bit, as many times as are necessary to restore m_range to the required range of 128 to 256. In other words, the set of code values is successively increased in size until it has at least a predetermined minimum size (128 in this case). An example of this is illustrated in Figure 19D, which represents the range of Figure 19C, doubled so as to comply with the required constraints. A new boundary 1100" is derived from the next value of CV and the revised m_range.
Whenever the range has to be multiplied by two in this way, a process often called "renormalizing", an output bit is generated (as an output encoded data bit), one such bit for each renormalizing stage.
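The renormalizing step described above can be modelled with a short Python sketch (a simplified illustration assuming the basic constraint that m_range lies between 128 and 256; one output bit is produced per doubling):

```python
def renormalize(m_range, minimum=128):
    """Double m_range until it reaches the minimum size, counting doublings.

    Each doubling (left shift by one bit) generates one output encoded bit
    in the CABAC engine.
    """
    output_bits = 0
    while m_range < minimum:
        m_range <<= 1        # shift left by one bit (double the range)
        output_bits += 1
    return m_range, output_bits
```

For example, a range of 40 must be doubled twice, to 160, generating two output bits; a range already at or above 128 is left unchanged and generates none.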
In this way, the interval m_range is successively modified and renormalized in dependence upon the adaptation of the CV values (which can be reproduced at the decoder) and the encoded bit stream. After a series of bits has been encoded, the resulting interval and the number of renormalizing stages uniquely define the encoded bitstream. A decoder which knows such a final interval would in principle be able to reconstruct the encoded data. However, the underlying mathematics demonstrate that it is not actually necessary to define the interval to the decoder, but just to define one position within that interval. This is the purpose of the assigned code value, which is maintained at the encoder and passed to the decoder at the termination of encoding the data.
The context variable CV is defined as having 64 possible states which successively indicate different probabilities from a lower limit (such as 1%) at CV = 63 through to a 50% probability at CV = 0.
CV is changed from one bit to the next according to various known factors, which may be different depending on the block size of data to be encoded. In some instances, the state of neighbouring and previous image blocks may be taken into account.
The assigned code value is generated from a table which defines, for each possible value of CV and each possible value of bits 6 and 7 of m_range (noting that bit 8 of m_range is always 1 because of the constraint on the size of m_range), a position or group of positions at which a newly encoded bit should be allocated a code value in the relevant sub-range.
Figure 20 schematically illustrates a CABAC encoder using the techniques described above.
The CV is initiated (in the case of the first CV) or modified (in the case of subsequent CVs) by a CV derivation unit 1120 (acting as a context variable modifying unit). A code generator 1130 divides the current m_range according to CV and generates an assigned data code within the appropriate sub-range, using the table mentioned above (and thereby acts as a selector and an assigning unit). A range reset unit 1140 acting as a code value modifying unit resets m_range to that of the selected sub-range. If necessary, a normaliser 1150 (acting as a detector) renormalises the m_range, outputting an output bit for each such renormalisation operation. As mentioned, at the end of the process, the assigned code value is also output.
In a decoder, shown schematically in Figure 21, the CV is initiated (in the case of the first CV) or modified (in the case of subsequent CVs) by a CV derivation unit 1220 which operates in the same way as the unit 1120 in the encoder (and also acts as a context variable modifying unit). A code application unit 1230 divides the current m_range according to CV and detects in which sub-range the data code lies (and thereby acts as a selector and a detector). A range reset unit 1240 acting as a code value modifying unit resets m_range to that of the selected sub-range. If necessary, a normaliser 1250 (acting as a context value modifying unit) renormalises the m_range in response to a received data bit.
Embodiments of the invention concern a technique to remove the look-up tables within the CABAC engine, replacing them with arithmetic operations. This could potentially reduce hardware size and complexity even if the context variables used are all increased from 7-bit to 8-bit.
In an example CABAC encoder, context variables (CVs) are used by the CABAC engine to encode binary values. CVs, in effect, represent the expected probability of the binary value, v, that they are about to encode (or decode), with the expectation held within the 7-bit state of the CV. The state is split into a 6-bit code value (valueCV-HM4) and a 1-bit Most Probable Symbol (MPS) field:
if P(v=1)>50%, the MPS=1;
if P(v=0)>50%, the MPS=0;
if P(1)=P(0)=50%, the MPS is either 1 or 0, depending on previous state transitions.
It is important to note that the state of the CV does not directly represent the probability/expectation of the value v; rather, it is used as an index to a table that determines the split between the Least Probable Symbol (LPS) and the Most Probable Symbol (MPS).
After encoding (or decoding) a binary value, the state is updated to reflect the change in expectation. This is achieved in some systems using tables; if the MPS is encoded/decoded, the table m_aucNextStateMPS is indexed with the current state to give the next state, but if the LPS is encoded/decoded, the table m_aucNextStateLPS is used instead. These tables each have 128 elements, each of 7 bits (in hardware) or 8 bits (in software).
The CVs are not transmitted in the bit-stream between encoder and decoder: the encoder and decoder initialize the CVs to the same values, and the CVs are updated in the same manner after each has been used to encode or decode a value.
CABAC's coding engine uses an internal variable called m_range, a 9-bit number always kept within the range 256-511. When the number falls below 256, it is 'normalised' (shifted left). The range is an indication of the current fractional scale that is available for the next symbol; i.e., the value m_range should be split into two parts representing the MPS and LPS, with the split made according to the probabilities.
When encoding/decoding, the split-point is calculated using a 2D table which gives the space allocated to the LPS (uiLPS). The current 6-bit state of a CV is used to access a row in the table; bits {7, 6} of m_range are used to indicate the column (bit 8 is always 1). The MPS takes the split from 0 to "m_range - uiLPS"; the LPS takes the split from "m_range - uiLPS" to m_range.
The use of three tables may not be ideal for hardware applications, especially as "speculation" may be required for high throughput (which may lead to multiple instances of the tables). Instead a means to update the context variables and produce the range split with arithmetic operations may be useful. This proposal leads to such a design, where the context variables will contain the actual probability.
Algorithm
This technique:
• changes the CVs from a 7-bit state to an 8-bit state,
• replaces the CV table-based update procedure with an arithmetic function,
• replaces the range-split calculation from a table-based system to another arithmetic function.
In this description, the following methods are demonstrated:
• one common method for the CV updating procedure,
• two methods for the range-split calculation.
Context Variable updating
The CV's 8-bit state, state0, directly represents the probability/expectation of the binary value being 0:
P(0) = state0/256
In other words, the context variable is expressed as an n-bit number (8 bits in this example, though other numbers of bits could be used) and the position of the boundary 1100, specified by the probability of a next bit being zero, is defined by a proportion or fraction (of m_range) dependent on the context variable. In this example, that fraction is equal to the context variable divided by 2^n.
The adaptation of the CV's state uses the mathematical function "update" on the current state, to change the CV and thereby move the boundary 1100, given the binary value v that was encoded/decoded:
nextState0 = update(state0, v)
The "update" function may be expressed in the following pseudo-code:
Function update ( state0, v) :
Let xorVal = (v==l ?255 : 0 )
Let prob v =(state0 XOR xorVal)
Let increment=Max (1, 13- (( (prob v<<3)-(prob v))»7))
Return M±n(251, prob v+increment) XOR xorVal
[notation: » n is a right shift by n bits; « n is a left shift by n bits] [notation: Min = minimum of the arguments; Max = maximum of the arguments] This function requires the use of a 4-bit, a 12-bit and an 8-bit adder.
This function has the effect of modifying the context variable, for use in respect of a next input data bit, so as to increase the proportion or fraction of the set of code values in the sub-range which was selected for the current data bit (and thereby move the boundary 1100) by the greater of:
a predetermined minimum increment (in this example, 1 ); and
an increment derived in dependence upon a predetermined fractional amount of the current proportion. In this example of 8-bit context variable values, the increment comprises a predetermined fractional amount of the current proportion (which, in the example above, equates to 7/128 times the current proportion) subtracted from a first constant value (13 in the above example).
The "return" function above specifies that the modified context variable is subject to predetermined minimum and maximum allowable values of the context variable.
Note that in embodiments of the invention the same update function is used in the encoder and the decoder.
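The pseudo-code above may be transcribed directly into Python as follows (a sketch following the pseudo-code term by term; the function and variable names are those of the description):

```python
def update(state0, v):
    """Arithmetic CV update: state0 is the 8-bit probability-of-zero state
    (P(0) = state0/256); v is the binary value just encoded/decoded."""
    xor_val = 255 if v == 1 else 0
    prob_v = state0 ^ xor_val   # probability state of the value just coded
    # ((p << 3) - p) >> 7 equals (p * 14) >> 8, the fractional term of Eq. 2
    increment = max(1, 13 - (((prob_v << 3) - prob_v) >> 7))
    return min(251, prob_v + increment) ^ xor_val
```

For example, starting from the mid-point state0 = 128 (P(0) = 50%), encoding a 0 raises the state to 134, while encoding a 1 lowers it to 121, moving the boundary towards the just-encoded value in each case.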
Derivation of the update function
The update method described above uses the state to model the expectation; therefore, the update equation has to model the updating of the expectation. The update equation was derived from the tables used by CVs in a CABAC encoder, by plotting the current state's fraction of the range for a '0' to be encoded against the fraction of the range given to the next symbol, given that the next symbol is also a '0'.
That is: calculate the fraction of the current range given to symbol '0' using a CV; update the CV; calculate the new fraction given to symbol '0'. The fraction is derived from the third column of the known table TComCABACTables::sm_aucLPSTable, that is:
TComCABACTables::sm_aucLPSTable[state][3]/48
and the adaptation uses
ContextModel::m_aucNextStateMPS[valueCV-HM4] or
ContextModel::m_aucNextStateLPS[valueCV-HM4], depending on whether '0' was the LPS or MPS for the CV state.
This yields an approximate straight line, from (0%, 5%) to (100%, 100%), as shown in Figure 22, which is approximately (0, 13) to (256, 256) for an 8-bit fixed point representation:
y = nextState0 = f(state0) = (state0 × (256 - 13)/256) + 13 = state0 + 13 - ((state0 × 13) >> 8)
Eq. 1
However, if this equation is used, then the number of steps that a CV goes through from the point where a '0' is least expected to where it is most expected is significantly smaller than in a known CABAC encoder, as illustrated below in Table 1.
Considering this, instead the following equation (also illustrated in Table 1) was used:
y = nextState0 = f(state0) = state0 + 13 - ((state0 × 14) >> 8)
Eq. 2
Table 1: CV state transitions from 0 being extremely unlikely to extremely likely

               HM4.0          Eq. 1: state0 + 13 -     Eq. 2: state0 + 13 -
                              ((state0 × 13) >> 8)     ((state0 × 14) >> 8)
  State        Fraction       State     Fraction       State     Fraction
  62 MPS=1     1.88%          4         1.56%          4         1.56%
  38 MPS=1     6.88%          17        6.64%          17        6.64%
  28 MPS=1     11.67%         30        11.72%         30        11.72%
  22 MPS=1     15.83%         42        16.41%         42        16.41%
  ...
  44 MPS=0     95.00%         253       98.83%         239       93.36%
  45 MPS=0     95.21%         254       99.22%         240       93.75%
  46 MPS=0     95.42%         255       99.61%         241       94.14%
  47 MPS=0     95.63%                                  242       94.53%
  ...
  58 MPS=0     97.50%                                  253       98.83%
  59 MPS=0     97.71%                                  254       99.22%
  60 MPS=0     97.71%                                  255       99.61%
  61 MPS=0     97.92%
  62 MPS=0     98.13%
Unfortunately this adapted equation (Eq. 2) can result in state0 not increasing after "state0 = 237", and therefore a mechanism to ensure the state is always increased needs to be included in the equation; hence the 'Max' function in the pseudo-code of the "update" function above.
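The stall, and the effect of the minimum-increment floor, can be checked numerically (a short Python illustration; eq2 below is Eq. 2 without the floor, eq2_with_floor adds the 'Max' safeguard):

```python
def eq2(state0):
    """Eq. 2 without the minimum-increment safeguard."""
    return state0 + 13 - ((state0 * 14) >> 8)

def eq2_with_floor(state0):
    """The same update with the Max(1, ...) floor used by "update"."""
    return state0 + max(1, 13 - ((state0 * 14) >> 8))
```

Without the floor the state stops advancing near the top of its range (eq2(238) returns 238, making no progress), whereas with the floor the state still moves by one step (eq2_with_floor(238) returns 239).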
The update process above describes the next probability of 0 (nextState0/256) given the current probability of 0 (state0/256) and the fact that a 0 has been encoded/decoded (v=0). By symmetry, the same process would describe the next probability of 1 (nextState1/256) given the current probability of 1 (state1/256) and the fact that a 1 has been encoded/decoded (v=1). The probability of a 1 is "1 - P(0)", and therefore "state1 = 256 - state0" and "nextState0 = 256 - nextState1".
As an approximation, it can be said that "state1 ≈ 255 - state0" and "nextState0 ≈ 255 - nextState1", where the subtractions can then be replaced with XORs.
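For an 8-bit state, subtraction from 255 is exactly a bitwise XOR with 255 (the one's complement), so the replacement costs nothing and the approximation error relative to the exact "256 - state0" is a constant one step (a quick check):

```python
# For every 8-bit value, 255 - x equals x XOR 255, and the exact complement
# 256 - x exceeds the XOR form by exactly 1, which is the approximation
# referred to in the text.
for state0 in range(256):
    exact = 256 - state0        # exact complement probability state
    approx = state0 ^ 255       # 255 - state0, implemented as an XOR
    assert approx == 255 - state0
    assert exact - approx == 1
```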
Finally, the probability is clipped so that the range will be at least '4' (the state is in the range of 4 to 251), similar to the limits placed in the HM4.0 tables.
These stages therefore give rise to the "update" function described above.
Range calculation
Multiplier-based range calculation
Calculating the amount of the range (m_range) allocated to the value "v=0", given the current state of the current CV, is achieved with a single 9-bit multiplier:
zeroRange = (m_range × state0) >> 8
The amount of the range allocated to the value v=1 is calculated using:
oneRange = m_range - zeroRange
The range is therefore divided into two sections, the first being zeroRange in length and allocated to the case where v=0, and the second being oneRange in length and allocated to the case where v=1.
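The multiplier-based calculation is a direct transcription of the two formulas above (a Python sketch; the function name is illustrative):

```python
def multiplier_split(m_range, state0):
    """Split the 9-bit m_range according to P(0) = state0/256 using a
    single multiply and a subtraction."""
    zero_range = (m_range * state0) >> 8   # portion allocated to v=0
    one_range = m_range - zero_range       # remainder, allocated to v=1
    return zero_range, one_range
```

For example, with m_range = 256 and state0 = 128 (P(0) = 50%) the range splits evenly into 128 and 128; with state0 = 64 (P(0) = 25%) on a range of 300, the v=0 section is 75 and the v=1 section is 225.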
Adder-based range calculation
Utilizing a multiplier on m_range is not ideal when the throughput of the entropy decoder is considered; it is better if m_range is required only for a minimal number of simple operations, thereby allowing multiple decodes within a single clock cycle.
In table-based CABAC encoders, the CV's state is used to index a row in the table 'sm_aucLPSTable', giving four potential next range values for the LPS. The choice of the next range value is a simple multiplexor operation utilizing two bits of the current range.
The following method pre-calculates the 4 possible next range values, and then the choice is made later utilizing the same two bits of the current range. The method can be thought of as a multiplier operation, but using a two-bit operand and rounding.
In summary, the operation is:

if state0 >= 128: lps = 1, lpFraction = 256 - state0
else:             lps = 0, lpFraction = state0
candidateLPSRanges[4] = { (lpFraction × 9) >> 3, (lpFraction × 11) >> 3,
                          (lpFraction × 13) >> 3, (lpFraction × 15) >> 3 }
lpsRange = candidateLPSRanges[m_uiRange bits 7, 6]
encode/decode in terms of LPS/MPS
where the 4 multipliers can be implemented with adders/subtractors - and only 4 adders would be required to calculate all 4 candidate entries.
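The adder-based method may be sketched in Python as follows (the bit manipulations follow the summary above; the function name is illustrative, and the selection index is formed from bits 7 and 6 of the 9-bit m_range):

```python
def adder_based_lps_range(state0, m_range):
    """Pre-calculate 4 candidate LPS ranges, then select one using bits 7
    and 6 of the current 9-bit m_range (256..511)."""
    if state0 >= 128:
        lps, lp_fraction = 1, 256 - state0   # the LPS is a '1'
    else:
        lps, lp_fraction = 0, state0         # the LPS is a '0'
    # Multipliers 9, 11, 13, 15 with >>3 correspond to the average m_range
    # values 288, 352, 416, 480 with >>8; each needs only one adder.
    candidates = [(lp_fraction * m) >> 3 for m in (9, 11, 13, 15)]
    index = (m_range >> 6) & 3               # bits 7 and 6 of m_range
    return lps, candidates[index]
```

For example, with state0 = 64 (P(0) = 25%) and m_range = 288, the LPS is '0' and its range is (64 × 9) >> 3 = 72, which matches the multiplier result (288 × 64) >> 8 = 72 exactly because 288 is the column's average range.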
Theory
The first of the four values will be utilized when the current range, m_range, is between 256 (0b100000000) and 319 (0b100111111) (inclusive), i.e. on average 288 (0b100100000), where bits 7 and 6 are the two bits used for the choice of result.
The second of the four values will be utilized when m_range is between 320 (0b101000000) and 383 (0b101111111) (inclusive), i.e. on average 352 (0b101100000).
Similarly, the third and the fourth values will be used when, on average, m_range is 416 and 480 respectively. The four candidate values are therefore calculated as the current probability (state0) multiplied by the 4 average m_range values, i.e.:
candidateZeroRange[4] = {(state0 × 288) >> 8, (state0 × 352) >> 8, (state0 × 416) >> 8, (state0 × 480) >> 8}
The 4 multipliers can be implemented as 4 adders, and since the 5 LSBs of each of the constant multiplicands are 0, the multiplicands and shifts can be reduced. Once the value of m_range is known, the zero range can be selected from the four candidates, and the process could proceed as with the multiplier-based range method.
However, as an expected m_range value is used, it is possible for the entries to exceed the actual m_range value; e.g. if "state0 = 240" (P(0) = 93.75%), then "candidateZeroRange[0] = 270". However, if m_range was a value between 256 and 269 (inclusive), then the resulting zero range would exceed the total available range. One method to alleviate the problem is to clip the resulting zeroRange value, ensuring that there is adequate oneRange. However, it is better if a least-probable-symbol mechanism is employed, as this will in effect average out the errors caused by the truncation. Therefore, if "state0 ≥ 128" (P(0) ≥ 50%), the candidate range values will be the 'one' ranges:
candidateOneRange[4] = {(state1 × 288) >> 8, (state1 × 352) >> 8, (state1 × 416) >> 8, (state1 × 480) >> 8}
where state1 = 256 - state0.
Therefore, this system is essentially the same as HM4.0, except with arithmetic updating instead of tables: an LPS is identified (a test on bit 7 of the state), and the next LPS range is selected (using two bits of m_range) from 4 candidate LPS ranges. The MPS can take the first section of the range, and the LPS can take the second section of the range.
Discussion
In a typical CABAC encoder, the CV update table (RDOQ) has 128 table entries, corresponding to each of the 128 states of a CV. However, the CVs have been extended to 256 possible states, and therefore the RDOQ table has 256 table entries, by default. It was noticed that this new table can be reduced to 128 entries, and indexed using the 7 MSBs of the state, without having any noticeable effect on the coding efficiency.
Initialization of the CVs uses a table of values (mCv and cCv) and the target QP to linearly interpolate an initial state.
Two methods have been investigated for initializing arithmetically updated CVs, those being:
1. The conversion of the CV_initial_state, by using the same stateHM4-to-probability curve as described earlier.
2. The conversion of the mCv and cCv table values, by fitting a straight line through a curve defined by the first method at QP = 22, 27, 32 and 37.
There is no significant difference in terms of coding efficiency between the two methods; however, the latter would be simpler for implementation, and is the chosen configuration for the simulated system.
It should be noted that the arithmetic values can be pre-calculated and stored in tables, if required, resulting in a similar system to the previous table-based system. Finally, although the replacement of the look-up tables (used to update the CVs and calculate the range allocations) has the potential to reduce hardware size and simplify the design, there is potentially a hardware cost in increasing the CV state from 7 to 8 bits. However, internal RAMs, which are likely to be used to store the CVs, often have widths that are multiples of 4 bits, and therefore this increase from 7 to 8 bits may be less critical.
In conclusion, an alternative context variable is proposed, which, although the number of bits is increased from 7 to 8, removes the requirement of look-up tables, replacing them with arithmetic.
Two systems are described, with both using the same 3-adder update system.
The first system utilizes a multiplier on the CABAC range, and the encoder and decoder processing times are, to within error, the same as HM4.0. Results show that this system has a coding efficiency change of 0.0%, -1.1% and -1.1% for Y, U and V respectively, giving an overall weighted average (= (6*Y + U + V)/8) of -0.3% (for sequences A-E only and A-F).
The second system utilizes 5 adders without requiring the CABAC range, after which a multiplexor selects one of four values. It is believed that this system is capable of higher throughput than the first, although there is an increase in encoder processing time of 1%. Results show that this system has a coding efficiency change of 0.0%, -0.9% and -1.0% for Y, U and V respectively, giving an overall weighted average (= (6*Y + U + V)/8) of -0.2% (for sequences A-E only and A-F).
Thus, the foregoing discussion discloses and describes merely exemplary embodiments of the present invention. As will be understood by those skilled in the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting of the scope of the invention, as well as other claims. The disclosure, including any readily discernible variants of the teachings herein, defines, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.

Claims

1. A data encoding method for encoding successive input data bits, the method comprising the steps of:
selecting one of two complementary sub-ranges of a set of code values according to whether a current input data bit is a one or a zero, the proportions of the two sub-ranges relative to the set of code values being defined by a context variable associated with that input data bit; assigning the current input data bit to a code value within the selected sub-range;
modifying the set of code values in dependence upon the assigned code value and the size of the selected sub-range;
detecting whether the set of code values is less than a predetermined minimum size and if so:
successively increasing the size of the set of code values until it has at least the predetermined minimum size; and
outputting an encoded data bit in response to each such size-increasing operation;
modifying the context variable, for use in respect of a next input data bit, so as to increase the proportion of the set of code values in the sub-range which was selected for the current data bit by the greater of:
a predetermined minimum increment; and
an increment derived in dependence upon a predetermined fractional amount of the current proportion.
2. A method according to claim 1, comprising:
defining the set of code values as a range of code values, with the context variable defining the boundary between the two sub-ranges within that range of code values as a fraction such that the extent of a predetermined one of the sub-ranges is equal in extent to the extent of the range of code values multiplied by the fraction; and
applying the increment so as to move the boundary between the two sub-ranges.
3. A method according to claim 2, in which the context variable comprises an n-bit digital value which is divided by 2^n to obtain the respective fraction.
4. A method according to claim 3, in which the increment comprises a predetermined fractional amount of the current proportion, subtracted from a first constant value.
5. A method according to claim 4, in which: the context variable is an 8-bit digital number; and
the increment is equivalent to 13 - ((7/128) x the current extent of the sub-range to be increased).
6. A method according to any one of the preceding claims, in which the modifying step modifies the context variable subject to predetermined maximum and minimum allowable values of the context variable.
7. A video coding method comprising the steps of:
generating frequency domain coefficients dependent upon respective portions of an input video signal and ordering the coefficients for encoding according to an encoding order; and
entropy encoding the ordered coefficients by applying the method of any one of the preceding claims.
8. Video data encoded by the encoding method of claim 7.
9. A data carrier storing video data according to claim 8.
10. A data decoding method for decoding successive encoded data bits, the method comprising the steps of:
detecting which one of two complementary sub-ranges a current assigned data value lies in, the proportions of the two sub-ranges relative to the set of code values being defined by a context variable associated with that input data bit;
modifying the set of code values in dependence upon the assigned code value and the size of the selected sub-range;
modifying the context variable, for use in respect of a next input data bit, so as to increase the proportion of the set of code values in the sub-range which was selected for the current data bit by the greater of:
a predetermined minimum increment; and
an increment derived in dependence upon a predetermined fractional amount of the current proportion.
11. A video decoding method comprising the step of entropy decoding encoded video data by applying the method of claim 10.
12. A data encoding apparatus for encoding successive input data bits, the apparatus comprising:
a selector to select one of two complementary sub-ranges of a set of code values according to whether a current input data bit is a one or a zero, the proportions of the two sub-ranges relative to the set of code values being defined by a context variable associated with that input data bit;
an assigning unit to assign the current input data bit to a code value within the selected sub-range;
a code value modifying unit to modify the set of code values in dependence upon the assigned code value and the size of the selected sub-range;
a detector to detect whether the set of code values is less than a predetermined minimum size and if so:
to successively increase the size of the set of code values until it has at least the predetermined minimum size; and
to output an encoded data bit in response to each such size-increasing operation;
a context variable modifying unit to modify the context variable, for use in respect of a next input data bit, so as to increase the proportion of the set of code values in the sub-range which was selected for the current data bit by the greater of:
a predetermined minimum increment; and
an increment derived in dependence upon a predetermined fractional amount of the current proportion.
13. A data decoding apparatus for decoding successive encoded data bits, the apparatus comprising:
a detector to detect which one of two complementary sub-ranges a current assigned data value lies in, the proportions of the two sub-ranges relative to the set of code values being defined by a context variable associated with that input data bit;
a code value modifying unit to modify the set of code values in dependence upon the assigned code value and the size of the selected sub-range;
a context variable modifying unit to modify the context variable, for use in respect of a next input data bit, so as to increase the proportion of the set of code values in the sub-range which was selected for the current data bit by the greater of:
a predetermined minimum increment; and
an increment derived in dependence upon a predetermined fractional amount of the current proportion.
14. Video coding apparatus comprising:
a frequency domain transformer for generating frequency domain coefficients dependent upon respective portions of an input video signal and ordering the coefficients for encoding according to an encoding order; and
an entropy encoder for encoding the ordered coefficients, the entropy encoder comprising apparatus according to claim 12.
15. Video decoding apparatus having an entropy decoder comprising apparatus according to claim 13.
16. Computer software which, when executed by a computer, causes the computer to carry out the method of any one of claims 1 to 7, 10 and 11.
17. A non-transitory machine-readable storage medium on which computer software according to claim 16 is stored.
18. Video data capture, transmission and/or storage apparatus comprising apparatus according to any one of claims 12 to 15.
PCT/GB2012/052759 2011-11-07 2012-11-06 Context adaptive data encoding WO2013068732A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1119176.4 2011-11-07
GB1119176.4A GB2496193A (en) 2011-11-07 2011-11-07 Context adaptive data encoding and decoding

Publications (1)

Publication Number Publication Date
WO2013068732A1 (en) 2013-05-16

Family

ID=45421369

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2012/052759 WO2013068732A1 (en) 2011-11-07 2012-11-06 Context adaptive data encoding

Country Status (3)

Country Link
GB (1) GB2496193A (en)
TW (1) TW201334427A (en)
WO (1) WO2013068732A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10939103B2 (en) 2013-10-01 2021-03-02 Sony Corporation Data encoding and decoding

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005184232A (en) * 2003-12-17 2005-07-07 Sony Corp Coder, program, and data processing method
KR100644713B1 (en) * 2005-10-31 2006-11-10 삼성전자주식회사 Method of decoding syntax element in cabac decoder and decoding apparatus therefor
US7557740B1 (en) * 2008-04-18 2009-07-07 Realtek Semiconductor Corp. Context-based adaptive binary arithmetic coding (CABAC) decoding apparatus and decoding method thereof
US7982641B1 (en) * 2008-11-06 2011-07-19 Marvell International Ltd. Context-based adaptive binary arithmetic coding engine
JP4962476B2 (en) * 2008-11-28 2012-06-27 ソニー株式会社 Arithmetic decoding device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"WD4: Working Draft 4 of High-Efficiency Video Coding", JCTVC-F803_D5, DRAFT ISO/IEC 23008-HEVC, vol. 201X, no. E, 28 October 2011 (2011-10-28)
ALSHIN A ET AL: "Multi-parameter probability up-date for CABAC", 6. JCT-VC MEETING; 97. MPEG MEETING; 14-7-2011 - 22-7-2011; TORINO; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), no. JCTVC-F254, 12 July 2011 (2011-07-12), XP030009277 *
MARPE D ET AL: "Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 13, no. 7, 1 July 2003 (2003-07-01), pages 620 - 636, XP011099255, ISSN: 1051-8215, DOI: 10.1109/TCSVT.2003.815173 *
SHARMAN K ET AL: "CABAC with Arithmetic Context Variables", 7. JCT-VC MEETING; 98. MPEG MEETING; 21-11-2011 - 30-11-2011; GENEVA; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), no. JCTVC-G501, 8 November 2011 (2011-11-08), XP030110485 *


Also Published As

Publication number Publication date
TW201334427A (en) 2013-08-16
GB201119176D0 (en) 2011-12-21
GB2496193A (en) 2013-05-08

Similar Documents

Publication Publication Date Title
US11671599B2 (en) Data encoding and decoding
US11039142B2 (en) Encoding and decoding of significant coefficients in dependence upon a parameter of the significant coefficients
US10893273B2 (en) Data encoding and decoding
JP6400092B2 (en) Data encoding and decoding
US9544599B2 (en) Context adaptive data encoding
WO2013068733A1 (en) Context adaptive data encoding
WO2013068732A1 (en) Context adaptive data encoding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12784661

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12784661

Country of ref document: EP

Kind code of ref document: A1