WO2013001278A1 - Video encoding and decoding using transforms - Google Patents

Video encoding and decoding using transforms

Info

Publication number
WO2013001278A1
WO2013001278A1 (application PCT/GB2012/051412)
Authority
WO
WIPO (PCT)
Prior art keywords
transform
coefficients
skip mode
block
sub
Prior art date
Application number
PCT/GB2012/051412
Other languages
French (fr)
Inventor
Marta Mrak
Andrea GABRIELLINI
Nikola Sprljan
David Flynn
Original Assignee
British Broadcasting Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by British Broadcasting Corporation
Publication of WO2013001278A1


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - ... using adaptive coding
    • H04N19/169 - ... using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - ... the unit being an image region, e.g. an object
    • H04N19/176 - ... the region being a block, e.g. a macroblock
    • H04N19/102 - ... characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122 - Selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/134 - ... characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 - Incoming video signal characteristics or properties
    • H04N19/46 - Embedding additional information in the video signal during the compression process
    • H04N19/60 - ... using transform coding
    • H04N19/61 - ... using transform coding in combination with predictive coding
    • H04N19/90 - ... using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding


Abstract

Video encoding or decoding utilising a spatial transform operating on rows and columns of a block, with a set of transform skip modes including: transform on rows and columns; transform on rows only;transform on columns only; no transform. An indication of the selected mode is provided to the decoder.

Description

VIDEO ENCODING AND DECODING USING TRANSFORMS
FIELD OF THE INVENTION
This invention is related to video compression and decompression systems, and in particular to a framework to adaptively model signal representation between prediction and entropy coding, by the adaptive use of transform functions and related tools, including scaling, quantisation, scanning, and signalling.
BACKGROUND OF THE INVENTION
Transmission and storage of video sequences are employed in several applications, e.g. TV broadcasts, internet video streaming services and video conferencing.
Video sequences in a raw format require a very large amount of data to be represented, as each second of a sequence may consist of tens of individual frames and each frame is represented by typically at least 8 bits per pixel, with each frame requiring several hundreds or thousands of pixels. In order to minimise the transmission and storage costs, video compression is used on the raw video data. The aim is to represent the original information with as little capacity as possible, i.e. with as few bits as possible. The reduction of the capacity needed to represent a video sequence will affect the video quality of the compressed sequence, i.e. its similarity to the original uncompressed video sequence.
State-of-the-art video encoders, such as AVC/H.264, utilise four main processes to achieve the maximum level of video compression while achieving a desired level of video quality for the compressed video sequence: prediction,
transformation, quantisation and entropy coding. The prediction process exploits the temporal and spatial redundancy found in video sequences to greatly reduce the capacity required to represent the data. The mechanism used to predict data is known to both encoder and decoder, thus only an error signal, or residual, must be sent to the decoder to reconstruct the original signal. This process is typically performed on blocks of data (e.g. 8x8 pixels) rather than entire frames. The prediction is typically performed against already reconstructed frames or blocks of reconstructed pixels belonging to the same frame. The transformation process aims to exploit the correlation present in the residual signals. It does so by concentrating the energy of the signal into few coefficients. Thus the transform coefficients typically require fewer bits to be represented than the pixels of the residual. H.264 uses 4x4 and 8x8 integer type transforms based on the Discrete Cosine Transform (DCT).
The capacity required to represent the data at the output of the transformation process may still be too high for many applications. Moreover, it is not possible to modify the transformation process in order to achieve the desired level of capacity for the compressed signal. The quantisation process takes care of that, by allowing a further reduction of the capacity needed to represent the signal. It should be noted that this process is destructive, i.e. the reconstructed sequence will look different to the original.
The entropy coding process takes all the non-zero quantised transform
coefficients and processes them to be efficiently represented into a stream of bits. This requires reading, or scanning, the transform coefficients in a certain order to minimise the capacity required to represent the compressed video sequence.
The above description applies to a video encoder; a video decoder will perform all of the above processes in roughly reverse order. In particular, the transformation process on the decoder side will require the use of the inverse of the transform being used on the encoder. Similarly, entropy coding becomes entropy decoding and the quantisation process becomes inverse scaling. The prediction process is typically performed in the same exact fashion on both encoder and decoder.
The present invention relates to the transformation part of the coding, thus a more thorough review of the transform process is presented here.
The statistical properties of the residual affect the ability of the transform (i.e.
DCT) to compress the energy of the input signal in a small number of coefficients.
The residual shows very different statistical properties depending on the quality of the prediction and whether the prediction exploits spatial or temporal redundancy.
Other factors affecting the quality of the prediction are the size of the blocks being used and the spatial / temporal characteristics of the sequence being processed.
It is well known that the DCT approaches maximum energy compaction performance for highly correlated Markov-I signals. DCT's energy compaction performance starts dropping as the signal correlation becomes weaker. For instance, it is possible to show how the Discrete Sine Transform (DST) can outperform the DCT for input signals with lower adjacent correlation
characteristics.
The DCT and DST in image and video coding are normally used on blocks, i.e. 2D signals; this means that a one dimensional transform is first performed in one direction (e.g. horizontal) followed by a one dimensional transform performed in the other direction. As already mentioned, the energy compaction ability of a transform is dependent on the statistics of the input signal. It is possible, and indeed it is also common under some circumstances, for the two-dimensional signal input to the transform to display different statistics along the vertical and horizontal axes. In this case it would be desirable to choose the best performing transform on each axis. A similar approach has already been attempted within the new ISO and ITU video coding standard under development, High Efficiency Video Coding (HEVC). In particular, a combination of two one dimensional separable transforms, such as a DCT-like [2] and a DST [3], has been used in the HEVC standard under development.
While previous coding standards based on the DCT use a two-dimensional transform (2D DCT), newer solutions apply a combination of DCT and DST to intra predicted blocks, i.e. on blocks that are spatially predicted. It has been shown that the DST is a better choice than the DCT for transformation of rows when the directional prediction is from a direction that is closer to horizontal than vertical, and, similarly, is a better choice for transformation of columns when the directional prediction is closer to vertical. In the remaining direction (e.g. on rows, when DST is applied on columns), the DCT is used.
For implementation purposes, in video coding it is common to use integer approximations of the DCT and DST, which will in the rest of this text simply be referred to as DCT and DST. One of the solutions for an integer DCT-like transform uses a 16-bit intermediate data representation and is known as the partial butterfly. Its main properties are the same (anti)symmetry properties as those of the DCT, almost orthogonal basis vectors, 16-bit data representation before and after each transform stage, 16-bit multipliers for all internal multiplications and no need for correction of the different norms of the basis vectors during (de)quantisation.
SUMMARY OF THE INVENTION
The present invention consists in, in one aspect, a method of video encoding utilising a spatial transform operating on rows and columns of a block, comprising the steps of establishing a set of transform skip modes including:
transform on rows and columns;
transform on rows only;
transform on columns only;
no transform;
selecting one of the said modes; and providing an indication of the selected mode for a decoder.
There are described in the following a transform mode and a system to apply a combination of transforms minimising the capacity required to represent a signal for a given output signal quality target. Moreover, a system to signal the selected combination of transform modes is presented.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
The present invention will now be described by way of example with reference to the accompanying drawings, in which:
Figure 1 is a block diagram illustrating a feature on an encoder according to an embodiment of the invention;
Figure 2 is a block diagram illustrating the feature on a decoder according to the embodiment;
Figure 3 is a diagram illustrating an alternative to the known zig-zag scanning approach;
Figure 4 is a diagram illustrating a further alternative scanning approach;
Figure 5 is a block diagram illustrating a feature on an encoder according to a further embodiment of the invention;
Figure 6 is a block diagram illustrating the feature on a decoder according to the embodiment;
Figure 7 is a block diagram illustrating a feature on a decoder according to a further embodiment of the invention.
This invention presents a mode to perform the transformation process - Transform Skip Mode (TSM). As described above, the most common transform used in video coding is the DCT. Its energy compacting performance depends on the correlation of the residual. It has also been described how the residual can be highly decorrelated, or correlated in one direction only, making the 2D DCT less efficient. It is proposed to skip the transformation process when the encoder makes such a decision in a rate-distortion sense. The selected transform mode must be signalled to the decoder, which then performs the combination of transform / transform skip defined in the signalling.
Four transform modes are defined as shown in Table 1.
Mode   Transform on rows   Transform on columns
TS0    1D DCT              1D DCT
TS1    1D DCT              skipped
TS2    skipped             1D DCT
TS3    skipped             skipped
Table 1 - Transform Skip Mode options
TS0 mode corresponds to the 2D transform, i.e. 2D DCT. TS1 mode defines application of a one dimensional horizontal DCT followed by a transform skip in the orthogonal direction, i.e. the transform of columns is skipped. TS2 defines skipping of the horizontal transform, while only columns are transformed. Finally, TS3 mode completely skips transforms in both axes, i.e. no transform is applied to the input signal.
Figures 1 and 2 show core transform skip mode block diagrams, for encoder and decoder, respectively. Each transform skip mode is selected with a corresponding (Tf0, Tf1) pair of flags, such that TS0: (1, 1), TS1: (1, 0), TS2: (0, 1) and TS3: (0, 0).
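By way of illustration only, the following sketch (not the integer transform actually used in a codec) shows how the four modes determine where a 1D transform is applied. It takes Tf0 as the row-transform flag and Tf1 as the column-transform flag, and uses a floating-point DCT-II; both the flag interpretation and the function names are assumptions of this example.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis; row k is the k-th basis vector."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2.0)
    return m

def tsm_forward(residual, tf0, tf1):
    """Apply the selected transform skip mode; tf0/tf1: 1 = transform, 0 = skip."""
    d = dct_matrix(residual.shape[0])
    out = residual.astype(float)
    if tf0:                  # 1D transform along each row (horizontal direction)
        out = out @ d.T
    if tf1:                  # 1D transform along each column (vertical direction)
        out = d @ out
    return out

block = np.arange(16, dtype=float).reshape(4, 4) - 8   # toy 4x4 residual
ts1 = tsm_forward(block, tf0=1, tf1=0)                 # TS1: rows transformed, columns skipped
```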
As for any other additional bits from a compressed bit-stream that enable adaptive option, signalling of the transform skip mode can be costly. Therefore several strategies are devised to maximise the coding efficiency.
The four TSM options can be signalled using carefully designed code words. Those code words do not need to be transmitted for each block; other methods can be used to save the necessary bit-rate. Some of the possibilities for reducing the signalling cost are listed in the following, each option influencing transform-related parts of the encoder and decoder:
1. The same transform mode is used on all components (luminance - Y and chrominance - U and V) of a YUV block; therefore, for Y, U and V collocated blocks only one TSM choice is transmitted.
2. TSM not signalled when all quantised blocks (Y, U and V) have only
coefficients with zero values.
3. TSM not signalled for blocks when Y block has only zero-value
coefficients, and then 2D DCT is used on U and V components.
4. TSM signalled only for blocks with certain other modes (e.g. bidirectional predicted); otherwise 2D-DCT is applied.
5. Application of TSM signalled on a set of blocks (if "on" then TS modes signalled for each block from the set).
6. TSM signalled on a set of blocks (e.g. all sub-blocks share the same
TSM).
7. TSM signalled if certain other block characteristics are present; e.g. TSM not signalled when Y block has only one non-zero value, and that value is in top-left corner of the block (DC component); in that case 2D-DCT is used for all components.
The four TSM modes (2D transform, two 1D block transforms and skipped transform on a block) can be defined with various code words, e.g. with simple 2-bit words, or with more bits (i.e. with unary codes):
[Table of example TSM code words not reproduced in this text version.]
If arithmetic coding is used, each bin of the code word can be encoded with different probability models (i.e. initial context states for each slice), depending on the current block size and on QP value. On the other hand, if variable length coding is used, TSM code words can be encoded independently of or merged with other syntax elements, to reduce the signalling overhead.
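By way of a rough illustration of the signalling-cost trade-off, the snippet below compares a fixed 2-bit code with a unary code; the particular mode-to-code-word assignments and the mode histogram are assumptions of this example, not values from the description.

```python
# Hypothetical binarizations: fixed 2-bit words vs. truncated unary code words.
FIXED_2BIT = {"TS0": "00", "TS1": "01", "TS2": "10", "TS3": "11"}
UNARY = {"TS0": "0", "TS1": "10", "TS2": "110", "TS3": "111"}

def signalling_bits(mode_counts, code_table):
    """Total bits needed to signal the given histogram of selected modes."""
    return sum(count * len(code_table[mode]) for mode, count in mode_counts.items())

counts = {"TS0": 70, "TS1": 10, "TS2": 10, "TS3": 10}   # assumed mode usage
print(signalling_bits(counts, FIXED_2BIT))   # 200 bits
print(signalling_bits(counts, UNARY))        # 150 bits: unary wins when TS0 dominates
```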
In some approaches, a block is not always transformed at once; rather, options for its partitioning into smaller sub-units are applied, and transforms are applied on each sub-unit. Representative of such a transform structure is the Residual QuadTree (RQT) method. While application of TSM on blocks that are not further divided into smaller units has been assumed so far, TSM can also be applied on such multi-split transform structures. Several options are identified:
1. TSM is decided on a block level, and the same transform choice is applied on each sub-unit.
2. TSM is enabled only at the root level of transformation structure, i.e. when a block is not further partitioned into smaller units when a multi-split structure is enabled; if a block is split into smaller units, each unit is transformed using 2D transform.
3. TSM is decided and signalled for each sub-unit, independently of its
depth.
4. TSM is decided and signalled for sub-units, up to a specific depth (size) of units; for lower sub-units, when TSM is not signalled, the 2D transform is used.
Coefficients within a block can have different characteristics when the transform is not performed in one or both directions. Therefore different coding strategies can be applied, depending on the transform skip mode, to better compress the given coefficients.
When a 2D transform is applied on a block, the resulting coefficients are often grouped towards the top-left corner of the block, that is to say they are low-frequency components. Conventional scanning, e.g. zig-zag scanning, is therefore a good choice for coding of such signals.
If only a 1D transform is applied (TS1 or TS2), adaptive scanning can be used. For example, row-by-row or column-by-column scanning can be used for the TS2 and TS1 cases respectively, since one can expect that the applied transform concentrates the coefficients towards lower frequencies. For the TS3 case, where a transform is not applied in either direction, a conventional scan (as used for a 2D transformed block) may be used.
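A minimal sketch of such a mode-dependent scan choice follows; the function name and the particular zig-zag variant are assumptions of this example.

```python
def adaptive_scan(n, mode):
    """Return the list of (row, col) positions in scan order for an n x n block."""
    if mode == "TS2":                                           # only columns transformed: row-by-row
        return [(r, c) for r in range(n) for c in range(n)]
    if mode == "TS1":                                           # only rows transformed: column-by-column
        return [(r, c) for c in range(n) for r in range(n)]
    # TS0 / TS3: a conventional zig-zag over anti-diagonals
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda p: (p[0] + p[1], p[1] if (p[0] + p[1]) % 2 else p[0]))
```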
Alternatively, a different scanning pattern may be employed which takes into account the probability (implicit in the decision to conduct no transform) that non-zero coefficients are not uniformly distributed. For example, coefficients may be grouped in "islands" surrounded by "seas" of zero coefficients.
Thus, in one new arrangement, positions of the first and the last significant coefficients within a block can be transmitted in the bit-stream, and a
conventional scanning of coefficients within a block can then be performed. This is shown in Figure 3, where white squares represent coefficients that are not encoded and have zero value, and gray squares represent coefficients that will be encoded, i.e. that include the significant (non-zero) coefficients; the first coded coefficient is labelled "F" and the last coded coefficient is labelled "L". Scanning is performed only on rows and columns that belong to the area defined by the first and the last coefficient. In this scanning method, the x and y coordinates of the first coefficient must be the same or smaller than the x and y coordinates of the last significant coefficient.
This arrangement should lead to highly efficient coding in the case where nonzero coefficients are clustered, but requires the additional complexity in the encoder of determining the positions of the first and the last significant
coefficients within a block, together with the need to signal those positions to the decoder.
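By way of illustration of the Figure 3 arrangement, the sketch below visits only the rectangle defined by the signalled first (F) and last (L) coded coefficients, using a plain zig-zag inside that rectangle; the function name and the choice of inner scan are assumptions.

```python
def rect_zigzag_positions(first, last):
    """Yield (row, col) positions of the rectangle spanned by F and L, in zig-zag order."""
    (fr, fc), (lr, lc) = first, last
    assert fr <= lr and fc <= lc          # F must not lie below or to the right of L
    height, width = lr - fr + 1, lc - fc + 1
    for s in range(height + width - 1):   # walk the anti-diagonals of the rectangle
        diag = [(r, s - r) for r in range(height) if 0 <= s - r < width]
        if s % 2:
            diag.reverse()                # alternate direction to obtain a zig-zag
        for r, c in diag:
            yield fr + r, fc + c

# e.g. F at (1, 1) and L at (2, 3): only the 2 x 3 rectangle between them is scanned
positions = list(rect_zigzag_positions((1, 1), (2, 3)))
```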
In an alternative, a double zig-zag scan is used, as depicted in Figure 4, where a block of transform coefficients is represented with sub-blocks of coefficients. Each sub-block is visited in a sub-block level zig-zag scan, and inside each sub-block a zig-zag scan (or any other scan) is used. This enables better grouping of non-zero coefficients, which tend to be spatially close.
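A compact sketch of this double scan is given below; the 4x4 sub-block size and the helper names are assumptions for illustration.

```python
def zigzag(n):
    """One zig-zag variant over an n x n square, as a list of (row, col) positions."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda p: (p[0] + p[1], p[1] if (p[0] + p[1]) % 2 else p[0]))

def double_zigzag(block_size, sub_size=4):
    for sr, sc in zigzag(block_size // sub_size):      # sub-block level zig-zag
        for r, c in zigzag(sub_size):                  # zig-zag inside each sub-block
            yield sr * sub_size + r, sc * sub_size + c

scan = list(double_zigzag(8, 4))   # 8x8 block scanned as four 4x4 sub-blocks
```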
It will be desirable, where a decision is taken to skip either or both 1 D transforms, to minimise or remove the need to change other elements of the process to accommodate the skipped transform or transforms.
Here, two implementation strategies for the adaptive transform stage are identified:
1) skipping the selected transform of rows / columns, and modifying the quantisation stage;
2) replacing the selected transform of rows / columns by a suitable scaling step and adapting the quantisation step if required.
While the first strategy is suitably presented with Figures 1 and 2, the second strategy, which employs scaling, is depicted in Figures 5 and 6. One of the main reasons why scaling is performed is to maintain the levels of the signal, with the highest supported precision, between transform blocks. This is indicated using a dashed line in Figures 5 and 6.
Scaling is performed by scaling each input pixel value by a factor that is derived from norm-2 of corresponding transform vectors (which would be used to obtain a transform coefficient value, at the same position in a row/column, if the transform was selected). Some transforms have close to orthonormal properties of each vector and this property can further simplify the scaling design since a single value can be used to suitably scale whole row/column on which the transform is skipped.
In the following, scaling strategies are discussed in the context of integer DCT transform with 16 bit intermediate data representation. It will be recognised, however, that this is only an example.
Transforms used in HEVC have norms (TN_N), where N is the size of the transform, close to the following numbers:
- 4-point transform: TN_4 = 128 = 2^7; TNS_4 = 7;
- 8-point transform: TN_8 = 181 ≈ 2^7.5; TNS_8 = 7.5;
- 16-point transform: TN_16 = 256 = 2^8; TNS_16 = 8;
- 32-point transform: TN_32 = 362 ≈ 2^8.5; TNS_32 = 8.5;
where TNS_N is the corresponding Transform Norm Shift parameter (the power of 2 represented by left bit-shifting). Note that in HEVC each transform vector may have a slightly different norm, but these numbers are good approximations for practical implementations. This fact is also reflected in the design of quantisation and in the transform level adjustment to preserve the 16-bit intermediate data representation. For example, in the HEVC decoder design, a 16-bit value enters the inverse transform. In order to reach 16-bit precision between the column (1st stage inverse) and row (2nd stage inverse) transforms, and 9+DB precision after the row transform, the following signal level bit-shifts occur (considering an N x N block size):
SHIFT = (TNS_N - SHIFT_INV_1ST) + (TNS_N - (SHIFT_INV_2ND - DB)),
where, by the standard, SHIFT_INV_1ST = 7 and SHIFT_INV_2ND = 12, and DB is the bit-depth increment for processing (e.g. 0 or 2). The internal processing bit-depth is B = 8 + DB. Therefore, SHIFT equals:
SHIFT = 2 · TNS_N - 19 + DB = 2 · TNS_N - 27 + B.
This corresponds to the parameter transform shift used in the HEVC quantisation. For the example where a 4 x 4 block is considered (TNS_4 = 7), this leads to
-SHIFT_4 = 13 - B,
i.e. a right shift by 13 - B.
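The derivation above can be checked numerically; the following sketch assumes the stated constants SHIFT_INV_1ST = 7 and SHIFT_INV_2ND = 12 and the approximate norms listed earlier.

```python
import math

SHIFT_INV_1ST, SHIFT_INV_2ND = 7, 12

def total_inverse_shift(n, bit_depth):
    db = bit_depth - 8              # DB, bit-depth increment for processing
    tns = 6 + math.log2(n) / 2      # reproduces TNS_N = 7, 7.5, 8, 8.5 for N = 4, 8, 16, 32
    return (tns - SHIFT_INV_1ST) + (tns - (SHIFT_INV_2ND - db))

for n in (4, 8, 16, 32):
    print(n, total_inverse_shift(n, bit_depth=8))
# N = 4, B = 8 gives -5, i.e. SHIFT_4 = B - 13 and an overall right shift by 13 - B
```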
While this example may be used to address signal level adjustment for TS3, some additional considerations have to be taken into account when the transform is applied in one direction only. That is because TNS_N is not always an integer number, thus bit-shifting is not the only option for level adjustment. Other options for addressing unified designs for such combinations are addressed in the following text.
Where a transform is replaced with scaling, the adaptive transform stage is designed in a way that it can be interleaved within the integer DCT transform with 16-bit intermediate data representation, i.e. with the goal to replace some of its parts and to be compatible with the rest of the codec that supports original 2D transform. For example, not applying transform can be used on rows in a way which is still compatible with the part of 2D transform that is applied on columns. This means that quantisation applied for 2D transform can also be used with adaptive transform choice.
The forward transform skip is defined for rows and columns separately.
On samples x of rows the transform skip is applied as:
y = (x · scale + offset) right shifted by S bits (a)
where:
S = M - 1 + DB
offset = 1 left shifted by (S - 1) bits
DB = B - 8 is the bit-depth increment for processing
M = log2(N), where N is the row/column size in the number of pixels, and scale is an unsigned integer multiplier.
On columns, the transform skip is applied as in (a) where x are samples of columns, but with:
S = M + 6
offset = 1 left shifted by (S - 1) bits
In this way a bit-width of 16 after each transform stage is ensured, as in the 2D transform.
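A possible realisation of the forward skip (a) is sketched below; the scale values (128, 181, 256, 362 for N = 4, 8, 16, 32), chosen near the transform norms discussed next, and the function name are assumptions of this example rather than normative values.

```python
def forward_transform_skip(samples, n, bit_depth, stage):
    """Apply equation (a) to one row ('rows' stage) or one column ('cols' stage)."""
    db = bit_depth - 8                              # DB = B - 8
    m = n.bit_length() - 1                          # M = log2(N)
    scale = {4: 128, 8: 181, 16: 256, 32: 362}[n]   # integer values near TN_N (assumed)
    s = (m - 1 + db) if stage == "rows" else (m + 6)
    offset = 1 << (s - 1)
    return [(x * scale + offset) >> s for x in samples]

row = forward_transform_skip([5, -3, 0, 7], n=4, bit_depth=8, stage="rows")
```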
Again, scale factors are designed in a way to be near the norm-2 of the related transform vectors (scale_N^2 = TN_N^2 = N · 64^2) and to be an integer number. On samples x of columns the inverse transform skip is applied as
y = (x · scale + offset) right shifted by S bits
where:
S = 7
offset = 1 left shifted by (S - 1) bits
and scale is the same as in the forward skip.
On rows the same transform skip operation is applied, but with:
S = 12 - DB, where DB is the same as in the forward transform skip.
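The inverse skip can be sketched in the same style, with S = 7 for the column stage and S = 12 - DB for the row stage; the same caveats as for the forward sketch apply (the scale values are assumed approximations).

```python
def inverse_transform_skip(samples, n, bit_depth, stage):
    """Inverse skip: S = 7 for the column stage, S = 12 - DB for the row stage."""
    db = bit_depth - 8
    scale = {4: 128, 8: 181, 16: 256, 32: 362}[n]   # same scale as the forward skip (assumed)
    s = 7 if stage == "cols" else 12 - db
    offset = 1 << (s - 1)
    return [(x * scale + offset) >> s for x in samples]
```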
In order to avoid unnecessary processing of pixels where one or both 1D transforms are skipped, scaling can be moved to quantisation. Moreover (for example), if only the vertical transform is kept, it can be adapted to ensure a maximal 16-bit representation of pixels. This enables full use to be made of the available bit width. Therefore, scaling in quantisation has to be adapted not only because of the scaling related to the skipped transform but also because of the new scaling within the transform.
TSM = TS0 (2D transform)
Regular 2D transform and corresponding quantisation is used.
TSM = TS1 (1D transform on rows) and TS2 (1D transform on columns)
In both cases the forward transform corresponds to the original transform of rows
y = (x + offset) right shifted by S bits, (b)
where:
x is the original value of residual block,
S = M - 1 + DB,
offset = 1 left shifted by (S - 1) bits
and M and DB are the same as in (a).
This ensures 16-bit intermediate data precision.
Quantisation is adapted and takes into account the level at which the signal now lies.
TSM = TS3 (no transform)
Residual pixels are directly quantised using the flat matrix so that the level of the signal corresponds to the levels of quantised coefficients that are 2D transformed and quantised.
Another example of how the level of the signal can be adjusted when a transform is skipped is presented in the following, with reference to Figure 7. In this example the aim is to reduce the number of operations required to achieve the desired performance. In that context, where a transform or its parts can be skipped or replaced, this technique uses a combination of one or more basic operations:
1. Changes to bit-shifting within transform stages;
2. Adjustment of quantisation that corresponds to scaling the signal by a factor smaller than 2;
3. Replacement of the transform or its parts by a scalar outside the quantisation.
Each scaling of the signal can be represented by scaling by a factor of 2^N (where N is a positive integer) and by scaling by a factor M that is smaller than 2. Note that in this case N is not the transform size as in the previous example. In this invention, Operation 1 enables signal scaling by a factor of 2^N (bit-shifting) and Operation 2 enables scaling by M. The choice of M is typically limited and depends on the quantisation design. A typical component of a 1D transform in video coding is bit-shifting. Therefore Operation 1 applied here readily enables adjustment of a signal level by a factor of 2^N. In the case where both transforms are skipped, adjustment of the level of the signal can be performed in the "Scaling" block from Figure 7, which corresponds to Operation 3. In any case, adjustment of the signal by a factor smaller than 2, a quantisation parameter offset, or a quantisation scaling factor can be suitably chosen to perform the required signal level adjustment. For example, in High Efficiency Video Coding (HEVC), adding an offset of 3 to a quantisation parameter is equivalent to adjusting the level of the signal by sqrt(2) (root 2).
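As a worked illustration of Operations 1 and 2, a required level adjustment can be split into a bit-shift part and a residual factor M < 2 that is folded into quantisation. The helper below and the example value 181 are assumptions; the QP relation used is the HEVC-style one mentioned above (an offset of 6 per doubling of the quantisation step, so +3 corresponds to roughly sqrt(2)).

```python
import math

def decompose_scale(target):
    """Split a required level adjustment into 2**n (bit-shift) times m, with 1 <= m < 2."""
    n = math.floor(math.log2(target))
    m = target / (2.0 ** n)
    return n, m

n, m = decompose_scale(181.0)            # e.g. a level adjustment near an 8-point transform norm
qp_offset = round(6 * math.log2(m))      # Operation 2: fold the residual factor into QP
print(n, round(m, 3), qp_offset)         # 7, 1.414, 3  ->  shift by 7 bits, QP offset of +3
```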
It will be understood that the invention has been described by way of example only and that a wide variety of modifications are possible without departing from the scope of the invention as set forth in the appended claims. Features which are here described in certain combinations may find useful application in other combinations beyond those specifically mentioned and may in certain cases be used alone. For example, the scanning approaches in video coding or decoding where:
positions of the first and the last coefficients to be encoded /decoded within a block are signalled to the decoder and a scanning of coefficients is performed between said first and the last coefficients; or
a double scan is performed, where a block of transform coefficients is represented with sub-blocks of coefficients; each sub-block is visited in sub-block level zig-zag scan, and inside each sub-block additional scan pattern is used;
may be useful beyond the case of transform skip mode.

Claims

1. A method of video encoding utilising a spatial transform operating on rows and columns of a block, comprising the steps of establishing a set of transform skip modes including:
transform on rows and columns;
transform on rows only;
transform on columns only;
no transform;
selecting one of the said modes; and providing an indication of the selected mode for a decoder.
2. A method of decoding video which has been encoded utilising a spatial transform operating on rows and columns of a block with transform skip modes including:
transform on rows and columns;
transform on rows only;
transform on columns only;
no transform;
comprising the steps of providing an indication of the transform skip mode and applying inverse transforms in accordance with the mode.
3. A method according to Claim 1 or Claim 2, wherein mode selection is signalled to the decoder with each mode assigned a codeword.
4. A method according to any one of the preceding claims, where the order in which coefficients within a block are scanned in the entropy coding stage is adapted in accordance with the transform skip mode.
5. A method according to Claim 4, wherein row-by-row scanning is employed where the row transform is skipped and transform of columns is kept, and column-by-column scanning is employed where the column transform is skipped and transform on rows is kept.
6. A method according to any one of the preceding claims, wherein in the entropy coding stage positions of the first and the last coefficients to be encoded /decoded within a block are signalled to the decoder and a scanning of coefficients is performed between said first and the last coefficients.
7. A method according to any one of the preceding claims, wherein, a double scan is performed, where a block of transform coefficients is represented with sub-blocks of coefficients; each sub-block is visited in sub-block level scan, and inside each sub-block a different scan is used.
8. A method according to any one of the preceding claims, wherein the same transform skip mode is used on all components (luminance - Y and chrominance - U and V) of a YUV block.
9. A method according to any one of the preceding claims, wherein the transform skip mode is not signalled for blocks having only zero-value
coefficients.
10. A method according to Claim 9, wherein the transform skip mode is not signalled when the luminance component has only zero values; in this case 2D transform is used on chroma components.
11. A method according to Claim 9, wherein the transform skip mode is not signalled when the only non-zero-value coefficient of the luminance component is in the top-left corner of the block (DC component); in this case 2D transform is used on chroma components.
12. A method according to any one of the preceding claims, wherein the transform skip mode is signalled only for blocks with predefined other modes (e.g. predicted from other frames only).
13. A method according to any one of the preceding claims, wherein the transform skip mode is signalled on a set of blocks.
14. A method according to any one of the preceding claims, where the transform provides options for its partitioning into smaller sub-units and
transforms are applied on each sub-unit (for example the Residual QuadTree (RQT) method) and wherein:
the transform skip mode is enabled on a block level, and the same transform mode is applied on each sub-unit; or
transform skip mode is enabled only on the root level of transformation structure; for lower sub-units, when the transform skip mode is disabled, 2D transform is used; or the transform skip mode is enabled for each sub-unit, independently of its depth; or
the transform skip mode is enabled for sub-units up to a specific depth of units; for lower sub-units, when the transform skip mode is disabled, 2D transform is used; or
transform skip mode is enabled for sub-units under a specific depth of units; for higher sub-units, when the transform skip mode is disabled, 2D transform is used.
15. A method according to any one of the preceding claims, wherein a quantisation stage is adapted according to the selected transform skip mode.
16. A method according to any one of the preceding claims, wherein a quantisation matrix that has the same values in each column is applied when the vertical transform is skipped, and a quantisation matrix that has the same values in each row is applied when the horizontal transform is skipped.
17. A method according to any one of the preceding claims, comprising the step of scaling of coefficients that are not transformed, where the scaling factors are dependent upon the norms of corresponding transform vectors to bring the untransformed coefficients to the same level as transformed coefficients.
18. A method according to Claim 17, wherein the same scaling factors are used for all coefficients in scaled row or column.
19. A method according to any one of the preceding claims, wherein the row transform differs in dependence upon whether or not the column transform is skipped and wherein the column transform differs in dependence upon whether or not the row transform is skipped.
20. A method according to Claim 19 where shifting in the remaining 1D transform is adjusted so that the level of the signal is adjusted by D = 2^N, where N is an integer.
21. A method according to Claim 20 where any remaining scaling of M < 2 is approximated in quantisation.
22. A computer program product containing instructions causing programmable means to implement a method according to any one of the preceding claims.
23. A video encoder adapted and configured to operate in accordance with any one of Claim 1 and any claim when dependent on Claim 1.
24. A video decoder adapted and configured to operate in accordance with any one of Claim 2 and any claim when dependent on Claim 2.
PCT/GB2012/051412 2011-06-27 2012-06-19 Video encoding and decoding using transforms WO2013001278A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1110873.5A GB2492333B (en) 2011-06-27 2011-06-27 Video encoding and decoding using transforms
GB1110873.5 2011-06-27

Publications (1)

Publication Number Publication Date
WO2013001278A1 true WO2013001278A1 (en) 2013-01-03

Family

ID=44485219

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/GB2012/051413 WO2013001279A2 (en) 2011-06-27 2012-06-19 Video encoding and decoding using transforms
PCT/GB2012/051412 WO2013001278A1 (en) 2011-06-27 2012-06-19 Video encoding and decoding using transforms

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/GB2012/051413 WO2013001279A2 (en) 2011-06-27 2012-06-19 Video encoding and decoding using transforms

Country Status (11)

Country Link
US (1) US8923406B2 (en)
EP (2) EP2652954B1 (en)
JP (2) JP6063935B2 (en)
KR (1) KR101622450B1 (en)
CN (2) CN103404141B (en)
ES (1) ES2574278T3 (en)
GB (1) GB2492333B (en)
PL (1) PL2652954T3 (en)
PT (1) PT2652954E (en)
TW (1) TWI516095B (en)
WO (2) WO2013001279A2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578190A (en) * 2016-02-03 2016-05-11 珠海全志科技股份有限公司 Lossless compression method and system for video hard decoding
CN105594208A (en) * 2013-10-11 2016-05-18 索尼公司 Decoding device, decoding method, encoding device, and encoding method
CN105900424A (en) * 2013-10-11 2016-08-24 索尼公司 Decoding device, decoding method, encoding device, and encoding method
US9774871B2 (en) 2014-02-13 2017-09-26 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding image
CN109451311A (en) * 2013-09-25 2019-03-08 索尼公司 Video data encoding, decoding apparatus and method
WO2022037701A1 (en) * 2020-08-21 2022-02-24 Beijing Bytedance Network Technology Co., Ltd. Coefficient reordering in video coding
US11722698B2 (en) 2016-08-24 2023-08-08 Sony Corporation Image processing apparatus and image processing method

Families Citing this family (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2648605C1 (en) * 2011-10-17 2018-03-26 Кт Корпорейшен Method of video signal decoding
KR101549910B1 (en) 2011-10-17 2015-09-03 주식회사 케이티 Adaptive transform method based on in-screen rediction and apparatus using the method
CN104378637B (en) * 2011-10-18 2017-11-21 株式会社Kt Video signal decoding method
CN107257460B (en) 2011-10-19 2019-08-16 株式会社Kt The method of decoding video signal
US20130188736A1 (en) 2012-01-19 2013-07-25 Sharp Laboratories Of America, Inc. High throughput significance map processing for cabac in hevc
JP6480186B2 (en) * 2012-01-19 2019-03-06 ヴィド スケール インコーポレイテッド Video coding quantization and dynamic range control system and method
US9860527B2 (en) 2012-01-19 2018-01-02 Huawei Technologies Co., Ltd. High throughput residual coding for a transform skipped block for CABAC in HEVC
US9743116B2 (en) 2012-01-19 2017-08-22 Huawei Technologies Co., Ltd. High throughput coding for CABAC in HEVC
US10616581B2 (en) 2012-01-19 2020-04-07 Huawei Technologies Co., Ltd. Modified coding for a transform skipped block for CABAC in HEVC
US9654139B2 (en) 2012-01-19 2017-05-16 Huawei Technologies Co., Ltd. High throughput binarization (HTB) method for CABAC in HEVC
CN109905710B (en) * 2012-06-12 2021-12-21 太阳专利托管公司 Moving picture encoding method and apparatus, and moving picture decoding method and apparatus
US9426466B2 (en) 2012-06-22 2016-08-23 Qualcomm Incorporated Transform skip mode
GB2503875B (en) * 2012-06-29 2015-06-10 Canon Kk Method and device for encoding or decoding an image
US9877025B2 (en) 2013-07-12 2018-01-23 British Broadcasting Corporation Video encoding and decoding with prediction at higher precision
JP6139774B2 (en) * 2013-07-15 2017-05-31 ホアウェイ・テクノロジーズ・カンパニー・リミテッド Modified coding for transform skipped blocks for CABAC in HEVC
US10645399B2 (en) 2013-07-23 2020-05-05 Intellectual Discovery Co., Ltd. Method and apparatus for encoding/decoding image
US9445132B2 (en) 2013-09-09 2016-09-13 Qualcomm Incorporated Two level last significant coefficient (LSC) position coding
EP3222044A1 (en) * 2014-11-21 2017-09-27 VID SCALE, Inc. One-dimensional transform modes and coefficient scan order
KR102365685B1 (en) 2015-01-05 2022-02-21 삼성전자주식회사 Method for operating of encoder, and devices having the encoder
KR102390407B1 (en) * 2015-03-18 2022-04-25 한화테크윈 주식회사 Decoder and inverse transformation method in decoder
CN108028930 (en) * 2015-09-10 2018-05-11 三星电子株式会社 Encoding device and decoding device, and encoding method and decoding method thereof
US20180278943A1 (en) * 2015-09-21 2018-09-27 Lg Electronics Inc. Method and apparatus for processing video signals using coefficient induced prediction
US20170150176A1 (en) * 2015-11-25 2017-05-25 Qualcomm Incorporated Linear-model prediction with non-square prediction units in video coding
US10244248B2 (en) 2016-02-25 2019-03-26 Mediatek Inc. Residual processing circuit using single-path pipeline or multi-path pipeline and associated residual processing method
AU2017264000A1 (en) * 2016-05-13 2018-11-22 Sony Corporation Image processing device and method
KR102397673B1 (en) * 2016-05-13 2022-05-16 소니그룹주식회사 Image processing apparatus and method
CN113411579B (en) * 2016-05-13 2024-01-23 夏普株式会社 Image decoding device and method, image encoding device and method
CN109792522B (en) * 2016-09-30 2021-10-15 索尼公司 Image processing apparatus and method
CA3041856A1 (en) * 2016-12-28 2018-07-05 Sony Corporation Image processing apparatus and method
BR112020000876A2 (en) * 2017-07-28 2020-07-21 Panasonic Intellectual Property Corporation Of America encoding device, decoding device, encoding method, and decoding method
CN116055721A (en) 2017-07-28 2023-05-02 松下电器(美国)知识产权公司 Encoding device and encoding method
CN115460403A (en) * 2017-07-31 2022-12-09 韩国电子通信研究院 Method of encoding and decoding image and computer readable medium storing bitstream
EP3661214B1 (en) * 2017-08-04 2022-07-20 LG Electronics Inc. Method and apparatus for configuring transform for video compression
EP3484151A1 (en) * 2017-11-13 2019-05-15 Thomson Licensing Method and apparatus for generating quantization matrices in video encoding and decoding
CN116132673A (en) * 2017-12-13 2023-05-16 三星电子株式会社 Video decoding method and apparatus thereof, and video encoding method and apparatus thereof
BR122021019719B1 (en) * 2017-12-21 2022-05-24 Lg Electronics Inc Image decoding/coding method performed by a decoding/coding apparatus, decoding/coding apparatus for image decoding/coding, data transmission method and apparatus comprising a bit stream for an image, and non-transitory computer-readable digital storage medium
CN111727606B (en) * 2018-01-02 2023-04-11 三星电子株式会社 Video decoding method and apparatus thereof, and video encoding method and apparatus thereof
JP6477930B2 (en) * 2018-01-17 2019-03-06 ソニー株式会社 Encoding apparatus and encoding method
CN112806018A (en) 2018-10-05 2021-05-14 韩国电子通信研究院 Image encoding/decoding method and apparatus, and recording medium storing bit stream
US11412260B2 (en) * 2018-10-29 2022-08-09 Google Llc Geometric transforms for image compression
CN113632493A (en) * 2019-03-13 2021-11-09 北京字节跳动网络技术有限公司 Sub-block transform in transform skip mode
WO2020185027A1 (en) * 2019-03-13 2020-09-17 현대자동차주식회사 Method and device for efficiently applying transform skip mode to data block
JP6891325B2 (en) * 2019-03-20 2021-06-18 キヤノン株式会社 Image coding method
JP6743225B2 (en) * 2019-03-20 2020-08-19 キヤノン株式会社 Image decoding apparatus, image decoding method and program
US11695960B2 (en) * 2019-06-14 2023-07-04 Qualcomm Incorporated Transform and last significant coefficient position signaling for low-frequency non-separable transform in video coding
CN112135147B (en) * 2019-06-24 2023-02-28 杭州海康威视数字技术股份有限公司 Encoding method, decoding method and device
CN110418138B (en) * 2019-07-29 2021-08-27 北京奇艺世纪科技有限公司 Video processing method and device, electronic equipment and storage medium
CN114270817A (en) 2019-08-20 2022-04-01 北京字节跳动网络技术有限公司 Location-based coefficient scaling
CN117499661A (en) * 2019-09-09 2024-02-02 北京字节跳动网络技术有限公司 Coefficient scaling for high precision image and video codecs
JP7323712B2 (en) 2019-09-21 2023-08-08 北京字節跳動網絡技術有限公司 Precision transform and quantization for image and video coding
WO2021117500A1 (en) * 2019-12-11 2021-06-17 ソニーグループ株式会社 Image processing device, bit stream generation method, coefficient data generation method, and quantization coefficient generation method
IL295916A (en) * 2020-03-12 2022-10-01 Interdigital Vc Holdings France Method and apparatus for video encoding and decoding

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2192783A1 (en) * 2006-01-09 2010-06-02 Matthias Narroschke Adaptive coding of the prediction error in hybrid video coding

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69227352T2 (en) * 1991-11-12 1999-04-15 Japan Broadcasting Corp Method and system for performing highly effective image signal coding
JP3361543B2 (en) * 1992-01-27 2003-01-07 日本放送協会 Image signal encoding device
JP3211989B2 (en) * 1992-08-31 2001-09-25 日本ビクター株式会社 Orthogonal transform encoding device and decoding device
KR0134504B1 (en) * 1992-09-09 1998-04-23 배순훈 Image coder with adaptive frequency converter
JPH06217280A (en) * 1993-01-14 1994-08-05 Sony Corp Moving picture encoding and decoding device
US6215898B1 (en) * 1997-04-15 2001-04-10 Interval Research Corporation Data processing system and method
KR20020064913 (en) * 2000-09-27 2002-08-10 코닌클리케 필립스 일렉트로닉스 엔.브이. Decoding of data
US7206459B2 (en) * 2001-07-31 2007-04-17 Ricoh Co., Ltd. Enhancement of compressed images
JP4267848B2 (en) * 2001-09-25 2009-05-27 株式会社リコー Image encoding device, image decoding device, image encoding method, and image decoding method
CN101448162B (en) * 2001-12-17 2013-01-02 微软公司 Method for processing video image
US7242713B2 (en) * 2002-05-02 2007-07-10 Microsoft Corporation 2-D transforms for image and video coding
US6795584B2 (en) * 2002-10-03 2004-09-21 Nokia Corporation Context-based adaptive variable length coding for adaptive block transforms
KR20050026318A (en) * 2003-09-09 2005-03-15 삼성전자주식회사 Video encoding and decoding device comprising intra skip mode
CN101005620B (en) * 2004-09-03 2011-08-10 微软公司 Innovations in coding and decoding macroblock and motion information for interlaced and progressive video
CN100488254C (en) * 2005-11-30 2009-05-13 联合信源数字音视频技术(北京)有限公司 Context-based entropy coding method and decoding method
CN101106721A (en) * 2006-07-10 2008-01-16 华为技术有限公司 An encoding and decoding device and related coder
CN101267553A (en) * 2007-03-12 2008-09-17 华为技术有限公司 A method and device for coding and decoding
KR101885258B1 (en) * 2010-05-14 2018-08-06 삼성전자주식회사 Method and apparatus for video encoding, and method and apparatus for video decoding
CN102447907A (en) * 2012-01-31 2012-05-09 北京工业大学 Video sequence coding method aiming at HEVC (High Efficiency Video Coding)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2192783A1 (en) * 2006-01-09 2010-06-02 Matthias Narroschke Adaptive coding of the prediction error in hybrid video coding

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FATIH KAMISLI ET AL: "Video compression with 1-D directional transforms in H.264/AVC", IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2010, IEEE, PISCATAWAY, NJ, USA, 14 March 2010 (2010-03-14), pages 738 - 741, XP031697009, ISBN: 978-1-4244-4295-9 *
MARTA MRAK ET AL: "Transform skip mode", 7. JCT-VC MEETING; 98. MPEG MEETING; GENEVA; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16), no. JCTVC-G575, 8 November 2011 (2011-11-08), XP030110559 *
YUMI SOHN ET AL: "One Dimensional Transform For H.264 Based Intra Coding (Abstract)", 26. PICTURE CODING SYMPOSIUM; 7-11-2007 - 9-11-2007; LISBON, 7 November 2007 (2007-11-07), XP030080458 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109451311A (en) * 2013-09-25 2019-03-08 索尼公司 Video data encoding, decoding apparatus and method
CN105594208A (en) * 2013-10-11 2016-05-18 索尼公司 Decoding device, decoding method, encoding device, and encoding method
CN105900424A (en) * 2013-10-11 2016-08-24 索尼公司 Decoding device, decoding method, encoding device, and encoding method
CN105900424B (en) * 2013-10-11 2019-05-28 索尼公司 Decoding apparatus, coding/decoding method, code device and coding method
US9774871B2 (en) 2014-02-13 2017-09-26 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding image
US10609388B2 (en) 2014-02-13 2020-03-31 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding image
CN105578190A (en) * 2016-02-03 2016-05-11 珠海全志科技股份有限公司 Lossless compression method and system for hardware video decoding
US10681363B2 (en) 2016-02-03 2020-06-09 Allwinner Technology Co., Ltd. Lossless compression method and system applied to hardware video decoding
US11722698B2 (en) 2016-08-24 2023-08-08 Sony Corporation Image processing apparatus and image processing method
WO2022037701A1 (en) * 2020-08-21 2022-02-24 Beijing Bytedance Network Technology Co., Ltd. Coefficient reordering in video coding

Also Published As

Publication number Publication date
ES2574278T3 (en) 2016-06-16
KR101622450B1 (en) 2016-05-18
EP3026911A1 (en) 2016-06-01
PL2652954T3 (en) 2016-10-31
US8923406B2 (en) 2014-12-30
JP2014523175A (en) 2014-09-08
KR20140027932A (en) 2014-03-07
GB2492333A (en) 2013-01-02
JP6063935B2 (en) 2017-01-18
EP2652954B1 (en) 2016-03-30
WO2013001279A3 (en) 2013-03-07
EP2652954A2 (en) 2013-10-23
GB201110873D0 (en) 2011-08-10
CN105847815A (en) 2016-08-10
CN105847815B (en) 2019-05-10
JP6328220B2 (en) 2018-05-23
CN103404141B (en) 2017-06-06
WO2013001279A2 (en) 2013-01-03
CN103404141A (en) 2013-11-20
JP2017098975A (en) 2017-06-01
US20140056362A1 (en) 2014-02-27
TWI516095B (en) 2016-01-01
PT2652954E (en) 2016-06-07
GB2492333B (en) 2018-12-12
TW201320751A (en) 2013-05-16

Similar Documents

Publication Publication Date Title
US8923406B2 (en) Video encoding and decoding using transforms
US10708584B2 (en) Image decoding method using intra prediction mode
EP2942954A2 (en) Image decoding apparatus
EP3402202A1 (en) Image decoding apparatus
WO2012161445A2 (en) Decoding method and decoding apparatus for short distance intra prediction unit
KR20130029130A (en) Method of short distance intra prediction unit decoding and decoder
GB2559912A (en) Video encoding and decoding using transforms
US20240129512A1 (en) Encoding and decoding method, encoder, decoder, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12729697

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12729697

Country of ref document: EP

Kind code of ref document: A1