GB2534604A - Video encoding and decoding with adaptive quantisation - Google Patents

Video encoding and decoding with adaptive quantisation

Info

Publication number
GB2534604A
GB2534604A GB1501505.0A GB201501505A GB2534604A GB 2534604 A GB2534604 A GB 2534604A GB 201501505 A GB201501505 A GB 201501505A GB 2534604 A GB2534604 A GB 2534604A
Authority
GB
United Kingdom
Prior art keywords
values
block
quantisation
contouring
offset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1501505.0A
Other versions
GB2534604B (en
GB201501505D0 (en
Inventor
Naccari Matteo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
British Broadcasting Corp
Original Assignee
British Broadcasting Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by British Broadcasting Corp filed Critical British Broadcasting Corp
Priority to GB1501505.0A priority Critical patent/GB2534604B/en
Publication of GB201501505D0 publication Critical patent/GB201501505D0/en
Priority to PCT/GB2016/050197 priority patent/WO2016120630A1/en
Publication of GB2534604A publication Critical patent/GB2534604A/en
Application granted granted Critical
Publication of GB2534604B publication Critical patent/GB2534604B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • H04N19/126Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/14Coding unit complexity, e.g. amount of activity or edge presence estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/18Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Abstract

Disclosed are methods for encoding a video stream which aim to reduce contouring artefacts in areas with small variations in pixel intensities. In one aspect, during the video encoding process a quantisation operation is carried out on a block of pixels by adding a rounding offset, followed by division by a quantisation step and reduction to integer values. Blocks of the image prone to contouring are identified and, for a contouring-prone block only, the rounding offset is varied between values of the block. The quantised block of pixels may first have been subjected to predictive and transform coding. In a further aspect, a quantisation operation is carried out on the transform coefficients of an image block containing residual prediction values that have been subjected to a frequency transformation. An offset used in the dead zone is adjusted to avoid non-DC quantised coefficients being rounded to zero.

Description

VIDEO ENCODING AND DECODING WITH ADAPTIVE QUANTISATION
FIELD OF THE INVENTION
This invention is related to video compression and decompression systems, notably to a method for improving picture quality over blocks which present low contrast or smooth gradient variations.
BACKGROUND TO THE INVENTION
This invention is directed to the field of video compression, which aims to reduce the bitrate required to transmit and store video content while at the same time maintaining an acceptable visual quality (lossy coding). Data compression in lossy video coding is achieved by discarding some redundant information in the source data. The processing stage responsible for discarding the redundant information is called quantisation. Accordingly, each source data sample is divided by a quantity called the quantisation step. The result of this division is rounded to the nearest integer value. The obtained result is called the reproduction level; it is then encoded using entropy coding techniques known to those skilled in the art and is eventually written to the bitstream (i.e. the compressed version of the input video content). During decompression, the received data are first decoded by inverting the adopted entropy coding process and then the obtained value is multiplied by the same quantisation step so that a lossy version of the original source data sample can be reconstructed. Quantisation can take place in the pixel or the frequency domain. The frequency domain is employed when a spatial transformation is applied to the source data in order to further compact the energy into a few coefficients. Moreover, quantisation can be applied either directly to the source data (or its frequency representation) or to a residual signal obtained by subtracting a predictor from the source, whereby the prediction is computed according to some rules defined in the coding process.
The lost information cannot be recovered at the decoder and this loss may introduce some quality degradation (artefacts). The nature of coding artefacts mainly depends on both the type of coding technique selected and the source content. Considering a hybrid block-based predictive video codec architecture such as the one specified by the H.264/AVC and H.265/HEVC standards, the most prominent coding artefacts are blocking, ringing and blurring. As stated above, the source content also influences the type of artefacts and their visibility. In image regions with low contrast and smooth gradient variation, another type of coding artefact may be visible. It is called contouring and is sometimes also referred to as banding. Contouring is characterised by false edges created in image areas with the aforementioned characteristics. Moreover, small temporal variations of the pixel intensities in these areas also change the spatial location of these false edges, resulting in false motion which might attract users' attention.
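As a rough illustration of the divide-and-round quantisation and reconstruction described above, the following sketch (with an arbitrary quantisation step chosen purely for illustration) shows how reproduction levels are formed and how small sample values collapse to zero, which is the mechanism behind the contouring discussed next.

```python
Q_STEP = 4.0  # illustrative quantisation step, not a value from the invention

def quantise(sample, q=Q_STEP):
    # Divide by the quantisation step and round to the nearest integer
    # to obtain the reproduction level that is entropy coded.
    return round(sample / q)

def reconstruct(level, q=Q_STEP):
    # Decoder side: multiply the decoded level by the same step.
    return level * q

# Small values (such as gentle residual ripples) collapse to level 0 and
# are lost, while larger values survive quantisation.
for v in (0.4, 1.7, 12.0):
    print(v, "->", quantise(v), "->", reconstruct(quantise(v)))
```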
Contouring mainly appears in these areas because the prediction cannot compensate for the smooth variations, which are therefore retained in the residual signal. Given the small magnitude of these ripples (both in the spatial and frequency domains), quantisation will round all the values to zero. During reconstruction, the zero-valued signal will be added to the predictor, which will contain only limited pixel variations (i.e. the false edges), and the overall reconstructed image area will be affected by contouring. To limit the visibility of contouring artefacts, the quantisation process should be carefully designed to retain the small variations of the sample data associated with either the residual signal or the source. One way to achieve this is to vary the quantisation process over the data in image areas which might be affected by contouring.
The invention described in US patent 8503536, "Quantization adjustments for DC shift artifacts", specifies a step variation method where the quantisation step may be varied over contour-prone blocks so that the associated transform coefficients will not all be rounded to zero. The method specifies an iterative procedure to vary the quantisation step for either all coefficients or a given subset. The main idea of this method is that for a scalar uniform quantiser such as the one specified in both the H.264/AVC and H.265/HEVC standards, the extent of the dead-zone depends on the quantisation step. Whilst variation of the quantisation step may effectively prevent transform coefficients from being rounded to zero, the step value which actually prevents this rounding may be very small, so that the overall coding rate will increase significantly. Signalling of the change in quantisation step to the decoder will add to the increase in the overall coding rate.
SUMMARY OF THE INVENTION
This invention aims to improve the picture quality of compressed images by removing or limiting the visibility of the contouring artefact in areas with small variation of pixel intensities. Typically, lossy video compression reduces the amount of information associated with the source data by quantising the original signal s, or any other representation computed from s, to a finite set of levels which will then be encoded according to the selected entropy coding technique. Limiting s to a finite set of levels leads to the introduction of coding artefacts which may be visible in the decompressed image. In particular, for hybrid predictive image and video codecs, a residual representation (r) of s is first obtained from a predictor computed over the already reconstructed data. A frequency transformation may be applied to the residual signal r, and quantisation can then be performed. Since the predictor can only approximate an average trend of the whole pixel variation, some fluctuations with small magnitude will still be present in the residual signal and may be quantised to zero, so that the only possible pixel values will be the ones obtainable from the predictor. Therefore, in the reconstructed picture, contouring is likely to appear and be visible. In order to prevent contouring artefacts, the quantisation process needs to be modified accordingly. In the prior art example referred to above, the quantisation step for each sample inside a contouring-prone block can be varied to retain a sufficient amount of dynamics so that false edges are avoided. It should be noted that this case can be efficiently implemented when a frequency transformation is first applied. In fact, the ultimate goal of a frequency transformation is energy compaction of the input data into a small subset of coefficients. For image blocks with smooth gradient variations, the overall signal dynamic can be concentrated in a few coefficients and therefore a few different quantisation steps can be used. However, there is a significant price to be paid in the bitrate for addressing the contouring problem in this manner, and the decoder is required to respond to the changes in quantisation step.
According to one aspect of the present invention, there is provided a method of video encoding comprising the steps of dividing an input image into spatial blocks of pixel values; optionally generating residual values by subtracting prediction values from the pixel values; optionally applying a frequency transformation to the residual or pixel values to form transform coefficient values; and applying a quantisation operation to the values of the block, the quantisation operation comprising addition of a rounding offset, division by a quantisation step and reduction to integer values; wherein: blocks prone to contouring are identified; and for a contouring-prone block only, the rounding offset is varied between values of the block.
Where a frequency transformation is applied to the residual or pixel values to form transform coefficient values, the rounding offset for the DC coefficient may remain unchanged.
Variation of the rounding offset may be confined to a predefined subset of mid frequency range transform coefficients and may be asymmetric between horizontal and vertical frequencies.
Where no frequency transformation is applied, the rounding offset may remain unchanged for pixel or residual values in excess of a given threshold.
The manner of variation of the rounding offset within a block may be determined by the values of the block. For example, the rounding offset may be varied by that amount sufficient to prevent the selected value being quantised to zero.
According to another aspect of the present invention, there is provided a method of video encoding comprising the steps of: dividing an input image into a non-overlapping grid of spatial blocks; generating prediction values for the block; generating difference values by subtracting the prediction values from the pixel values of the block; applying a frequency transformation to the difference values of the block to generate transform coefficients; and applying a quantisation operation to the transform coefficients of the block; wherein the offset used in the dead-zone is adjusted to avoid non-DC quantised coefficients being rounded to 0.
Preferably, the offset used in the dead-zone is adjusted only for those blocks which are affected by contouring.
Suitably, the offset used in the dead-zone of the quantiser is adjusted according to an iterative procedure which optimises a given cost metric.
In accordance with an embodiment of this invention, the dead-zone of the quantiser can be varied on a per-sample basis rather than just on a per-block basis. The quantiser dead-zone refers to the range of input values which will be quantised to zero. Typically, the larger this zone, the higher the number of coefficients quantised to zero and therefore the lower the coding rate. Conversely, the narrower the dead-zone, the more coefficients will be retained after quantisation and the higher the coding rate. In order to prevent contouring, the right extension of the dead-zone should be found so that the perceived quality is improved and the coding efficiency penalty is minimised. It should be noted that in the presented examples three main issues need to be understood and addressed: first, which blocks are potentially affected by contouring; second, which data in those blocks need to be modified; and third, by how much the data values need to be modified so that false edges are not introduced.
For the first issue, a contouring detection algorithm can be provided so that the quantiser is only modified over the detected image areas. One possible detection algorithm may measure the amount of contrast for a given region by computing the ratio between the peak pixel intensity difference and the maximum pixel value. This value can also be compared with the one obtained from the surrounding image areas in order to homogeneously detect large regions where false edges are more noticeable.
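As an illustration only, the following sketch implements one such contrast measure; the threshold value and the block-based interface are assumptions made here, not part of the invention as claimed.

```python
import numpy as np

def contouring_prone(block, max_pixel_value=255, threshold=0.05):
    """Return True when a block's contrast ratio suggests it is contour-prone.

    The measure is the ratio between the peak pixel intensity difference in
    the block and the maximum pixel value; the threshold is illustrative.
    A fuller detector would also compare the ratio against surrounding areas,
    as described above.
    """
    block = np.asarray(block, dtype=float)
    contrast = (block.max() - block.min()) / max_pixel_value
    return contrast < threshold
```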
For the second issue, i.e. which input data to the quantiser need to be modified, pixel variations in contour-prone blocks can be modelled. In particular, this modelling will highlight the relevant data which contribute to the smooth gradient variation, so that they can be quantised accordingly. It should be noted that if the data is represented in the frequency domain, the modelling will identify the relevant coefficients, while if the data is represented in the spatial domain (i.e. either pixel or residual), the relevant pixel or residual values will be identified for adaptive quantisation, thus preventing contouring.
Finally, for the third issue, that is, by how much to vary the quantisation over contour-prone blocks, two main approaches can be followed: an empirical one, where the quantisation amount is decided based on some a priori knowledge, or a deterministic one, whereby the exact amount of quantisation is determined based on knowledge of the input data and the quantisation process.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows an example of the quantisation process where the reproduction levels, decision intervals and dead-zone are highlighted.
Figure 2 shows an example of a frame containing large areas which are potentially affected by contouring, as highlighted by the box.
Figure 3 shows an example of pixel variations inside the area highlighted in Figure 2.
Figure 4 shows an example of contouring created in the highlighted area of pixels in Figure 2 when compression is applied.
Figure 5 shows an example of frequency transformation applied to a block of residuals (left) and the result of quantisation applied to the same block (right).
Figure 6 shows an example of a system for removing or reducing contouring.
Figure 7 shows an example of the distribution of the input data to the quantiser.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
The present invention is now described in detail by way of examples. Figure 1 sketches the quantisation process, where the horizontal straight line represents the range of values assumed by the input to the quantiser. This range is divided into intervals called decision intervals, and the interval centred around zero is called the dead-zone. If a value v falls inside a given interval, it is reconstructed after quantisation with the reproduction level associated with that interval. All the intervals are indexed and these indexes are entropy encoded and written in the bitstream. At the decoder side, the reproduction level is simply obtained by multiplying the index by the quantisation step q (which is also transmitted). The described process can also be performed using the closed-form expression l = sign(v) · ⌊ |v| / q + a ⌋, where l denotes the quantiser level, q is the quantisation step, a is the rounding offset and ⌊·⌋ returns the nearest integer less than or equal to its argument.
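A minimal sketch of this closed-form quantiser and the matching decoder-side reconstruction (the function names are illustrative):

```python
import math

def deadzone_quantise(v, q, a):
    # l = sign(v) * floor(|v| / q + a); values whose magnitude falls inside
    # the dead-zone map to level 0.
    l = math.floor(abs(v) / q + a)
    return l if v >= 0 else -l

def dequantise(l, q):
    # Decoder side: the reproduction level is the index multiplied by q.
    return l * q
```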
In an example of the present invention, the rounding offset a can be further written as: a = (1-m)z, where z denotes the offset that would have been used in the absence of the dead-zone adjustment method described in this invention. Parameter m controls the dead-zone extension (if necessary sample-by-sample or coefficient-by-coefficient) by varying in the range [0, 1], where the closer m is to one, the smaller this extension becomes. As noted, this manner of adjustment of the dead-zone is particularly useful for image regions affected by contouring artefacts. In fact, smooth gradient image regions are characterised by small variations of the pixel intensities around the average value.
Figure 2 depicts an example of an image containing large areas potentially affected by contouring. Figure 3 shows the pixel intensity variations in the area highlighted by the box in Figure 2. The values depicted in Figure 3 refer to the luminance component only, and the colour mapping in this figure has been changed in order to highlight small fluctuations, as shown by the colour bar on the right side of the figure. The luminance component is shown because this is the only component affected by the contouring artefact for images and video represented in the YCbCr colour space. This colour space is widely adopted in video compression since it provides a high degree of decorrelation between the colour components. Figure 4 shows the effect of compression on the highlighted area in Figure 2. The values depicted in Figure 4 refer to the luminance component only, and the grey level mapping in this figure has been changed in order to highlight the contouring artefact, which is characterised by false edges created in the region. Contouring is introduced because the data associated with small pixel variations are removed by quantisation.
Considering hybrid predictive video codecs such as the ones specified by the H.264/AVC and H.265/HEVC standards, a predictor obtained from reconstructed data and computed according to the rules specified by the standard is usually subtracted from the source data. A frequency transformation is applied to the resulting residual signal to represent the signal with a few coefficients. Accordingly, Figure 5 shows on the left an example of transform coefficients obtained from the residual of the luma component related to one image block affected by contouring. The level bar for this figure highlights the small magnitude of these coefficients. When quantisation is applied, all values are rounded to zero, as shown in the right part of Figure 5. In this case contouring is likely to be introduced since most of the pixel dynamics was contained in the residual signal, which now has all samples equal to zero. From the example shown in Figure 5, it may be understood that contouring can be prevented in hybrid predictive video codecs by modifying the quantisation stage so that the transform coefficients will be retained after quantisation. As specified at the beginning of this section, the size of the dead-zone can be adjusted by modifying the rounding offset a. This rounding offset is used to minimise the quantisation error; for the case of contouring considered in this invention, it will be understood that varying the rounding offset guarantees that the appropriate samples (whether pixel values, residuals or transform coefficients) are retained after quantisation, so that contouring is prevented.
It will be understood from the embodiments of this invention that, by adjusting the rounding offset only, the quantisation step does not need to be modified and therefore a lower increase in the bitrate can generally be achieved. Moreover, since the rounding offset can be varied on a coefficient basis, a finer tuning of the available bitrate can be achieved with respect to a solution where the quantisation step is varied on a block basis. By adapting the rounding offset rather than the quantisation step, it is also assured that the output bitstream remains compliant, with no adaptation in a downstream decoder being required.
Figure 6 shows a block scheme for the quantisation process specified in this invention. The transform coefficients constitute the data presented as input to the quantiser, together with a set of modulation factors mi which are used to modify the rounding offset used for each coefficient. Accordingly, the associated dead-zone for each coefficient is modified as well, so that each coefficient can be quantised to a value different from zero, thus preventing contouring. Once the value for the rounding offset is obtained from mi, quantisation can be performed with step q so that reproduction level l is obtained, as depicted in Figure 6. Generally, video coding standards specify only the syntax and semantics of reproduction levels and quantisation steps. Consequently, the set of modulation factors mi does not need to be communicated to the decoder. It will therefore be understood from the embodiments of this invention that the method produces a bitstream which is still compliant with the adopted compression standard. For the methods described here, selection of the modulation factors mi can consider two aspects: which input data require dead-zone adjustment and which value should be used. These two aspects are addressed differently depending on whether a frequency transformation is applied to the residual signal r.
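Before addressing those two aspects in turn, the arrangement of Figure 6 can be sketched as follows, using the relation a = (1 − m)z given above on a per-coefficient basis; the function name and array-based interface are illustrative assumptions rather than part of the specification.

```python
import numpy as np

def quantise_block(coeffs, q, z, m):
    """Quantise a block of transform coefficients with per-coefficient offsets.

    Each coefficient uses its own rounding offset a_i = (1 - m_i) * z, so the
    dead-zone can be adjusted coefficient by coefficient. Only the levels and
    the step q follow the standard bitstream syntax; m stays at the encoder.
    """
    coeffs = np.asarray(coeffs, dtype=float)
    offsets = (1.0 - np.asarray(m, dtype=float)) * z
    levels = np.sign(coeffs) * np.floor(np.abs(coeffs) / q + offsets)
    return levels.astype(int)
```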
For the first aspect, and when a frequency transformation is applied, the variation of pixels in contour-prone image areas needs to be modelled. Accordingly, the residual signal associated with these image areas is considered. If the value of the residual samples can be characterised using closed-form or numerical models, a frequency analysis can then be performed to identify the signal spectrum. In one example, the value of the residual signal r at location (x, y) can be predicted using a first-order autoregressive model r(x, y) = ρ · r(x, y−1) + η(μ, σ²), where ρ is the correlation coefficient and η is white Gaussian noise with mean μ and variance σ². The model parameters (ρ, μ, σ²) can be estimated using least squares from data collected over real video sequences encoded with one of the aforementioned video coding standards. Once the model is parametrised, the Fourier transform can be computed so that the main frequency components can be identified. It will be understood that once the signal bandwidth has been determined, the transform coefficients which require dead-zone adjustment as specified by the method described in this invention can be identified. The following table lists some specific values for mi used over a 4x4 block of coefficients, where a zero entry in this table means that the conventional rounding parameter or dead-zone extension z will be used.
0    0.8  0.2  0
0.7  0.3  0    0
0.5  0    0    0
0    0    0    0
It will be noted that three ranges of coefficients can be identified:
* a DC coefficient which is unchanged;
* mid frequency range coefficients which are changed by an amount which reduces with frequency and can if necessary be asymmetric with respect to horizontal and vertical frequencies;
* high frequency coefficients which are unchanged.
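Continuing the illustrative quantise_block sketch above, this table could be supplied as the modulation-factor array; the step q, conventional offset z and coefficient values below are placeholders chosen only to show the interface.

```python
m = np.array([[0.0, 0.8, 0.2, 0.0],
              [0.7, 0.3, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])   # zero entries keep the conventional offset z

coeffs = np.full((4, 4), 2.5)          # placeholder transform coefficient values
levels = quantise_block(coeffs, q=8.0, z=1.0 / 6.0, m=m)
```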
When a frequency transformation is not applied, the values of the residual signal are considered in order to identify the samples to which the method proposed in this invention is applied. As mentioned before, contour-prone blocks are characterised by small pixel variations around the average value. When the residual signal is obtained, these small variations are shifted around the zero value. Figure 7 shows an example of the distribution of the values of the residual signal in one contour-prone block. Small pixel value variations around zero are limited by a given value R, which can be associated with the variance of the residual data. Figure 7 also shows some other residual values which are located far away from zero and whose quantisation will therefore not lead to the introduction of contouring artefacts. From the histogram, it will be understood that identifying the samples where the dead-zone adjustment proposed in this invention is applied translates into finding which samples have values restricted to a given range bounded by R.
For the remaining aspect, that is, which value should be used for the modulation factors mi, an empirical and a deterministic approach can be followed. For the empirical one, an iterative procedure can be devised whereby, for each coefficient ci, the quantiser dead-zone is adjusted by varying mi in a given range. The minimum value which prevents ci being quantised to zero is selected as the final one. This procedure can be repeated for all remaining coefficients so that the final values for each mi can be obtained. The deterministic approach computes mi so that the value v for the current residual will have a reproduction level different from zero. This can be achieved by requiring the reproduction level to be non-zero, i.e. by solving the following inequality for mi: |v| / q + (1 − mi) · z ≥ 1. It will be understood from the embodiments of this invention that the empirical as well as the deterministic approach can be used both when a frequency transform is applied to the residual signal and when it is not.
In the method described in this invention, and when transformation is applied, the dead-zone for the frequency coefficient associated with the frequency value 0, denoted as the DC component of the residual signal, is not adjusted. As stated in the previous sections, contouring appears because small variations in the pixel values are cancelled after quantisation. Given the property of frequency transforms such as the Discrete Cosine Transform (DCT) commonly adopted in image and video coding standards, all variations in the signal values are captured by coefficients associated with frequency values higher than zero. Therefore the method proposed in this invention modifies the quantiser dead-zone for non-DC coefficients. When quantisation is directly applied to the residual samples, one method proposed in this invention does not adjust the dead-zone of the quantiser for those samples whose value is outside a given range.
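For the spatial-domain case just described, a minimal sketch of the sample selection is given below; deriving R from the standard deviation of the residuals is an assumption made here for illustration, one of several ways the range bound could be tied to the residual variance.

```python
import numpy as np

def samples_to_adjust(residual_block, r_scale=1.0):
    """Mask of residual samples that should receive the dead-zone adjustment.

    Samples whose magnitude lies within a range bounded by R are selected;
    samples located far from zero keep the conventional quantiser offset.
    """
    residual_block = np.asarray(residual_block, dtype=float)
    R = r_scale * residual_block.std()   # R tied to the spread of the residuals
    return np.abs(residual_block) <= R
```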

Claims (11)

CLAIMS
  1. A method of video encoding comprising the steps of: dividing an input image into spatial blocks of pixel values; optionally generating residual values by subtracting prediction values from the pixel values; optionally applying a frequency transformation to the residual or pixel values to form transform coefficient values; and applying a quantisation operation to the values of the block, the quantisation operation comprising addition of a rounding offset, division by a quantisation step and reduction to integer values; wherein: blocks prone to contouring are identified; and for a contouring-prone block only, the rounding offset is varied between values of the block.
  2. The method of claim 1, wherein a frequency transformation is applied to the residual or pixel values to form transform coefficient values and wherein the rounding offset for the DC coefficient remains unchanged.
  3. The method of claim 2, wherein variation of the rounding offset is confined to a predefined subset of mid frequency range transform coefficients.
  4. The method of claim 2 or claim 3, wherein variation of the rounding offset is asymmetric between horizontal and vertical frequencies.
  5. The method of claim 1, wherein no frequency transformation is applied to the residual or pixel values and wherein the rounding offset remains unchanged for pixel or residual values in excess of a given threshold.
  6. The method of any one of the preceding claims, wherein the manner of variation of the rounding offset within a block is determined by the values of the block.
  7. The method of claim 6, wherein the rounding offset is varied by that amount sufficient to prevent the selected value being quantised to zero.
  8. A method of video encoding comprising the steps of: dividing an input image into a non-overlapping grid of spatial blocks; generating prediction values for the block; generating difference values by subtracting the prediction values from the pixel values of the block; applying a frequency transformation to the difference values of the block to generate transform coefficients; and applying a quantisation operation to the transform coefficients of the block; wherein the offset used in the dead zone is adjusted to avoid non-DC quantised coefficients being rounded to 0.
  9. A method of video encoding according to Claim 8 where the offset used in the dead zone is adjusted only for those blocks which are affected by contouring.
  10. A method of video encoding according to Claim 9 where the offset used in the dead zone of the quantiser is adjusted according to an iterative procedure which optimises a given cost metric.
  11. A non-transitory computer program product containing instructions which cause a computer to implement the method of any one of the preceding claims.
GB1501505.0A 2015-01-29 2015-01-29 Video encoding and decoding with adaptive quantisation Active GB2534604B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1501505.0A GB2534604B (en) 2015-01-29 2015-01-29 Video encoding and decoding with adaptive quantisation
PCT/GB2016/050197 WO2016120630A1 (en) 2015-01-29 2016-01-29 Video encoding and decoding with adaptive quantisation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1501505.0A GB2534604B (en) 2015-01-29 2015-01-29 Video encoding and decoding with adaptive quantisation

Publications (3)

Publication Number Publication Date
GB201501505D0 GB201501505D0 (en) 2015-03-18
GB2534604A true GB2534604A (en) 2016-08-03
GB2534604B GB2534604B (en) 2021-11-10

Family

ID=52705463

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1501505.0A Active GB2534604B (en) 2015-01-29 2015-01-29 Video encoding and decoding with adaptive quantisation

Country Status (2)

Country Link
GB (1) GB2534604B (en)
WO (1) WO2016120630A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0440189A (en) * 1990-06-05 1992-02-10 Matsushita Electric Ind Co Ltd Quantizer
US20110075729A1 (en) * 2006-12-28 2011-03-31 Gokce Dane method and apparatus for automatic visual artifact analysis and artifact reduction
US20120051421A1 (en) * 2009-05-16 2012-03-01 Xiaoan Lu Methods and apparatus for improved quantization rounding offset adjustment for video encoding and decoding
US20120307890A1 (en) * 2011-06-02 2012-12-06 Microsoft Corporation Techniques for adaptive rounding offset in video encoding

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006007279A2 (en) * 2004-06-18 2006-01-19 Thomson Licensing Method and apparatus for video codec quantization
US20070237237A1 (en) * 2006-04-07 2007-10-11 Microsoft Corporation Gradient slope detection for video compression
US8498335B2 (en) * 2007-03-26 2013-07-30 Microsoft Corporation Adaptive deadzone size adjustment in quantization

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0440189A (en) * 1990-06-05 1992-02-10 Matsushita Electric Ind Co Ltd Quantizer
US20110075729A1 (en) * 2006-12-28 2011-03-31 Gokce Dane method and apparatus for automatic visual artifact analysis and artifact reduction
US20120051421A1 (en) * 2009-05-16 2012-03-01 Xiaoan Lu Methods and apparatus for improved quantization rounding offset adjustment for video encoding and decoding
US20120307890A1 (en) * 2011-06-02 2012-12-06 Microsoft Corporation Techniques for adaptive rounding offset in video encoding

Also Published As

Publication number Publication date
WO2016120630A1 (en) 2016-08-04
GB2534604B (en) 2021-11-10
GB201501505D0 (en) 2015-03-18

Similar Documents

Publication Publication Date Title
JP7309822B2 (en) Encoding device, decoding device, encoding method, and decoding method
US10841605B2 (en) Apparatus and method for video motion compensation with selectable interpolation filter
EP2005754B1 (en) Quantization adjustment based on texture level
EP2005755B1 (en) Quantization adjustments for dc shift artifacts
US8189933B2 (en) Classifying and controlling encoding quality for textured, dark smooth and smooth video content
EP1513349B1 (en) Bitstream-controlled post-processing video filtering
US10820008B2 (en) Apparatus and method for video motion compensation
US20110069752A1 (en) Moving image encoding/decoding method and apparatus with filtering function considering edges
US9270993B2 (en) Video deblocking filter strength derivation
US20110150080A1 (en) Moving-picture encoding/decoding method and apparatus
WO2014139396A1 (en) Video coding method using at least evaluated visual quality and related video coding apparatus
KR20050061303A (en) Method and apparatus for mpeg artifacts reduction
KR20150095591A (en) Perceptual video coding method using visual perception characteristic
Francisco et al. A generic post-deblocking filter for block based image compression algorithms
US10129565B2 (en) Method for processing high dynamic range video in order to improve perceived visual quality of encoded content
US8369423B2 (en) Method and device for coding
US11343494B2 (en) Intra sharpening and/or de-ringing filter for video coding
GB2534604A (en) Video encoding and decoding with adaptive quantisation
KR101307431B1 (en) Encoder and method for frame-based adaptively determining use of adaptive loop filter
Menon et al. Gain of Grain: A Film Grain Handling Toolchain for VVC-based Open Implementations
CN114390290A (en) Video processing method, device, equipment and storage medium