WO2020007827A1 - Encoder, decoder and method for adaptive quantization in multi-channel picture coding - Google Patents


Info

Publication number
WO2020007827A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture
quantization parameter
block
component
chroma
Application number
PCT/EP2019/067678
Other languages
French (fr)
Inventor
Christian Helmrich
Christian Lehmann
Heiko Schwarz
Detlev Marpe
Thomas Wiegand
Original Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Application filed by Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Publication of WO2020007827A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 Quantisation
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component

Definitions

  • Embodiments of the present disclosure relate to an encoder for encoding a picture into a data stream, a decoder for decoding a picture from a data stream, corresponding methods for encoding and decoding, a computer readable digital storage medium, and a data stream.
  • Particular embodiments may describe a concept of a luma-dependent adaptation of a chroma quantization parameter.
  • The so-called quantization parameter (QP) defines the granularity of the quantization.
  • The quantization parameter may be adaptively adjusted based on the content of the picture.
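The granularity set by the QP can be illustrated with the step-size convention of AVC/HEVC-style codecs, in which the quantization step size roughly doubles for every increase of the QP by 6. The sketch below shows uniform scalar quantization under this convention; it is an illustration of the principle, not the exact quantizer of any particular standard or of the claims.

```python
import math

def quant_step(qp: int) -> float:
    """Approximate quantization step size: doubles for every +6 in QP
    (the convention of AVC/HEVC-style codecs, used here for illustration)."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff: float, qp: int, f: float = 0.5) -> int:
    """Uniform scalar quantization of one transform coefficient.
    f is the rounding offset; f < 0.5 would give a dead-zone quantizer."""
    step = quant_step(qp)
    return int(math.copysign(math.floor(abs(coeff) / step + f), coeff))

def dequantize(level: int, qp: int) -> float:
    """Reconstruct the coefficient value from its quantization level."""
    return level * quant_step(qp)
```

A larger QP thus means a coarser grid: at QP 16 the example coefficient 10.0 is reconstructed as 12.0, whereas at QP 4 it is reconstructed exactly.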
  • The picture content may be indicated in terms of an activity. For example, a picture or a region of said picture which contains a high activity may exhibit a high variance (dispersion) in the texture and/or in the color of the picture. In turn, a picture or a region of said picture which contains a low activity may exhibit a low variance (dispersion) in the texture and/or in the color of the picture.
  • An example of a picture region comprising a high activity may be the crown of a tree, which may comprise a plurality of small leaves that may further comprise different green or other color shades. Owing to the leaves of the tree, the dispersion will be relatively high. This dispersion may thus be used to set the QP relatively high for this picture region, since the non-smooth or non-flat leaf texture hides from the human eye a relatively high amount of coding errors caused by quantization.
  • In turn, an example of a picture region comprising a low activity may be the wall of a house, which may be monochrome or may at least comprise large monochrome areas (i.e. has a smooth or flat texture).
  • A picture that is to be compressed may be a still picture or a moving picture, the latter comprising a plurality of consecutive single frames.
  • The picture can be a single-component picture or a multi-component picture.
  • A single-component picture may comprise a single color component, e.g. a grayscale or luminance (or luma) component only, while a multi-component picture may comprise a luminance component and additionally one or more chrominance (or chroma) components.
  • The Human Visual System is considerably more sensitive to luma variations than to chroma variations. That means the human eye perceives a variation in the luma component (brightness) of a picture to a considerably higher extent than a variation in the available chroma components (color shade). Accordingly, a variation in the luma component is more perceptible to a viewer than a variation of a chroma component.
  • A concept called perceptual AdaptiveQP (QPA) may be used, wherein the quantization parameter may be adjusted based on the spatial activity of the pixel data in a luma coding block. That is, the more luma activity in a picture region, the higher the adaptive QP.
  • B denotes the sub-blocks (here CTUs) of the picture, indexed via k; w_k is the perceptual weight (also called visual sensitivity measure), a_k is the block’s visual activity, a_pic is the picture’s mean visual activity, and s, s’ are the original and reconstructed pel values, respectively, of the picture’s luma component.
  • The perceptual QPA makes use of w_k to adapt, in each CTU at index k, the perceptually optimized QP_k and Lagrange parameter λ_k based on the default pre-assigned fixed QP_slice and Lagrange parameter λ_slice:

    QP_k = QP_slice − 3 log2(w_k),    λ_k = λ_slice · 2^((QP_k − QP_slice)/3) = λ_slice / w_k
  • Adapting the QP of each CTU block in this way ensures that, in terms of visual quality and WPSNR, the coding distortion is optimally distributed within the given picture and within the pictures of the video signal.
  • Therein, the weight exponent satisfies 0 < b ≤ 1, the lower activity limit satisfies 0 < a_min ≤ 8, and h[ ] = filter(s[ ]) are the high-pass filtered input (i.e., original) samples of the picture’s luminance component, as described in detail in [2].
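The activity computation and the perceptual weighting can be sketched as follows. The Laplacian high-pass filter, the normalization of a_k, and the default values of b and a_min are stand-in assumptions chosen for illustration; the exact definitions are those of [2].

```python
def highpass(s, x, y):
    """Simple 2-D Laplacian as a stand-in for the high-pass filter
    h[ ] = filter(s[ ]) (the exact filter used in [2] may differ)."""
    return 4 * s[y][x] - s[y][x - 1] - s[y][x + 1] - s[y - 1][x] - s[y + 1][x]

def block_activity(s, x0, y0, size, a_min=2.0):
    """Visual activity a_k of one block: mean absolute high-pass response
    over the block, clipped from below by the lower activity limit a_min."""
    total, count = 0.0, 0
    for y in range(max(y0, 1), min(y0 + size, len(s) - 1)):
        for x in range(max(x0, 1), min(x0 + size, len(s[0]) - 1)):
            total += abs(highpass(s, x, y))
            count += 1
    return max(a_min, total / max(count, 1))

def perceptual_weight(a_k, a_pic, b=0.5):
    """Perceptual weight w_k = (a_pic / a_k) ** b with weight exponent
    0 < b <= 1: blocks more active than the picture average get a weight
    below 1, since their texture hides quantization errors."""
    return (a_pic / a_k) ** b
```

A flat (low-activity) block is clipped to a_min, while a textured block yields a large a_k and hence a small weight w_k, which in turn raises the QP assigned to it.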
  • The inventive encoder is configured to compute a quantization parameter QP_K for a luma component in units of blocks B_K of the picture, using an activity measure of the luma component.
  • The encoder is further configured to compute a quantization parameter QP_C for a chroma component based on a ratio a_Y / a_C between an activity value a_Y of the luma component and an activity value a_C of the chroma component. That is, in a first step the quantization parameter QP_K for the luma component of the picture may be calculated for one or more blocks of the picture, and preferably for each block of the picture, i.e. on a block-wise basis.
  • In this way, a QP can be obtained that may be adaptable to the luma component of the respective block.
  • The adaptation of the luma-QP may be based on a measure of the visual activity of the luma component, i.e. of the luma coding block.
  • The visual activity is indicated by an activity value a_Y of the luma component.
  • Said activity value a_Y of the luma component of the respective block may be set in relation to an activity value a_C of a chroma component of the same block or of the entire picture.
  • An adaptive QP for the chroma component is computed based on the above mentioned relation between the activity value a_Y of the luma component and the activity value a_C of the chroma component, i.e. on the ratio a_Y / a_C.
  • The adaptive QP for the chroma component may be computed for one or more blocks separately, i.e. block-wise, or for a certain arrangement of blocks, e.g. slice-wise, or for the entire picture, i.e. frame-wise or picture-wise. In either case, the computed adaptive QP for the chroma component may then be transmitted in the bit stream to the decoder.
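One conceivable realization of this ratio-based computation is sketched below. The mapping of the activity ratio onto a QP offset via 3·log2 (the scale on which +6 QP doubles the quantization step) and the sign convention are illustrative assumptions, not the exact formula of the claims.

```python
import math

def chroma_qp(qp_luma: int, a_y: float, a_c: float,
              qp_min: int = 0, qp_max: int = 51) -> int:
    """Illustrative luma-dependent chroma QP: shift the luma QP by an offset
    derived from the activity ratio a_y / a_c. The factor 3*log2 maps a
    step-size ratio onto the QP scale; the sign convention is an assumption."""
    offset = round(3.0 * math.log2(a_y / a_c))
    return min(qp_max, max(qp_min, qp_luma + offset))
```

With equal luma and chroma activity the chroma QP equals the luma QP; a luma activity four times the chroma activity shifts it by 6 QP steps, and the result is clipped to the valid QP range.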
  • The inventive decoder may derive the above mentioned adaptive QPs from the bit stream.
  • The decoder may derive the quantization parameter QP_K for the luma component in units of blocks of the picture, i.e. the block-wise computed QP_K.
  • The decoder may further derive the quantization parameter QP_C for the chroma component in units of further blocks of the picture.
  • Said further blocks may represent a single block or sub-block (e.g. a block-wise QP adaptation) or a plurality of blocks being arranged in a certain order (e.g. a picture-wise QP adaptation, e.g. slice-wise) or the entire picture (e.g. a picture-wise QP adaptation, e.g. frame-wise).
  • The adaptive QP for the chroma component may have been computed for one or more blocks separately, i.e. block-wise, or for a certain arrangement of blocks, e.g. slice-wise, or for the entire picture, i.e. frame-wise or picture-wise.
  • A method for encoding a picture comprises steps of computing a quantization parameter QP_K for a luma component in units of blocks B_K of the picture, using an activity measure of the luma component, and computing a quantization parameter QP_C for a chroma component based on a ratio a_Y / a_C between an activity value a_Y of the luma component and an activity value a_C of the chroma component.
  • A method for decoding a picture comprises steps of deriving from a bit stream a quantization parameter QP_K for a luma component of the picture and a quantization parameter QP_C for a chroma component of the picture, wherein the quantization parameter QP_C for the chroma component is calculated based on a ratio between an activity a_Y of the luma component and an activity a_C of the chroma component.
  • Fig. 1 shows a schematic block diagram of an encoder for block-based coding, which encoder may be used to apply the inventive principle
  • Fig. 2 shows a schematic block diagram of a decoder for block-based decoding, which decoder may be used to apply the inventive principle
  • Fig. 3 shows a schematic drawing of a picture being partitioned into blocks and sub-blocks
  • Fig. 4 shows a schematic block diagram of an encoder according to an embodiment
  • Fig. 5 shows a schematic block diagram of a decoder according to an embodiment
  • Fig. 6 shows a schematic block diagram of a method for encoding according to an embodiment
  • Fig. 7 shows a schematic block diagram of a method for decoding according to an embodiment.
  • Method steps which are depicted by means of a block diagram and which are described with reference to said block diagram may also be executed in an order different from the depicted and/or described order. Furthermore, method steps concerning a particular feature of a device may be replaceable with said feature of said device, and the other way around.
  • A computation may be executed block-wise or picture-wise.
  • The result of a block-wise computation may provide a block-related value being specific for the particular block on which the computation was executed.
  • A picture-wise computation may be executed slice-wise, i.e. on a slice-by-slice basis, or frame-wise, i.e. on a frame-by-frame basis.
  • The result of a picture-wise computation may provide a picture-related value being specific for the particular slice or frame on which the computation was executed.
  • Figure 1 shows an apparatus for predictively coding a picture 12 into a data stream 14, exemplarily using transform-based residual coding.
  • The apparatus, or encoder, is indicated using reference sign 10.
  • Figure 2 shows a corresponding decoder 20, i.e. an apparatus 20 configured to predictively decode the picture 12’ from the data stream 14 also using transform-based residual decoding, wherein the apostrophe has been used to indicate that the picture 12’ as reconstructed by the decoder 20 deviates from picture 12 originally encoded by apparatus 10 in terms of coding loss introduced by a quantization of the prediction residual signal.
  • Figure 1 and Figure 2 exemplarily use transform based prediction residual coding, although embodiments of the present application are not restricted to this kind of prediction residual coding. This is true for other details described with respect to Figures 1 and 2, too, as will be outlined hereinafter.
  • The encoder 10 is configured to subject the prediction residual signal to spatial-to-spectral transformation and to encode the prediction residual signal, thus obtained, into the data stream 14.
  • The decoder 20 is configured to decode the prediction residual signal from the data stream 14 and subject the prediction residual signal thus obtained to spectral-to-spatial transformation.
  • The encoder 10 may comprise a prediction residual signal former 22 which generates a prediction residual 24 so as to measure a deviation of a prediction signal 26 from the original signal, i.e. from the picture 12.
  • The prediction residual signal former 22 may, for instance, be a subtractor which subtracts the prediction signal from the original signal, i.e. from the picture 12.
  • The encoder 10 then further comprises a transformer 28 which subjects the prediction residual signal 24 to a spatial-to-spectral transformation to obtain a spectral-domain prediction residual signal 24’ which is then subject to quantization by a quantizer 32, also comprised by the encoder 10.
  • The thus quantized prediction residual signal 24” is coded into bitstream 14.
  • Encoder 10 may optionally comprise an entropy coder 34 which entropy codes the prediction residual signal as transformed and quantized into data stream 14.
  • The prediction signal 26 is generated by a prediction stage 36 of encoder 10 on the basis of the prediction residual signal 24” encoded into, and decodable from, data stream 14.
  • The prediction stage 36 may internally, as is shown in Figure 1, comprise a dequantizer 38 which dequantizes prediction residual signal 24” so as to gain spectral-domain prediction residual signal 24’”, which corresponds to signal 24’ except for quantization loss, followed by an inverse transformer 40 which subjects the latter prediction residual signal 24’” to an inverse transformation, i.e. a spectral-to-spatial transformation, to obtain prediction residual signal 24””, which corresponds to the original prediction residual signal 24 except for quantization loss.
  • A combiner 42 of the prediction stage 36 then recombines, such as by addition, the prediction signal 26 and the prediction residual signal 24”” so as to obtain a reconstructed signal 46, i.e. a reconstruction of the original signal 12.
  • Reconstructed signal 46 may correspond to signal 12’.
  • A prediction module 44 of prediction stage 36 then generates the prediction signal 26 on the basis of signal 46 by using, for instance, spatial prediction, i.e. intra-picture prediction, and/or temporal prediction, i.e. inter-picture prediction.
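The signal flow just described, together with its decoder-side mirror of Figure 2, can be sketched in strongly simplified form. The spatial-to-spectral transform is omitted (treated as identity) so that only the roles of residual former 22, quantizer 32, dequantizer 38/52 and combiner 42/56 are visible; the reference numerals in the comments refer to Figures 1 and 2.

```python
def encode_block(orig, pred, qp):
    """Residual former (22) and quantizer (32): returns the quantized
    prediction residual signal 24'' that would be entropy-coded into
    the data stream (transform 28 omitted for brevity)."""
    step = 2.0 ** ((qp - 4) / 6.0)                   # QP-controlled step size
    residual = [o - p for o, p in zip(orig, pred)]   # signal 24
    return [round(r / step) for r in residual]       # signal 24''

def reconstruct_block(levels, pred, qp):
    """Dequantizer (38/52) and combiner (42/56), as run both in the
    encoder's prediction stage 36 and in the decoder 20."""
    step = 2.0 ** ((qp - 4) / 6.0)
    residual = [lv * step for lv in levels]          # signal 24''''
    return [p + r for p, r in zip(pred, residual)]   # reconstruction
```

Because encoder and decoder run the identical reconstruction, the prediction signal 26 stays synchronized on both sides; only the quantization loss, which grows with the QP, separates picture 12’ from picture 12.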
  • Decoder 20 may be internally composed of components corresponding to, and interconnected in a manner corresponding to, prediction stage 36.
  • Entropy decoder 50 of decoder 20 may entropy decode the quantized spectral-domain prediction residual signal 24” from the data stream, whereupon dequantizer 52, inverse transformer 54, combiner 56 and prediction module 58, interconnected and cooperating in the manner described above with respect to the modules of prediction stage 36, recover the reconstructed signal on the basis of prediction residual signal 24” so that, as shown in Figure 2, the output of combiner 56 results in the reconstructed signal, namely picture 12’.
  • The encoder 10 may set some coding parameters including, for instance, prediction modes, motion parameters and the like, according to some optimization scheme such as, for instance, in a manner optimizing some rate and distortion related criterion, i.e. coding cost.
  • Encoder 10 and decoder 20 and the corresponding modules 44, 58, respectively, may support different prediction modes such as intra-coding modes and inter-coding modes.
  • The granularity at which encoder and decoder switch between these prediction mode types may correspond to a subdivision of picture 12 and 12’, respectively, into coding segments or coding blocks. In units of these coding segments, for instance, the picture may be subdivided into blocks being intra-coded and blocks being inter-coded.
  • Intra-coded blocks are predicted on the basis of a spatial, already coded/decoded neighborhood of the respective block as is outlined in more detail below.
  • Several intra-coding modes may exist and be selected for a respective intra-coded segment including directional or angular intra-coding modes according to which the respective segment is filled by extrapolating the sample values of the neighborhood along a certain direction which is specific for the respective directional intra-coding mode, into the respective intra-coded segment.
  • The intra-coding modes may, for instance, also comprise one or more further modes such as a DC coding mode, according to which the prediction for the respective intra-coded block assigns a DC value to all samples within the respective intra-coded segment, and/or a planar intra-coding mode, according to which the prediction of the respective block is approximated or determined to be a spatial distribution of sample values described by a two-dimensional linear function over the sample positions of the respective intra-coded block, with tilt and offset of the plane defined by the two-dimensional linear function being derived on the basis of the neighboring samples.
  • Inter-coded blocks may be predicted, for instance, temporally.
  • For inter-coded blocks, motion vectors may be signaled within the data stream, the motion vectors indicating the spatial displacement of the portion of a previously coded picture of the video to which picture 12 belongs, at which the previously coded/decoded picture is sampled in order to obtain the prediction signal for the respective inter-coded block.
  • Data stream 14 may have encoded thereinto coding mode parameters for assigning the coding modes to the various blocks, prediction parameters for some of the blocks, such as motion parameters for inter-coded segments, and optional further parameters such as parameters for controlling and signaling the subdivision of picture 12 and 12’, respectively, into the segments.
  • The decoder 20 uses these parameters to subdivide the picture in the same manner as the encoder did, to assign the same prediction modes to the segments, and to perform the same prediction to result in the same prediction signal.
  • Figure 3 illustrates the relationship between the reconstructed signal, i.e. the reconstructed picture 12’, on the one hand, and the combination of the prediction residual signal 24”” as signaled in the data stream 14, and the prediction signal 26, on the other hand.
  • The combination may be an addition.
  • The prediction signal 26 is illustrated in Figure 3 as a subdivision of the picture area into intra-coded blocks which are illustratively indicated using hatching, and inter-coded blocks which are illustratively indicated not-hatched.
  • The subdivision may be any subdivision, such as a regular subdivision of the picture area into rows and columns of square blocks or non-square blocks, or a multi-tree subdivision of picture 12 from a tree root block into a plurality of leaf blocks of varying size, such as a quadtree subdivision or the like, wherein a mixture thereof is illustrated in Figure 3 in which the picture area is first subdivided into rows and columns of tree root blocks which are then further subdivided in accordance with a recursive multi-tree subdivisioning into one or more leaf blocks.
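The recursive multi-tree subdivisioning can be sketched for the quadtree case. The split decision is passed in as a callback and stands in for the encoder's actual (e.g. rate-distortion optimized) choice, which is not modelled here.

```python
def quadtree_split(x0, y0, size, should_split, min_size=8):
    """Recursively subdivide a tree root block at (x0, y0) into leaf blocks.
    `should_split(x, y, size)` decides whether a block is split into four
    quadrants; blocks at min_size are never split further."""
    if size <= min_size or not should_split(x0, y0, size):
        return [(x0, y0, size)]           # this block becomes a leaf
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_split(x0 + dx, y0 + dy, half,
                                     should_split, min_size)
    return leaves
```

Starting from a 64×64 tree root block and always splitting down to 16×16 yields sixteen leaves that tile the root block exactly; mixed decisions yield leaf blocks of varying size, as illustrated in Figure 3.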
  • Data stream 14 may have an intra-coding mode coded thereinto for intra-coded blocks 80, which assigns one of several supported intra-coding modes to the respective intra-coded block 80.
  • For inter-coded blocks 82, the data stream 14 may have one or more motion parameters coded thereinto.
  • Inter-coded blocks 82 are not restricted to being temporally coded.
  • Inter-coded blocks 82 may be any block predicted from previously coded portions beyond the current picture 12 itself, such as previously coded pictures of a video to which picture 12 belongs, or a picture of another view or a hierarchically lower layer in the case of encoder and decoder being scalable encoders and decoders, respectively.
  • The prediction residual signal 24” in Figure 3 is also illustrated as a subdivision of the picture area into blocks 84. These blocks might be called transform blocks in order to distinguish same from the coding blocks 80 and 82.
  • Figure 3 illustrates that encoder 10 and decoder 20 may use two different subdivisions of picture 12 and picture 12’, respectively, into blocks, namely one subdivisioning into coding blocks 80 and 82, respectively, and another subdivision into transform blocks 84. Both subdivisions might be the same, i.e. each coding block 80 and 82 may concurrently form a transform block 84, but Figure 3 illustrates the case where, for instance, a subdivision into transform blocks 84 forms an extension of the subdivision into coding blocks 80, 82 so that any border between two blocks of blocks 80 and 82 overlays a border between two blocks 84, or, alternatively speaking, each block 80, 82 either coincides with one of the transform blocks 84 or coincides with a cluster of transform blocks 84.
  • The subdivisions may also be determined or selected independent from each other so that transform blocks 84 could alternatively cross block borders between blocks 80, 82.
  • As far as the subdivision into transform blocks 84 is concerned, similar statements are thus true as those brought forward with respect to the subdivision into blocks 80, 82, i.e. the blocks 84 may be the result of a regular subdivision of the picture area into blocks (with or without arrangement into rows and columns), the result of a recursive multi-tree subdivisioning of the picture area, or a combination thereof, or any other sort of block partitioning.
  • Blocks 80, 82 and 84 are not restricted to being of quadratic or rectangular shape; they may have any other shape.
  • Figure 3 further illustrates that the combination of the prediction signal 26 and the prediction residual signal 24”” directly results in the reconstructed signal 12’. However, it should be noted that more than one prediction signal 26 may be combined with the prediction residual signal 24”” to result into picture 12’ in accordance with alternative embodiments.
  • The transform blocks 84 shall have the following significance.
  • Transformer 28 and inverse transformer 54 perform their transformations in units of these transform blocks 84. For instance, many codecs use some sort of DST or DCT for all transform blocks 84. Some codecs allow for skipping the transformation so that, for some of the transform blocks 84, the prediction residual signal is coded in the spatial domain directly.
  • Encoder 10 and decoder 20 are configured in such a manner that they support several transforms.
  • For example, the transforms supported by encoder 10 and decoder 20 could comprise:
      • DCT-II (or DCT-III), where DCT stands for Discrete Cosine Transform
  • While transformer 28 would support all of the forward transform versions of these transforms, the decoder 20 or inverse transformer 54 would support the corresponding backward or inverse versions thereof:
      • Inverse DCT-II (or inverse DCT-III)
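The pairing in the two lists reflects that the (orthonormal) DCT-III is exactly the inverse of the (orthonormal) DCT-II, which is why codecs specify one as the forward and the other as the backward transform. A 1-D sketch:

```python
import math

def dct_ii(x):
    """Orthonormal DCT-II (forward transform) of a 1-D signal."""
    n = len(x)
    out = []
    for k in range(n):
        c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(c * sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n)
                           for i in range(n)))
    return out

def dct_iii(x):
    """Orthonormal DCT-III, the inverse of the orthonormal DCT-II above."""
    n = len(x)
    out = []
    for i in range(n):
        s = x[0] / math.sqrt(n)
        s += sum(math.sqrt(2.0 / n) * x[k] *
                 math.cos(math.pi * (i + 0.5) * k / n)
                 for k in range(1, n))
        out.append(s)
    return out
```

Applying dct_iii to the output of dct_ii recovers the input up to floating-point error; a constant signal maps to a single DC coefficient, which is what makes the transform useful for compacting smooth residual blocks.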
  • Figures 1 to 3 have been presented as an example where the inventive concept described further below may be implemented in order to form specific examples for encoders and decoders according to the present application.
  • The encoder and decoder of Figures 1 and 2, respectively, may represent possible implementations of the encoders and decoders described herein below.
  • Figures 1 and 2 are, however, only examples.
  • An encoder may, however, perform block-based encoding of a picture 12 using the concept outlined in more detail below and being different from the encoder of Figure 1 such as, for instance, in that same is no video encoder, but a still picture encoder, in that same does not support inter-prediction, or in that the sub-division into blocks 80 is performed in a manner different than exemplified in Figure 3.
  • decoders may perform block-based decoding of picture 12’ from data stream 14 using the coding concept further outlined below, but may differ, for instance, from the decoder 20 of Figure 2 in that same is no video decoder, but a still picture decoder, in that same does not support intra-prediction, or in that same sub-divides picture 12’ into blocks in a manner different than described with respect to Figure 3 and/or in that same does not derive the prediction residual from the data stream 14 in transform domain, but in spatial domain, for instance.
  • Figure 4 shows an inventive encoder 10 for encoding a picture 12 exploiting the innovative QP-adaptation approach.
  • The picture 12 may be a still picture or a moving picture sequence of consecutive images or frames.
  • The encoder 10 may, for instance, comprise the above mentioned quantizer 32 for quantizing the picture 12.
  • The quantizer 32 and, thus, the encoder 10 may be configured to compute a quantization parameter QP_K for a luma component in units of blocks 80, 82, 84 of the picture 12. That is, for one or more blocks 80, 82, 84, and preferably for each block of the picture 12, its corresponding block-related or block-specific quantization parameter QP_K for the luma component may be determined. More generally speaking, the quantization parameter QP_K for the luma component may be determined in units of blocks B_K.
  • The quantization parameter QP_K for the luma component may also be referred to as the luma-QP. Since the luma-QP is block-related or block-specific, the luma-QP is denoted by QP_K.
  • The luma-QP QP_K may be determined by means of an activity measure of the luma component of the respective block. The activity measure gives an activity value a_Y of the luma component.
  • The encoder 10 may be configured to compute a quantization parameter for a chroma component depending on the luma component.
  • The quantization parameter for the chroma component may also be referred to as the chroma-QP and may be denoted by QP_C.
  • An activity measure for the chroma component may be executed, which gives an activity value a_C of the chroma component.
  • The computation of the chroma-QP QP_C is based on a ratio a_Y / a_C between the activity value a_Y of the luma component and the activity value a_C of the chroma component.
  • The innovative encoder 10 may thus be configured to determine an adaptive luma-dependent chroma-QP.
  • The block-based luma-QP QP_K and the chroma-QP QP_C may be signaled in the data stream 14.
  • The chroma-QP QP_C may be signaled by means of a QP index value indicating the luma-QP plus/minus a chroma-QP offset value O_C, or by means of the chroma-QP offset value O_C only. Details about the signaling will follow somewhat later in the text.
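The offset-based variant of this signaling can be sketched as follows; the function names are illustrative and do not correspond to syntax elements of any particular standard.

```python
def encode_chroma_qp_offset(qp_luma: int, qp_chroma: int) -> int:
    """Encoder side: signal only the chroma-QP offset O_C relative to the
    luma-QP, which the decoder derives anyway."""
    return qp_chroma - qp_luma

def decode_chroma_qp(qp_luma: int, offset: int,
                     qp_min: int = 0, qp_max: int = 51) -> int:
    """Decoder side: recover QP_C as the luma-QP plus the signalled offset,
    clipped to a valid QP range (the range is an illustrative choice)."""
    return min(qp_max, max(qp_min, qp_luma + offset))
```

Signaling only the offset is cheap because it is typically small and slowly varying, while the block-wise luma-QP is already available at the decoder.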
  • The computation of the luma-QP QP_K may be executed on a block-wise basis.
  • The block-wise computed luma-QP QP_K may be determined by an approach according to [2], the content of which is explicitly incorporated herein by reference.
  • The luma-dependent computation of the chroma-QP QP_C may be executed on a block-wise basis or on a picture-wise basis.
  • The following description firstly discusses the picture-wise computation of the luma-dependent chroma-QP QP_C and subsequently the block-wise computation of the luma-dependent chroma-QP QP_C.
  • The chroma-QP QP_C may be computed on a picture-wise basis. That is, an adaptive chroma-QP QP_C may be computed that is applicable to the entire picture 12, which may be a single image or picture, respectively, or a plurality of consecutive images in a video. In the picture-wise approach, the chroma-QP QP_C may be referred to as a picture-related chroma-QP or as a picture-related quantization parameter QP_C for the chroma component, respectively.
  • The picture-related quantization parameter QP_C for the chroma component may be computed on a frame-by-frame basis, wherein the picture-related quantization parameter QP_C may be computed for at least one chroma component contained in at least one frame of a moving picture sequence, or on a slice-by-slice basis, wherein the picture-related quantization parameter QP_C may be computed for at least one chroma component contained in at least one slice of the picture 12.
  • A mean picture-related quantization parameter QP_C may be computed for the whole picture. For this, a mean value over one or more available chroma-QPs, and preferably over each available chroma-QP, may be calculated. Furthermore, a mean chroma activity value ā_C over one or more available chroma components, and preferably over each available chroma component, may be calculated.
  • the chroma-QP QP C is computed on a picture- wise basis (e.g. frame-by- frame or slice-by-slice)
  • a mean value over one or more block-wise computed luma-QPs QP K may be calculated.
  • the luma-QP may be a picture-wise mean luma-QP denoted with QP K .
  • the activity value a Y of the luma component may be a picture- mse mean activity value a y over one or more block-wise activity values a K , and preferably over each block-wise activity value a K , of the luma component.
  • the quantization parameter QP C for the chroma component may be computed picture-wise such that the chroma-QP QP C is a picture-related quantization parameter in this case.
  • the picture-related chroma-QP QP C may be computed based on the ratio between the picture-wise activity value a c of the chroma component and the picture-wise activity value a Y of the luma component, i.e. based on the ratio a c / a Y , wherein the picture-wise activity value a Y of the luma component may be the above mentioned picture-wise mean luma activity value.
  • the picture-wise activity value a c of the chroma component may be the above mentioned picture-wise mean chroma activity value a c .
  • the picture-wise approach may include applying the herein described concept on a slice-by-slice basis, wherein the herein described principle may be applied to one or more slices, and preferably to each slice, of the picture.
  • the picture-wise approach may include applying the herein described concept on a frame-by-frame basis, wherein the herein described principle may be applied to one or more frames, and preferably to each frame, of a plurality of consecutive frames in a video.
  • the concept of this embodiment is to apply the above discussed block-wise perceptual QP adaptation to the luma component of an image or video first, wherein the concept of [2] may be applied. Then, as an inventive approach, a separate and independent adaptation of the picture-wise mean QP value applied to each chroma component, based on the previously determined visual activity statistics for the luma channel, is carried out.
  • This mapping effectively reduces the chroma QP (and, thereby, the chroma quantization distortion) at high luma QPs near QP max , which may explain why, at low coding bit-rates, the use of positive chroma QP offsets has recently been suggested.
  • a reduction of the chroma QP values relative to the luma QP values seems to be unnecessary in terms of visual reconstruction quality, particularly when a cross-component predictive coding technique such as the linear-model chroma (CCLM) predictor [11] is utilized.
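The fixed luma-to-chroma QP mapping discussed here can be illustrated with a toy function; the breakpoint at QP 30 and the cap of 6 are illustrative assumptions, not the normative HEVC table:

```python
def luma_to_chroma_qp(qp_luma):
    """Toy HEVC-style luma-to-chroma QP mapping.

    Identity at low QPs; above an assumed breakpoint the chroma QP is
    increasingly reduced relative to the luma QP, with the reduction
    capped at 6 QP steps. Values are illustrative only.
    """
    reduction = min(6, max(0, (qp_luma - 29) // 2))
    return qp_luma - reduction
```

For example, a luma QP of 40 maps to a lower chroma QP, reproducing the behavior described above: the chroma quantization becomes relatively finer near QP max .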
  • CCLM linear-model chroma
  • the inventive principle proposes to, when perceptual QPA is enabled, employ input-dependent adaptive luma-to-chroma QP offsets on a frame-by-frame or a slice-by-slice basis. Specifically, it is suggested to extend the luma-based QPA approach of [2] by the following backward-compatible HEVC syntax-compliant chroma QPA method:
  • pps_slice_chroma_qp_offsets_present_flag = 1 may be set in the coded PPS.
  • a picture-related chroma-QP value QP C may be determined from the ratio between said chroma channel’s activity a c and the luma activity a Y :
  • chroma channel’s activity value a c may be the chroma channel’s mean activity value, and the luma activity value a Y may be the mean luma activity value; subscript Y denotes the luma component, 0 < b ≤ 1 as previously, k ∈ {Y, Cb, Cr}, and P k is the component’s frame buffer, excluding the border pel rows and columns to prevent the high-pass filter from extending beyond the picture boundaries.
  • the picture-wise, e.g. frame-wise or slice-wise, mean luma QP QP Y may be obtained, e.g. by using the a pic according to [2].
  • This chroma QP Adaptation effectively lowers the coding bit-rate of the chroma channel when its mean visual activity is relatively high compared to the luma channel’s visual activity.
  • it compensates for the picture-averaged fixed luma-to-chroma QP mapping by Qp c ( ) to prevent undesired re-shifting of coding bits from luma to chroma.
  • the upper limit O max is introduced to prevent very coarse chroma-channel quantization on some sequences at lower bit-rates. Within the CTC sequence set, it only affects the encoding of the ParkRunning3 sequence. It is worth mentioning that the slice-wise offsets O c , which can be transmitted as the slice_cb_qp_offset and slice_cr_qp_offset elements in HEVC and VTM/BMS 1 , can still be combined with non-zero PPS-wise QP chroma offsets (pps_cb_qp_offset and pps_cr_qp_offset) in the traditional fashion known from HEVC [10].
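The slice-wise offset derivation described above can be sketched as follows; the scaling constant 3 and the default clipping limit o_max = 6 are assumptions for illustration (the text fixes neither):

```python
import math

def chroma_qp_offset(mean_act_chroma, mean_act_luma, beta=0.5, o_max=6):
    """Picture-wise (slice-wise) chroma QP offset O_c sketch.

    The offset grows with the logarithm of the chroma/luma mean
    activity ratio, is zero when the chroma activity does not exceed
    the luma activity, and is clipped to the upper limit o_max.
    """
    if mean_act_chroma <= mean_act_luma:
        return 0
    raw = 3.0 * beta * math.log2(mean_act_chroma / mean_act_luma)
    return min(o_max, round(raw))
```

A chroma channel four times as active as the luma channel would then receive a coarser chroma QP (offset 3 with these constants), shifting coding bits back towards luma.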
  • Said picture-wise chroma QP offset O c may be applied in the encoding and decoding of the image or video.
  • For example, use QP Y + O c in the encoder-side quantization and decoder-side dequantization (also called inverse quantization) of any prediction residual in the given chroma component c.
  • the activity value of the chroma component may be a picture-related mean chroma activity value being valid for the entire picture 12, i.e. the mean chroma activity value a c computed over the entire picture, and the chroma-QP may be a picture-related mean chroma-QP being valid for the entire picture, i.e. the mean chroma-QP QP C computed over the entire picture.
  • the chroma-QP QP C may be computed on a block-wise basis. That is, an adaptive chroma-QP QP C may be computed that is applicable to one or more available blocks or sub-blocks 80, 82, 84, and preferably to each available block or sub-block 80, 82, 84, of the picture 12.
  • the chroma-QP may be referred to as a block-related chroma-QP or as a block-related quantization parameter for the chroma component, respectively.
  • the block-wise chroma QP may be denoted with QP C (B K ).
  • the block-related quantization parameter QP C (B K ) for the chroma component may be computed on a block-wise basis, wherein the block-related quantization parameter QP C (B K ) may be computed for at least one chroma component contained in at least one block B K of the picture 12.
  • the at least one block may correspond to a Coding Tree Unit (CTU) or a Coding Unit (CU).
  • the block-related quantization parameter QP C (B K ) may be computed on CTU-level and/or on CU-level.
  • the inventive concept may also be applicable to other blocks or sub-blocks different from CUs and CTUs.
  • the activity value a c of the chroma component may also be a block-related activity value, denoted with a C (B K ), and being computed on a block-wise basis.
  • the activity value a Y of the luma component may be a block-related activity value, denoted with a Y (B K ), and being computed on a block-wise basis. If the activity value of the luma component is obtained according to [2], it may directly be a block-related luma activity value a Y (B K ). This block-related luma activity value a Y (B K ) may be directly used in the inventive approach as described herein. There may be no need for determining a mean luma activity value as in the above discussed picture-wise approach.
  • this embodiment is to apply, in an encoder 10, the above block-wise perceptual QP adaptation to the luma component of an image or video and, in a manner similar to that of the first inventive approach as discussed in the section (picture-wise QP adaptation) above, to apply a likewise block-wise QP adaptation to at least one chroma component of said image or video. Then, the adapted block-wise chroma QP values QP C (B K ) may be signaled to a decoder 20 alongside the existing signaled luma QP values QP Y (B K ).
  • multiple block-wise (e.g. CTU-wise) chroma QP adaptations are conducted for each chroma component of each picture 12, wherein these multiple block-wise adapted chroma QP values QP C (B K ) may be signaled to a decoder 20 in a novel, inventive fashion.
  • the block-wise chroma QP adaptation may be carried out in a manner identical to that described in the section above (picture-wise QP adaptation), except that each adaptation may be applied for a block or sub-block B K of the picture 12 at index K, instead of the entire picture 12.
  • the block-related chroma-QP QP C (B K ) may be computed separately for each block or sub-block B K , using the block-wise activity values instead of the picture-wise mean activity values.
  • a block-wise adapted luma-QP QP Y (B K ) may be obtained, e.g. by using the a pic according to [2].
  • This chroma QP Adaptation effectively lowers the coding rate of the chroma sub-block K when its mean visual activity is quite high compared to the luma channel’s visual activity in the same sub-block K.
  • this chroma QP adaptation is performed in a block-wise manner, not only in a picture-wise fashion.
  • the block-related chroma QP offset O c (B K ) may be applied in the encoding and decoding of the image or video. For example, use QP Y (B K ) + O c (B K ) in the encoder-side quantization and decoder-side dequantization (also called inverse quantization) of any prediction residual in the given chroma component c.
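A block-wise variant of the same offset rule can be sketched as follows, now applied per block index K rather than per picture (the scaling constant 3 and the clipping limit remain illustrative assumptions):

```python
import math

def block_chroma_qps(luma_qps, luma_acts, chroma_acts, beta=0.5, o_max=6):
    """Block-wise chroma QP adaptation sketch.

    All arguments are per-block lists indexed by K. Each block's
    chroma QP is its luma QP QP_Y(B_K) plus a local offset O_c(B_K)
    derived from the block's chroma/luma activity ratio.
    """
    result = []
    for qp_y, a_y, a_c in zip(luma_qps, luma_acts, chroma_acts):
        if a_c > a_y:
            o_c = min(o_max, round(3.0 * beta * math.log2(a_c / a_y)))
        else:
            o_c = 0
        result.append(qp_y + o_c)
    return result
```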
  • a block or sub-block, e.g. a CTU or CU
  • both the final QP indices and the chroma QP offsets could be transmitted as entropy coded QP value differences (i.e., delta-QPs) according to the state of the art known from, e.g., HEVC [1], [2].
  • the additional derivation and signaling of a block-wise adapted QP offset value O c (B K ) for a chroma component c represents a separate, independent quantization parameter adaptation and signaling for at least two components (one luma and at least one chroma) of a multi-component image/video according to the invention.
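The entropy-coded delta-QP transmission mentioned above can be sketched as a simple predictive scheme, each block's QP predicted from the previous block's QP (the slice-level predictor value 26 is a hypothetical choice):

```python
def to_delta_qps(qps, predictor=26):
    """Turn a sequence of block QPs into delta-QPs for signaling."""
    deltas, prev = [], predictor
    for qp in qps:
        deltas.append(qp - prev)
        prev = qp
    return deltas

def from_delta_qps(deltas, predictor=26):
    """Decoder-side inverse: rebuild the block QPs from the deltas."""
    qps, prev = [], predictor
    for d in deltas:
        prev += d
        qps.append(prev)
    return qps
```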
  • the activity value of the luma component may be a block-related luma activity value being valid for at least one block or sub-block B K of the picture 12, i.e. a Y = a Y (B K ), and the luma-QP may be a block-related luma-QP being valid for at least one block or sub-block B K of the picture 12, i.e. QP Y = QP Y (B K ).
  • a multi-component picture may comprise at least one luma component, and additionally one or more color components, i.e. one or more chroma components.
  • the inventive principle may be applied to pictures being coded in the YCbCr color space.
  • each block may comprise a luma coding block and one or more chroma coding blocks.
  • the picture-wise or block-wise computed luma-dependent adaptive chroma quantization parameter QP C may be signaled to the decoder 20 in the data stream 14.
  • a chroma-QP offset-value O c or a chroma-QP index may be signaled in the data stream 14, as discussed above.
  • FIG. 5 shows a decoder 20 according to an embodiment.
  • the decoder 20 may comprise a dequantizer 52, as explained with reference to Figure 2.
  • the dequantizer 52 and, thus, the decoder 20 may be configured to derive from the data stream 14 the above discussed quantization parameter QP K for the luma component in units of blocks B K of the picture 12, and the above discussed quantization parameter QP C for the chroma component in units of further blocks of the picture 12.
  • Said further blocks may represent a single block or sub-block (e.g. an above described block-wise QP-adaptation) or a plurality of blocks arranged in a certain order (e.g. an above described picture-wise QP-adaptation, e.g. slice-wise) or the entire picture 12 (e.g. an above described picture-wise QP-adaptation, e.g. frame-wise).
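Once the decoder has derived a QP for a block, dequantization follows the usual exponential QP-to-step-size relation; a sketch (the normalization constant 4 and the floating-point scaling are illustrative, real codecs use integer arithmetic and per-frequency scaling lists):

```python
def qp_to_step(qp):
    """QP-to-quantizer-step sketch: the step size doubles every 6 QP
    steps, as in HEVC-like codecs."""
    return 2.0 ** ((qp - 4) / 6.0)

def dequantize(levels, qp):
    """Decoder-side inverse quantization of a block's coefficient
    levels with the step size implied by the derived QP."""
    step = qp_to_step(qp)
    return [lvl * step for lvl in levels]
```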
  • the quantization parameter QP C for the chroma component may have been computed by the encoder 10 based on a ratio between the activity a Y of the luma component and the activity a c of the chroma component.
  • the activity values a Y , a c are depicted in hatched lines in Figure 5.
  • the activity value of the luma component may be a picture-wise mean activity value a Y over one or more block-wise activity values a K of the luma component.
  • the quantization parameter for the chroma component may be a picture-related mean chroma-QP QP C .
  • the decoder 20 may be configured to apply, during decoding, the picture-related quantization parameter QP C for the chroma component on a picture-wise basis, i.e.
  • the adaptive chroma-QP QP C may be applied onto the entire picture 12, which may be a single still image, wherein the adaptive chroma-QP QP C may be applied slice-by-slice, or the picture 12 may comprise a plurality of consecutive frames, wherein the adaptive chroma-QP QP C may be applied frame-by-frame.
  • the decoder 20 may be configured to apply, during decoding, the picture-related quantization parameter QP C for the chroma component on a frame-by-frame basis, wherein the picture-related quantization parameter QP C is to be applied onto at least one chroma component contained in at least one frame of a moving picture sequence, or on a slice-by-slice basis, wherein the picture-related quantization parameter QP C is to be applied onto at least one chroma component contained in at least one slice of the picture 12.
  • the picture-related quantization parameter QP C for the chroma component may be indicated in the data stream 14 by means of a) the above described picture-related quantization-parameter-offset-value O c only, or b) the above described picture-related QPindex value calculated from the picture-related mean quantization parameter QP Y for the luma component and the picture-related quantization-parameter-offset-value O c of the chroma component according to the equation QPindex = QP Y + O c .
  • the quantization parameter offset value may be a picture-related quantization parameter offset value O c indicating a frame-wise or a slice-wise calculated offset between a picture-related quantization parameter QP C for the chroma component and a picture-related mean quantization parameter QP Y for the luma component, the picture-related mean quantization parameter QP Y comprising one or more block-wise computed quantization parameters QP K for the luma component.
  • the activity value a Y of the luma component may be a block-related activity value a Y (B K ) being computed on a block-wise basis
  • the quantization parameter QP C for the chroma component may be a block-related quantization parameter QP C ⁇ B K ) being computed on a block-wise basis
  • the decoder 20 may be configured to apply, during decoding, the block-related chroma-QP QP C (B K ) on at least one chroma component contained in at least one block or sub-block B K of the picture 12.
  • the block-related quantization parameter QP C (B K ) for the chroma component may be indicated in the data stream 14 by means of a) the above described block-related quantization-parameter-offset-value O c (B K ) only, or b) the above described block-related QPindex(B K ) calculated from the block-related quantization parameter QP Y (B K ) for the luma component and the block-related quantization-parameter-offset-value O c (B K ) of the chroma component according to equation:
  • QPindex(B K ) = QP Y (B K ) + O c (B K ), wherein the block-related quantization parameter offset value O c (B K ) indicates a block-wise calculated offset between the block-related quantization parameter QP C (B K ) for the chroma component and the block-related quantization parameter QP Y (B K ) for the luma component.
  • Figure 6 shows a block diagram of a method for encoding a picture 12.
  • a quantization parameter QP K for a luma component may be computed in units of blocks B K of the picture, using an activity measure of the luma component.
  • a quantization parameter QP C for a chroma component may be computed based on a ratio a c / a Y between an activity value a Y of the luma component and an activity value a c of the chroma component.
  • Figure 7 shows a block diagram of a method for decoding a picture 12.
  • a quantization parameter QP K for a luma component of the picture 12 is derived from the data stream 14.
  • a quantization parameter QP C for a chroma component of the picture 12 is derived from the data stream 14.
  • the quantization parameter QP C for the chroma component is calculated based on a ratio a c / a Y between an activity a Y of the luma component and an activity a c of the chroma component.
  • Apparatus for encoding a picture configured to compute a quantization parameter (QP K ) for a luma component in units of blocks of the picture, using an activity measure of the luma component, and to compute a quantization parameter (QP C ) for a chroma component based on a ratio between an activity value of the luma component and an activity value of the chroma component.
  • the apparatus according to embodiment 1 configured to compute the quantization parameter (QP C ) for the chroma component in units larger than the blocks [slice-by-slice] or globally for the picture [frame-by-frame].
  • the apparatus according to embodiment 2, configured to insert the quantization parameter (QP C ) for the chroma component in the units larger than the blocks [slice-by-slice] or globally for the picture [frame-by-frame] into the data stream.
  • the apparatus configured to determine a value QP C of the quantization parameter for the chroma component (QP C ) from the ratio a c / a Y between a chroma channel’s mean activity a c and a luma channel’s mean activity a Y , for each slice of the picture or the whole picture, depending on a logarithm of a c / a Y if a c > a Y , and to be zero if a c ≤ a Y .
  • the apparatus configured to determine a value QP C of the quantization parameter for the chroma component (QP C ) for each slice of the picture or the whole picture, wherein
  • a weight exponent b has a value in the range 0 < b ≤ 1, and/or wherein a base of the logarithm is 2.
  • the apparatus configured to, in computing the quantization parameter (QP C ) for the chroma component, compute a picture-wise or a slice-wise mean luma quantization parameter (QP Y ), and adapt the quantization parameter (QP C ) for the chroma component using a luma-to-chroma-mapping function applied to the picture-wise or slice-wise mean luma quantization parameter (QP Y ).
  • the apparatus configured to compute the quantization parameter for the chroma component in units of further blocks of the picture.
  • the apparatus configured to determine a value of the quantization parameter for the chroma component from the ratio between a chroma channel’s mean activity a c (B k ) and a luma channel’s mean activity a Y (B k ) for each block of the picture, depending on a logarithm of a c (B k ) / a Y (B k ) if a c (B k ) > a Y (B k ), and to be zero otherwise, wherein
  • a weight exponent b has a value in the range 0 < b ≤ 1, and/or wherein a base of the logarithm is 2.
  • aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.
  • Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.
  • embodiments of the invention can be implemented in hardware or in software or at least partially in hardware or at least partially in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
  • Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
  • embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer.
  • the program code may for example be stored on a machine readable carrier.
  • other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
  • an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
  • a further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
  • the data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.
  • a further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein.
  • the data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
  • a further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
  • a further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
  • a further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver.
  • the receiver may, for example, be a computer, a mobile device, a memory device or the like.
  • the apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
  • a programmable logic device for example a field programmable gate array
  • a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein.
  • the methods are preferably performed by any hardware apparatus.
  • the apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
  • the methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
  • JEM 7 Joint Exploration Test Model 7


Abstract

The invention concerns an encoder (10), a decoder (20) and methods for an adaptive quantization in a multi-channel picture coding. The encoder (10) is configured to compute a quantization parameter (QP K ) for a luma component in units of blocks (B K ) of the picture (12), using an activity measure of the luma component, and to compute a quantization parameter (QP C ) for a chroma component based on a ratio, Formula (I), between an activity value (a Y ) of the luma component and an activity value (a C ) of the chroma component. The adaptation of the quantization parameter (QP C ) for the chroma component may be applied on a picture-wise basis and/or on a block-wise basis.

Description

ENCODER, DECODER AND METHOD FOR ADAPTIVE QUANTIZATION IN MULTI-CHANNEL PICTURE CODING
GENERAL DESCRIPTION
Embodiments of the present disclosure relate to an encoder for encoding a picture into a data stream, a decoder for decoding a picture from a data stream, corresponding methods for encoding and decoding, a computer readable digital storage medium, and a data stream. Particular embodiments may describe a concept of a luma-dependent adaptation of a chroma quantization parameter.
TECHNICAL BACKGROUND
In today's image processing, lossy coding schemes may be exploited in which pictures may be quantized so as to obtain a reduced version of the picture. That is, a certain range of values of the picture may be compressed into a single quantum value. The coarser the quantization, the greater the reduction in the number of discrete symbols and, thus, the higher the compressibility of the picture.
The so-called quantization parameter (QP) defines the granularity of the quantization. The quantization parameter may be adaptively adjusted based on the content of the picture. The picture content may be indicated in terms of an activity. For example, a picture or a region of said picture, which contains a high activity may contain a high variance (dispersion) in the texture and/or in the color of the picture. In turn, a picture or a region of said picture, which contains a low activity may contain a low variance (dispersion) in the texture and/or in the color of the picture.
An example for a picture region comprising a high activity may be a crown of a tree, which may comprise a plurality of small leaves that may further comprise different green or other color shades. Owing to the leaves of the tree, the dispersion will be relatively high. This dispersion may, thus, be used to set the QP relatively high for this picture region, since the non-smooth or non-flat leaf texture hides a relatively high amount of coding errors, owing to quantization errors, from the human eye. In turn, an example for a picture region comprising a low activity may be a wall of a house, which may be monochrome or may at least comprise large monochrome areas (i.e. has a smooth or flat texture). Here, the dispersion will be relatively low and, in fact, quantization errors may be readily perceptible. Accordingly, the QP may be set considerably lower compared to the crown of the tree. Furthermore, a picture that is to be compressed may be a still picture or a moving picture, the latter comprising a plurality of consecutive single frames. The picture can be a single-component picture or a multi-component picture. For example, a single-component picture may comprise a single color component, e.g. a grayscale or luminance (or luma) component only, while a multi-component picture may additionally comprise two or more color components, i.e. a luminance component and one or more chrominance (or chroma) components.
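The notion of activity as variance (dispersion) can be made concrete with a toy measure over a block's luma samples (codecs such as that of [2] use a high-pass-filter-based measure; this only illustrates the variance intuition):

```python
def block_activity(samples):
    """Toy activity measure: the sample variance of a block's luma
    values. A textured region (e.g. foliage) yields a high value, a
    flat region (e.g. a plain wall) a low one."""
    n = len(samples)
    mean = sum(samples) / n
    return sum((s - mean) ** 2 for s in samples) / n
```

A flat block of identical samples has activity 0 and would receive a comparatively fine QP; a strongly textured block can tolerate a coarser one.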
The Human Visual System (HVS) is considerably more sensitive to luma variations than to chroma variations. That means, the human eye perceives a variation in the luma component (brightness) of a picture to a considerably higher extent than a variation in the available chroma components (color shade). Accordingly, a variation in the luma component is more perceptible to a viewer than a variation of the chroma component.
For example, in HEVC a concept called perceptual AdaptiveQP may be used, wherein the quantization parameter may be adjusted based on the spatial activity of the pixel data in a luma coding block. That is, the more luma activity in a picture-region the higher the adaptive QP.
A further perceptual QP adaptation is described in [2], wherein the entire disclosure of [2] is incorporated herein by reference.
The perceptual QP adaptation (QPA) algorithm presented in [2] and employed in HHI's response to JVET's Call for Proposals (CfP) on video compression technology with capability beyond HEVC [4] is based on a subjectively motivated block-wise weighted distortion measure D derived from a local visual activity value:

D WSSE = Σ k w k · Σ (x,y) ∈ B k ( s[x, y] − s′[x, y] )²

where B denotes the sub-blocks (here CTUs) of the picture, indexed via k, w k is the perceptual weight (also called visual sensitivity measure), a k is the block's visual activity, a pic is the picture's mean visual activity, and s, s′ are the original and reconstructed pel values, respectively, of the picture's luma component. Using D WSSE , the picture's width W, height H, and component bit-depth BD, a weighted peak signal-to-noise ratio
WPSNR = 10 · log10( ( 2^BD − 1 )² · W · H / D WSSE )

can be obtained. Notice that, when w k = 1 for all blocks, this WPSNR definition reduces to the conventional PSNR metric used by the JVET. This is particularly true when b = 0. In [2], the use of b = 1/2 was suggested.
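The WPSNR computation can be sketched directly from the quantities defined above (per-block weights and per-block SSE; function and argument names are hypothetical):

```python
import math

def wpsnr(weights, sse_per_block, width, height, bit_depth=8):
    """Weighted PSNR sketch: each block's SSE is scaled by its
    perceptual weight w_k before the usual PSNR formula is applied.
    With all weights equal to 1 this reduces to conventional PSNR."""
    d_wsse = sum(w * sse for w, sse in zip(weights, sse_per_block))
    peak = (2 ** bit_depth - 1) ** 2
    return 10.0 * math.log10(peak * width * height / d_wsse)
```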
During encoding, the perceptual QPA makes use of w k to adapt in each CTU at k the perceptually optimized QP k and Lagrange parameter λ k based on the default pre-assigned fixed QP slice and Lagrange parameter λ slice :

QP k = QP slice − 3 · log2( w k ),  λ k = λ slice · 2^( ( QP k − QP slice ) / 3 )

Using the adapted QP k and λ k in each CTU block ensures that, in terms of visual quality and WPSNR, the coding distortion is optimally distributed within the given picture and within the pictures of the video signal. In other words, the visual sensitivity measure for a picture block k is given by w k = ( a pic / a k )^b with, preferably, a k limited from below by a min and derived from the high-pass filtered luma samples h[x, y] within block B k ,
where weight exponent 0 < b < 1, lower activity limit 0 < a min < 8, and h[ ] = filter(s[ ]) are the high-pass filtered input (i.e., original) samples of the picture's luminance component, as described in detail in [2].
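A sketch of the sensitivity weight and the resulting QP adaptation, under the assumption that the weight is the b-power of the ratio of mean picture activity to block activity and that one weight-octave corresponds to 3 QP steps (these constants are assumptions for illustration):

```python
import math

def block_weight(act_block, act_pic_mean, beta=0.5, a_min=2.0):
    """Perceptual weight w_k = (a_pic / a_k)**beta, with the block
    activity floored at a_min. beta and a_min are illustrative."""
    a_k = max(a_min, act_block)
    return (act_pic_mean / a_k) ** beta

def adapted_qp(qp_slice, w_k):
    """QP offset of -3*log2(w_k): low-activity blocks (w_k > 1) get a
    finer QP, high-activity blocks (w_k < 1) a coarser one."""
    return qp_slice - round(3.0 * math.log2(w_k))
```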
Note that, instead of the above block-wise adaptation, a generalized picture-wise adaptation of a fixed user-defined QP value QP slice can be realized as follows, but with a picture-wise sensitivity value w Y :

[equation images imgf000005_0003 and imgf000005_0004: definition of the picture-wise sensitivity value w Y and of the resulting picture-wise adapted QP]

with Y denoting the image's luminance component, contained in the picture sample buffer P Y , with B Y ∈ P Y .
The perceptual QP adaptation described in [2] leads to a much better coding efficiency compared to using fixed QP values instead.
However, it would be desirable to provide an even higher gain in coding efficiency without significant loss in the picture quality. Accordingly, it is an object of the present invention to improve existing quantization schemes, in particular for multi-component pictures.
According to the invention, this problem is solved with an encoder according to claim 1, a decoder according to claim 14, a method for encoding according to claim 22, a method for decoding according to claim 23, a data stream according to claim 24 and a computer readable digital storage medium according to claim 25. Particular and preferable embodiments are mentioned in the dependent claims.
The inventive encoder is configured to compute a quantization parameter QP K for a luma component in units of blocks B K of the picture, using an activity measure of the luma component. According to this aspect of the invention, the encoder is further configured to compute a quantization parameter QP C for a chroma component based on a ratio a c / a Y between an activity value a Y of the luma component and an activity value a c of the chroma component. That is, in a first step the quantization parameter QP K for the luma component of the picture may be calculated for one or more blocks of the picture, and preferably for each block of the picture, i.e. on a block-wise basis. Accordingly, for one or more (or each) block of the picture, a QP can be obtained that may be adaptable to the luma component of the respective block. The adaptation of the luma-QP may be based on a measure of the visual activity of the luma component, i.e. of the luma coding block. The visual activity is indicated by an activity value a Y of the luma component. Said activity value a Y of the luma component of the respective block may be set in relation to an activity value a c of a chroma component of the same block or of the entire picture. Then, in a second step, an adaptive QP for the chroma component is computed based on the above mentioned relation between the activity value a Y of the luma component and the activity value a c of the chroma component, i.e. on the ratio a c / a Y . The adaptive QP for the chroma component may be computed for one or more blocks separately, i.e. block-wise, or for a certain arrangement of blocks, e.g. slice-wise, or for the entire picture, i.e. frame-wise or picture-wise. In either case, the computed adaptive QP for the chroma component may then be transmitted in the bit stream to the decoder. The inventive decoder may derive the above mentioned adaptive QPs from the bit stream.
In particular, the decoder may derive the quantization parameter QP K for the luma component in units of blocks of the picture, i.e. the block-wise computed QP K . The decoder may further derive the quantization parameter QP C for the chroma component in units of further blocks of the picture. Said further blocks may represent a single block or sub-block (e.g. a block-wise QP-adaptation) or a plurality of blocks being arranged in a certain order (e.g. a picture-wise QP-adaptation, e.g. slice-wise) or the entire picture (e.g. a picture-wise QP-adaptation, e.g. frame-wise). As mentioned above, the adaptive QP for the chroma component may have been computed for one or more blocks separately, i.e. block-wise, or for a certain arrangement of blocks, e.g. slice-wise, or for the entire picture, i.e. frame-wise or picture-wise.
Furthermore, a method for encoding a picture is suggested, wherein the method comprises steps of computing a quantization parameter QP K for a luma component in units of blocks B K of the picture, using an activity measure of the luma component, and computing a quantization parameter QP C for a chroma component based on a ratio a c / a Y between an activity value a Y of the luma component and an activity value a c of the chroma component.
Still further, a method for decoding a picture is suggested, wherein the method comprises steps of deriving from a bit stream a quantization parameter QPK for a luma component of the picture, and a quantization parameter for a chroma component QPC of the picture, wherein the quantization parameter (QPC) for the chroma component is calculated based on a ratio between an activity aY of the luma component and an activity ac of the chroma component.
In the following, embodiments of the present disclosure are described in more detail with reference to the figures, in which Fig. 1 shows a schematic block diagram of an encoder for block-based coding, which encoder may be used to apply the inventive principle,
Fig. 2 shows a schematic block diagram of a decoder for block-based decoding, which decoder may be used to apply the inventive principle,
Fig. 3 shows a schematic drawing of a picture being partitioned into blocks and sub-blocks, Fig. 4 shows a schematic block diagram of an encoder according to an embodiment,
Fig. 5 shows a schematic block diagram of a decoder according to an embodiment, Fig. 6 shows a schematic block diagram of a method for encoding according to an embodiment, and
Fig. 7 shows a schematic block diagram of a method for decoding according to an embodiment.
DESCRIPTION OF THE FIGURES
Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description by equal or equivalent reference numerals.
Method steps which are depicted by means of a block diagram and which are described with reference to said block diagram may also be executed in an order different from the depicted and/or described order. Furthermore, method steps concerning a particular feature of a device may be replaceable with said feature of said device, and the other way around.
Furthermore, the following terminology may be used in order to provide a clear separation between different embodiments: For instance, a computation may be executed block-wise or picture-wise. The result of a block-wise computation may provide a block-related value being specific for the particular block on which the computation was executed. A picture-wise computation may be executed slice-wise, i.e. on a slice-by-slice basis, or frame-wise, i.e. on a frame-by-frame basis. The result of a picture-wise computation may provide a picture-related value being specific for the particular slice or frame on which the computation was executed.
The following description of the figures starts with a presentation of a description of an encoder and a decoder of a block-based predictive codec for coding pictures of a video, in order to form an example for a coding framework into which embodiments of the present invention may be built. The respective encoder and decoder are described with respect to Figures 1 to 3. Thereafter, the description of embodiments of the concept of the present invention is presented along with a description as to how such concepts could be built into the encoder and decoder of Figures 1 and 2, respectively, although the embodiments described with the subsequent Figures 4 and following may also be used to form encoders and decoders not operating according to the coding framework underlying the encoder and decoder of Figures 1 and 2.
Figure 1 shows an apparatus for predictively coding a picture 12 into a data stream 14, exemplarily using transform-based residual coding. The apparatus, or encoder, is indicated using reference sign 10. Figure 2 shows a corresponding decoder 20, i.e. an apparatus 20 configured to predictively decode the picture 12’ from the data stream 14, also using transform-based residual decoding, wherein the apostrophe has been used to indicate that the picture 12’ as reconstructed by the decoder 20 deviates from picture 12 originally encoded by apparatus 10 in terms of coding loss introduced by a quantization of the prediction residual signal. Figure 1 and Figure 2 exemplarily use transform-based prediction residual coding, although embodiments of the present application are not restricted to this kind of prediction residual coding. This is true for other details described with respect to Figures 1 and 2, too, as will be outlined hereinafter.
The encoder 10 is configured to subject the prediction residual signal to a spatial-to-spectral transformation and to encode the prediction residual signal, thus obtained, into the data stream 14. Likewise, the decoder 20 is configured to decode the prediction residual signal from the data stream 14 and subject the prediction residual signal thus obtained to a spectral-to-spatial transformation.
Internally, the encoder 10 may comprise a prediction residual signal former 22 which generates a prediction residual 24 so as to measure a deviation of a prediction signal 26 from the original signal, i.e. from the picture 12. The prediction residual signal former 22 may, for instance, be a subtractor which subtracts the prediction signal from the original signal, i.e. from the picture 12. The encoder 10 then further comprises a transformer 28 which subjects the prediction residual signal 24 to a spatial-to-spectral transformation to obtain a spectral-domain prediction residual signal 24’ which is then subject to quantization by a quantizer 32, also comprised by the encoder 10. The thus quantized prediction residual signal 24” is coded into bitstream 14. To this end, encoder 10 may optionally comprise an entropy coder 34 which entropy codes the prediction residual signal as transformed and quantized into data stream 14. The prediction signal 26 is generated by a prediction stage 36 of encoder 10 on the basis of the prediction residual signal 24” encoded into, and decodable from, data stream 14. To this end, the prediction stage 36 may internally, as is shown in Figure 1, comprise a dequantizer 38 which dequantizes prediction residual signal 24” so as to gain spectral-domain prediction residual signal 24’”, which corresponds to signal 24’ except for quantization loss, followed by an inverse transformer 40 which subjects the latter prediction residual signal 24”’ to an inverse transformation, i.e. a spectral-to-spatial transformation, to obtain prediction residual signal 24””, which corresponds to the original prediction residual signal 24 except for quantization loss. A combiner 42 of the prediction stage 36 then recombines, such as by addition, the prediction signal 26 and the prediction residual signal 24”” so as to obtain a reconstructed signal 46, i.e. a reconstruction of the original signal 12. Reconstructed signal 46 may correspond to signal 12’.
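The encoder-internal flow just described (reference signs 22 to 42) can be sketched as follows. This is only an illustrative sketch: the function names and the pluggable transform/quantizer callables are assumptions, not part of the original disclosure.

```python
def encode_block(original, prediction, quantize, dequantize, transform, inverse_transform):
    # Prediction residual signal former 22: deviation of the prediction
    # signal 26 from the original signal 12.
    residual = [o - p for o, p in zip(original, prediction)]
    # Transformer 28 and quantizer 32: spatial-to-spectral transformation,
    # then quantization of the spectral-domain residual.
    levels = quantize(transform(residual))
    # Dequantizer 38 and inverse transformer 40: reconstruct the residual
    # exactly as the decoder will see it, including quantization loss.
    recon_residual = inverse_transform(dequantize(levels))
    # Combiner 42: reconstructed signal 46, used for further prediction.
    reconstruction = [p + r for p, r in zip(prediction, recon_residual)]
    return levels, reconstruction
```

With identity transforms and a step-2 quantizer, the reconstruction deviates from the original only by the quantization loss, mirroring the relation between picture 12 and picture 12’.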
A prediction module 44 of prediction stage 36 then generates the prediction signal 26 on the basis of signal 46 by using, for instance, spatial prediction, i.e. intra-picture prediction, and/or temporal prediction, i.e. inter-picture prediction.
Likewise, decoder 20, as shown in Figure 2, may be internally composed of components corresponding to, and interconnected in a manner corresponding to, prediction stage 36.
In particular, entropy decoder 50 of decoder 20 may entropy decode the quantized spectral-domain prediction residual signal 24” from the data stream, whereupon dequantizer 52, inverse transformer 54, combiner 56 and prediction module 58, interconnected and cooperating in the manner described above with respect to the modules of prediction stage 36, recover the reconstructed signal on the basis of prediction residual signal 24” so that, as shown in Figure 2, the output of combiner 56 results in the reconstructed signal, namely picture 12’.
Although not specifically described above, it is readily clear that the encoder 10 may set some coding parameters including, for instance, prediction modes, motion parameters and the like, according to some optimization scheme such as, for instance, in a manner optimizing some rate and distortion related criterion, i.e. coding cost. For example, encoder 10 and decoder 20 and the corresponding modules 44, 58, respectively, may support different prediction modes such as intra-coding modes and inter-coding modes. The granularity at which encoder and decoder switch between these prediction mode types may correspond to a subdivision of picture 12 and 12’, respectively, into coding segments or coding blocks. In units of these coding segments, for instance, the picture may be subdivided into blocks being intra-coded and blocks being inter-coded. Intra-coded blocks are predicted on the basis of a spatial, already coded/decoded neighborhood of the respective block as is outlined in more detail below. Several intra-coding modes may exist and be selected for a respective intra-coded segment including directional or angular intra-coding modes according to which the respective segment is filled by extrapolating the sample values of the neighborhood along a certain direction which is specific for the respective directional intra-coding mode, into the respective intra-coded segment.
The intra-coding modes may, for instance, also comprise one or more further modes such as a DC coding mode, according to which the prediction for the respective intra-coded block assigns a DC value to all samples within the respective intra-coded segment, and/or a planar intra-coding mode according to which the prediction of the respective block is approximated or determined to be a spatial distribution of sample values described by a two-dimensional linear function over the sample positions of the respective intra-coded block, with deriving tilt and offset of the plane defined by the two-dimensional linear function on the basis of the neighboring samples. Compared thereto, inter-coded blocks may be predicted, for instance, temporally. For inter-coded blocks, motion vectors may be signaled within the data stream, the motion vectors indicating the spatial displacement of the portion of a previously coded picture of the video to which picture 12 belongs, at which the previously coded/decoded picture is sampled in order to obtain the prediction signal for the respective inter-coded block. This means, in addition to the residual signal coding comprised by data stream 14, such as the entropy-coded transform coefficient levels representing the quantized spectral-domain prediction residual signal 24’’, data stream 14 may have encoded thereinto coding mode parameters for assigning the coding modes to the various blocks, prediction parameters for some of the blocks, such as motion parameters for inter-coded segments, and optional further parameters such as parameters for controlling and signaling the subdivision of picture 12 and 12’, respectively, into the segments. The decoder 20 uses these parameters to subdivide the picture in the same manner as the encoder did, to assign the same prediction modes to the segments, and to perform the same prediction to result in the same prediction signal.
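As an illustration of the DC coding mode mentioned above, a minimal sketch (the function name and argument layout are assumptions; real codecs add rounding offsets and availability checks):

```python
def dc_intra_prediction(top_neighbors, left_neighbors, block_size):
    # DC mode: a single DC value, the mean of the already coded/decoded
    # neighboring samples, is assigned to all samples of the block.
    samples = top_neighbors[:block_size] + left_neighbors[:block_size]
    dc = round(sum(samples) / len(samples))
    return [[dc] * block_size for _ in range(block_size)]
```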
Figure 3 illustrates the relationship between the reconstructed signal, i.e. the reconstructed picture 12’, on the one hand, and the combination of the prediction residual signal 24”” as signaled in the data stream 14, and the prediction signal 26, on the other hand. As already denoted above, the combination may be an addition. The prediction signal 26 is illustrated in Figure 3 as a subdivision of the picture area into intra-coded blocks which are illustratively indicated using hatching, and inter-coded blocks which are illustratively indicated not-hatched. The subdivision may be any subdivision, such as a regular subdivision of the picture area into rows and columns of square blocks or non-square blocks, or a multi-tree subdivision of picture 12 from a tree root block into a plurality of leaf blocks of varying size, such as a quadtree subdivision or the like, wherein a mixture thereof is illustrated in Figure 3 in which the picture area is first subdivided into rows and columns of tree root blocks which are then further subdivided in accordance with a recursive multi-tree subdivisioning into one or more leaf blocks.
Again, data stream 14 may have an intra-coding mode coded thereinto for intra-coded blocks 80, which assigns one of several supported intra-coding modes to the respective intra-coded block 80. For inter-coded blocks 82, the data stream 14 may have one or more motion parameters coded thereinto. Generally speaking, inter-coded blocks 82 are not restricted to being temporally coded. Alternatively, inter-coded blocks 82 may be any block predicted from previously coded portions beyond the current picture 12 itself, such as previously coded pictures of a video to which picture 12 belongs, or a picture of another view or a hierarchically lower layer in the case of encoder and decoder being scalable encoders and decoders, respectively.
The prediction residual signal 24”” in Figure 3 is also illustrated as a subdivision of the picture area into blocks 84. These blocks might be called transform blocks in order to distinguish same from the coding blocks 80 and 82. In effect, Figure 3 illustrates that encoder 10 and decoder 20 may use two different subdivisions of picture 12 and picture 12’, respectively, into blocks, namely one subdivisioning into coding blocks 80 and 82, respectively, and another subdivision into transform blocks 84. Both subdivisions might be the same, i.e. each coding block 80 and 82 may concurrently form a transform block 84, but Figure 3 illustrates the case where, for instance, a subdivision into transform blocks 84 forms an extension of the subdivision into coding blocks 80, 82 so that any border between two blocks of blocks 80 and 82 overlays a border between two blocks 84, or alternatively speaking each block 80, 82 either coincides with one of the transform blocks 84 or coincides with a cluster of transform blocks 84. However, the subdivisions may also be determined or selected independent from each other so that transform blocks 84 could alternatively cross block borders between blocks 80, 82. As far as the subdivision into transform blocks 84 is concerned, similar statements are thus true as those brought forward with respect to the subdivision into blocks 80, 82, i.e. the blocks 84 may be the result of a regular subdivision of picture area into blocks (with or without arrangement into rows and columns), the result of a recursive multi-tree subdivisioning of the picture area, or a combination thereof or any other sort of block partitioning. Just as an aside, it is noted that blocks 80, 82 and 84 are not restricted to being of quadratic, rectangular or any other shape. Figure 3 further illustrates that the combination of the prediction signal 26 and the prediction residual signal 24”” directly results in the reconstructed signal 12’.
However, it should be noted that more than one prediction signal 26 may be combined with the prediction residual signal 24”” to result in picture 12’ in accordance with alternative embodiments. In Figure 3, the transform blocks 84 shall have the following significance. Transformer 28 and inverse transformer 54 perform their transformations in units of these transform blocks 84. For instance, many codecs use some sort of DST or DCT for all transform blocks 84. Some codecs allow for skipping the transformation so that, for some of the transform blocks 84, the prediction residual signal is coded in the spatial domain directly. However, in accordance with embodiments described below, encoder 10 and decoder 20 are configured in such a manner that they support several transforms. For example, the transforms supported by encoder 10 and decoder 20 could comprise: o DCT-II (or DCT-III), where DCT stands for Discrete Cosine Transform
o DST-IV, where DST stands for Discrete Sine Transform
o DCT-IV
o DST-VII
o Identity Transformation (IT)
Naturally, while transformer 28 would support all of the forward transform versions of these transforms, the decoder 20 or inverse transformer 54 would support the corresponding backward or inverse versions thereof: o Inverse DCT-II (or inverse DCT-III)
o Inverse DST-IV
o Inverse DCT-IV
o Inverse DST-VII
o Identity Transformation (IT)
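To illustrate one forward/inverse pair from the lists above, a straightforward (non-optimized) orthonormal DCT-II and its inverse, the DCT-III, can be sketched as follows; the function names are illustrative, and practical codecs use integer-approximated, factorized transforms instead.

```python
import math

def dct_ii(x):
    # Forward DCT-II of a 1-D block, with orthonormal scaling.
    N = len(x)
    return [
        math.sqrt((1 if k == 0 else 2) / N)
        * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
        for k in range(N)
    ]

def dct_iii(X):
    # DCT-III: the inverse of the orthonormal DCT-II above.
    N = len(X)
    return [
        sum(
            math.sqrt((1 if k == 0 else 2) / N)
            * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
            for k in range(N)
        )
        for n in range(N)
    ]
```

Because the orthonormal DCT-II matrix is orthogonal, applying the DCT-III to the DCT-II output recovers the input block up to floating-point error.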
As already outlined above, Figures 1 to 3 have been presented as an example where the inventive concept described further below may be implemented in order to form specific examples for encoders and decoders according to the present application. Insofar, the encoder and decoder of Figures 1 and 2, respectively, may represent possible implementations of the encoders and decoders described herein below. Figures 1 and 2 are, however, only examples. An encoder according to embodiments of the present application may, however, perform block-based encoding of a picture 12 using the concept outlined in more detail below and being different from the encoder of Figure 1 such as, for instance, in that same is no video encoder, but a still picture encoder, in that same does not support inter-prediction, or in that the sub-division into blocks 80 is performed in a manner different than exemplified in Figure 3. Likewise, decoders according to embodiments of the present application may perform block-based decoding of picture 12’ from data stream 14 using the coding concept further outlined below, but may differ, for instance, from the decoder 20 of Figure 2 in that same is no video decoder, but a still picture decoder, in that same does not support intra-prediction, or in that same sub-divides picture 12’ into blocks in a manner different than described with respect to Figure 3 and/or in that same does not derive the prediction residual from the data stream 14 in transform domain, but in spatial domain, for instance.
The innovative concepts described herein are concerned with an approach for a perceptual QP adaptation, and in particular with a luma-dependent QP-adaptation for a chroma component, that shall be explained in detail in the following with reference to the Figures.
Figure 4 shows an inventive encoder 10 for encoding a picture 12 exploiting the innovative QP-adaptation approach. The picture 12 may be a still picture or a moving picture sequence of consecutive images or frames. The encoder 10 may, for instance, comprise an above mentioned quantizer 32 for quantizing the picture 12.
The quantizer 32 and, thus, the encoder 10 may be configured to compute a quantization parameter QPK for a luma component in units of blocks 80, 82, 84 of the picture 12. That is, for one or more blocks 80, 82, 84, and preferably for each block of the picture 12, its corresponding block-related or block-specific quantization parameter QPK for the luma component may be determined. More generally speaking, the quantization parameter QPK for the luma component may be determined in units of blocks BK. The quantization parameter QPK for the luma component may also be referred to as the luma-QP. Since the luma-QP is block-related or block-specific, the luma-QP is denoted by QPK. The luma-QP QPK may be determined by means of an activity measure of the luma component of the respective block. The activity measure gives an activity value aY of the luma component.
According to the herein described innovative principle, the encoder 10 may be configured to compute a quantization parameter for a chroma component depending on the luma component. The quantization parameter for the chroma component may also be referred to as the chroma-QP and may be denoted with QPC. An activity measure for the chroma component may be executed, which gives an activity value aC of the chroma component. The computation of the chroma-QP QPC is based on a ratio aC/aY between the activity value aY of the luma component and the activity value aC of the chroma component. Accordingly, the innovative encoder 10 may be configured to determine an adaptive, luma-dependent chroma-QP.
The block-based luma-QP QPK and the chroma-QP QPC may be signaled in the data stream 14. The chroma-QP QPC may be signaled by means of a QP index value indicating the luma-QP plus/minus a chroma-QP offset value Oc or by means of the chroma-QP offset value Oc only. Details about the signaling will follow somewhat later in the text.
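The two signaling alternatives just mentioned, a final QP index versus an offset-only value, can be sketched as follows; the function names are illustrative and not syntax elements of any standard.

```python
def write_chroma_qp(qp_luma, offset_c, as_index):
    # Encoder side: either the final chroma QP index (luma QP plus the
    # chroma-QP offset value Oc) or only the offset Oc enters the bit stream.
    return qp_luma + offset_c if as_index else offset_c

def read_chroma_qp(signaled_value, qp_luma, as_index):
    # Decoder side: both signaling variants recover the same chroma QP.
    return signaled_value if as_index else qp_luma + signaled_value
```

Either variant yields the identical chroma QP at the decoder; the choice only affects how many bits the signaling itself costs.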
As mentioned above, the computation of the luma-QP QPK may be executed on a block-wise basis. For example, the block-wise computed luma-QP QPK may be determined by an approach according to [2], the content of which is explicitly incorporated herein by reference.
The luma-dependent computation of the chroma-QP QPC according to the innovative principle as described herein may be executed on a block-wise basis or on a picture-wise basis. The following description firstly discusses the picture-wise computation of the luma-dependent chroma-QP QPC and subsequently the block-wise computation of the luma-dependent chroma-QP QPC.
Picture-wise chroma-QP adaptation
The chroma-QP QPC may be computed on a picture-wise basis. That is, an adaptive chroma-QP QPC may be computed that is applicable to the entire picture 12, which may be a single image or picture, respectively, or a plurality of consecutive images in a video. In the picture-wise approach, the chroma-QP QPC may be referred to as a picture-related chroma-QP or as a picture-related quantization parameter QPC for the chroma component, respectively.
The picture-related quantization parameter QPC for the chroma component may be computed on a frame-by-frame basis, wherein the picture-related quantization parameter QPC may be computed for at least one chroma component contained in at least one frame of a moving picture sequence, or on a slice-by-slice basis, wherein the picture-related quantization parameter QPC may be computed for at least one chroma component contained in at least one slice of the picture 12. For example, a mean picture-related quantization parameter QPC may be computed for the whole picture. For this, a mean value over one or more available chroma-QPs, and preferably over each available chroma-QP, may be calculated. Furthermore, a mean chroma activity value ac over one or more available chroma components, and preferably over each available chroma component, may be calculated.
Furthermore, if the chroma-QP QPC is computed on a picture-wise basis (e.g. frame-by-frame or slice-by-slice), a mean value over one or more block-wise computed luma-QPs QPK, and preferably over each block-wise computed luma-QP QPK, may be calculated. Accordingly, the luma-QP may be a picture-wise mean luma-QP denoted with Q̄PY. Additionally or alternatively, also the activity value aY of the luma component may be a picture-wise mean activity value āY over one or more block-wise activity values aK, and preferably over each block-wise activity value aK, of the luma component.
According to this exemplary embodiment, the quantization parameter QPC for the chroma component may be computed picture-wise such that the chroma-QP QPC is a picture-related quantization parameter in this case. The picture-related chroma-QP QPC may be computed based on the ratio between the picture-wise activity value aC of the chroma component and the picture-wise activity value aY of the luma component, i.e. based on the ratio aC/aY, wherein the picture-wise activity value aY of the luma component may be the above mentioned picture-wise mean luma activity value āY. Additionally or alternatively, the picture-wise activity value aC of the chroma component may be the above mentioned picture-wise mean chroma activity value āC.
Again, the picture-wise approach may include applying the herein described concept on a slice-by-slice basis, wherein the herein described principle may be applied to one or more slices, and preferably to each slice, of the picture. Alternatively, the picture-wise approach may include applying the herein described concept on a frame-by-frame basis, wherein the herein described principle may be applied to one or more frames, and preferably to each frame, of a plurality of consecutive frames in a video.
Briefly summarizing, the concept of this embodiment is to apply the above discussed block-wise perceptual QP adaptation to the luma component of an image or video first, wherein the concept of [2] may be applied. Then, as an inventive approach, a separate and independent adaptation of the picture-wise mean QP value applied to each chroma component, based on the previously determined visual activity statistics for the luma channel, is carried out. In HEVC and its codec successors, a predefined luma-QP-to-chroma-QP mapping is employed during the quantization and reconstruction of 4:2:0 coded input (ChromaArrayType = 1); see Table 8-10 in [10], Section 8.6.1, for details. This mapping effectively reduces the chroma QP, and thereby the chroma quantization distortion, at high luma QPs near QPmax, which may explain why, at low coding bit-rates, the use of positive chroma QP offsets has recently been suggested. In combination with the proposed perceptual QPA, such a reduction of the chroma QP values relative to the luma QP values seems to be unnecessary in terms of visual reconstruction quality, particularly when a cross-component predictive coding technique such as the cross-component linear model (CCLM) chroma predictor [11] is utilized. In fact, excessive fixed redistribution of coding bits to the chroma channels at low rates potentially leads to visible quality reduction on individual video sequences.
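The fixed luma-to-chroma QP mapping referred to above can be expressed as a small lookup; the values below follow Table 8-10 of the HEVC specification [10] for ChromaArrayType = 1 (the function name is illustrative).

```python
def qpc_table_8_10(qpi):
    # HEVC Table 8-10 (ChromaArrayType == 1): maps the chroma QP index
    # qPi to the chroma quantization parameter QpC.
    if qpi < 30:
        return qpi
    if qpi > 43:
        return qpi - 6
    # Non-linear region: QpC grows more slowly than qPi, which reduces
    # the chroma quantization distortion at high luma QPs near QPmax.
    table = {30: 29, 31: 30, 32: 31, 33: 32, 34: 33, 35: 33,
             36: 34, 37: 34, 38: 35, 39: 35, 40: 36, 41: 36,
             42: 37, 43: 37}
    return table[qpi]
```

For example, at the maximum index 51 the mapping yields 45, i.e. the chroma QP is six units below the luma QP, which is the fixed redistribution of bits toward chroma that the text criticizes at low rates.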
Given the abovementioned disadvantages of a fixed, input-independent luma-to-chroma QP adjustment via, e.g., the transmission of nonzero pps_cb_qp_offset and pps_cr_qp_offset values in the PPS header [10], the inventive principle proposes to, when perceptual QPA is enabled, employ input-dependent adaptive luma-to-chroma QP offsets on a frame-by-frame or a slice-by-slice basis. Specifically, it is suggested to extend the luma-based QPA approach of [2] by the following backward-compatible, HEVC-syntax-compliant chroma QPA method:
1. When the -PerceptQPA or -SliceChromaQPOffsetPeriodicity encoder configuration parameter is provided with a positive value, pps_slice_chroma_qp_offsets_present_flag = 1 may be set in the coded PPS.
2. For each picture 12 (e.g. for each slice or frame) and chroma component c ∈ {Cb, Cr} present in the coded image or video, a picture-related chroma-QP value QPC may be determined from the ratio between said chroma channel’s activity aC and the luma activity aY:
[equation reproduced as an image in the original document: QPC as a function of the activity ratio aC/aY]
where the chroma channel’s activity value aC may be the chroma channel’s mean activity value āC and the luma activity value aY may be the mean luma activity value āY; subscript Y denotes the luma component, 0 < β ≤ 1 as previously, k ∈ {Y, Cb, Cr}, and
[equation reproduced as an image in the original document: definition of the component’s frame-buffer region Bk]
is the component’s frame buffer, excluding the border pel rows and columns to prevent the high-pass filter from extending beyond the picture boundaries. Value a is a constant. Preferably, a = 4.
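Since the activity equations are reproduced only as images in the source, the following is merely a hypothetical realization of such a high-pass-based activity measure: the filter choice, the mean-absolute normalization, and the omission of the exponent β are all assumptions for illustration, not the patent's formula.

```python
def activity(frame):
    # Hypothetical visual-activity measure: mean absolute response of a
    # simple 4-neighbour high-pass filter, evaluated over the frame buffer
    # excluding the one-pel border rows and columns, so the filter never
    # extends beyond the picture boundaries (as the text requires).
    h, w = len(frame), len(frame[0])
    total, count = 0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hp = (4 * frame[y][x] - frame[y - 1][x] - frame[y + 1][x]
                  - frame[y][x - 1] - frame[y][x + 1])
            total += abs(hp)
            count += 1
    return total / count
```

A flat component yields zero activity, while textured content yields larger values, which is the property the QP adaptation relies on.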
3. The picture-wise (e.g. frame-wise or slice-wise) mean luma QP Q̄PY may be obtained, e.g. by using the apic according to [2]: Q̄PY = QPslice.
4. A difference d between Q̄PY and the luma-to-chroma mapping of Q̄PY may be obtained according to [10]: d = Q̄PY − QpC(Q̄PY), where QpC() := chroma QP lookup via HEVC Table 8-10. Note: steps 3 and 4 are optional. They can be bypassed by assuming a zero difference, i.e., d = 0.
5. A picture-related (e.g. slice-wise or frame-wise) offset Oc may be set according to Oc = min(Omax + d, QPC + d) with Omax = 3. This chroma QP adaptation effectively lowers the coding bit-rate of the chroma channel when its mean visual activity is relatively high compared to the luma channel’s visual activity. At the same time, it compensates for the picture-averaged fixed luma-to-chroma QP mapping QpC() to prevent undesired re-shifting of coding bits from luma to chroma.
The upper limit Omax is introduced to prevent very coarse chroma-channel quantization on some sequences at lower bit-rates. Within the CTC sequence set, it only affects the encoding of the ParkRunning3 sequence. It is worth mentioning that the slice-wise offsets Oc, which can be transmitted as the slice_cb_qp_offset and slice_cr_qp_offset elements in HEVC and VTM/BMS-1, can still be combined with non-zero PPS-wise chroma QP offsets (pps_cb_qp_offset and pps_cr_qp_offset) in the traditional fashion known from HEVC [10]. In summary, the additional derivation of a QP offset Oc for each chroma component c represents a separate, independent quantization parameter adaptation for the components of a multi-component image or video. Said picture-wise chroma QP offset Oc may be applied in the encoding and decoding of the image or video. For example, use Q̄PY + Oc in the encoder-side quantization and decoder-side dequantization (also called inverse quantization) of any prediction residual in the given chroma component c.
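Steps 3 to 5 above can be sketched as follows. Here `qpc_mapping` stands for the fixed luma-to-chroma lookup (HEVC Table 8-10) and `qp_c` for the activity-ratio-based chroma QP of step 2; both are supplied by the caller, since their exact derivations are given elsewhere. A sketch, not the normative derivation.

```python
def chroma_qp_offset(mean_luma_qp, qp_c, qpc_mapping, o_max=3):
    # Step 4: difference between the mean luma QP and its fixed
    # luma-to-chroma mapping; optional, may be bypassed with d = 0.
    d = mean_luma_qp - qpc_mapping(mean_luma_qp)
    # Step 5: picture-related offset Oc = min(Omax + d, QPC + d), where the
    # upper limit Omax = 3 prevents very coarse chroma quantization.
    return min(o_max + d, qp_c + d)
```

The final chroma QP used in encoder-side quantization and decoder-side dequantization is then the mean luma QP plus this offset Oc.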
In order to allow for correct decoder-side dequantization according to this proposal, the chroma QP data Q̄PY + Oc, or at least the offsets Oc, may be conveyed from the encoder to the decoder. Therefore, for each picture 12, either a final chroma QP index = Q̄PY + Oc or a chroma QP offset value Oc only may be transmitted as an additional part of the coded bit-stream 14. Note that both the final QP indices and the chroma QP offsets could be transmitted as entropy-coded QP value differences (i.e., delta-QPs) according to the state of the art known from, e.g., HEVC [1], [2].
Summarizing, in the above discussed picture-wise approach the activity value of the luma component may be a picture-related mean luma activity value being valid for the entire picture 12, i.e. aY = āY, and the luma-QP may be a picture-related mean luma-QP being valid for the entire picture 12, i.e. QPK = Q̄PY. Additionally, the activity value of the chroma component may be a picture-related mean chroma activity value being valid for the entire picture 12, i.e. aC = āC, and the chroma-QP may be a picture-related mean chroma-QP being valid for the entire picture, i.e. QPC = Q̄PC.
Block-wise chroma-QP adaptation
The chroma-QP QPC may be computed on a block-wise basis. That is, an adaptive chroma-QP QPC may be computed that is applicable to one or more available blocks or sub-blocks 80, 82, 84, and preferably to each available block or sub-block 80, 82, 84, of the picture 12. In the block-wise approach, the chroma-QP may be referred to as a block-related chroma-QP or as a block-related quantization parameter for the chroma component, respectively. In order to draw a distinction over the denotation of the above discussed picture-wise QP adaptation, the block-wise chroma QP may be denoted with QPC(BK). The block-related quantization parameter QPC(BK) for the chroma component may be computed on a block-wise basis, wherein the block-related quantization parameter QPC(BK) may be computed for at least one chroma component contained in at least one block (BK) of the picture 12. In terms of HEVC, the at least one block may correspond to a Coding Tree Unit (CTU) or a Coding Unit (CU). Accordingly, the block-related quantization parameter QPC(BK) may be computed on CTU-level and/or on CU-level. However, the inventive concept may also be applicable to other blocks or sub-blocks different from CUs and CTUs.
Furthermore, if the chroma-QP QPC(BK) is computed on a block-wise basis, the activity value aC of the chroma component may also be a block-related activity value, denoted with aC(BK), and being computed on a block-wise basis. Furthermore, the activity value aY of the luma component may be a block-related activity value, denoted with aY(BK), and being computed on a block-wise basis. If the activity value of the luma component is obtained according to [2], then it may directly be a block-related luma activity value aY(BK). This block-related luma activity value aY(BK) may be directly used in the inventive approach as described herein. There may be no need for determining a mean luma activity āY as in the above discussed picture-wise approach.
Briefly summarizing, the idea behind this embodiment is to apply, in an encoder 10, the above block-wise perceptual QP adaptation to the luma component of an image or video and, in a manner similar to that of the first inventive approach as discussed in the section (picture-wise QP adaptation) above, to apply an also block-wise QP adaptation to at least one chroma component of said image or video. Then, the adapted block-wise chroma QP values QPC(BK) may be signaled to a decoder 20 alongside the existing signaled luma QP values QPY(BK). In other words, instead of applying a single chroma QP adaptation in each chroma component of each picture 12 and signaling these adapted picture-wise chroma QP data according to the state of the art [1], [10], as in the above discussed section, in this preferred alternative embodiment multiple block-wise (e.g. CTU-wise) chroma QP adaptations are conducted for each chroma component of each picture 12, wherein these multiple block-wise adapted chroma QP values QPC(BK) may be signaled to a decoder 20 in a novel, inventive fashion. Briefly summarizing, the block-wise chroma QP adaptation may be carried out in a manner identical to that described in the section above (picture-wise QP adaptation), except that each adaptation may be applied for a block or sub-block BK of the picture 12 at index K, instead of the entire picture 12: 1. When the -PerceptQPA or a novel -CtuChromaQpOffsets encoder configuration parameter is provided with a positive value, pps_ctu_chroma_qp_offsets_present_flag = 1 may be set in the coded PPS.
2. For each slice and chroma component c ∈ [Cb, Cr], the block-related chroma-QP QPC(BK) may be computed separately for each block or sub-block BK:
QPC(BK) = round(3β · log2(α · aC(BK) / aY(BK))) if α · aC(BK) > aY(BK), and QPC(BK) = 0 otherwise,

where subscript Y identifies the luma component. Notice that aY(BK) equals the aK of [2].
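Purely as an illustration, the step-2 rule can be sketched in Python. The function name and the default values of α and β are hypothetical choices within the ranges stated for the embodiments (½ ≤ α < 5, 0 < β < 1), not values prescribed by the invention:

```python
import math

def chroma_qp_adaptation(a_c, a_y, alpha=1.0, beta=0.5):
    """Sketch of the block-wise chroma-QP value QPC(BK):
    round(3 * beta * log2(alpha * a_c / a_y)) when the (scaled)
    chroma activity exceeds the luma activity, and zero otherwise.
    alpha and beta defaults are illustrative only."""
    if alpha * a_c > a_y:
        return round(3 * beta * math.log2(alpha * a_c / a_y))
    return 0  # chroma no more active than luma: no QP increase
```

For example, with a chroma activity four times the luma activity and the illustrative defaults, the chroma QP is raised by 3.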
3. A block-wise adapted luma-QP QPY(BK) may be obtained, e.g. by using the āpic according to [2]:
QPY(BK) = QPK, e.g. of [2].
4. A difference d between QPY(BK) and the value mapping of QPY(BK) may be defined according to [10]:

d = QPY(BK) − QpC0(QPY(BK)),

QpC0 := chroma QP lookup via HEVC Table 8-10. Note: steps 3 and 4 are optional. They can be bypassed by assuming a zero difference, i.e., d = 0.
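A sketch of steps 3-4 using the HEVC chroma QP mapping (Table 8-10 of [10], given here for the 4:2:0 case) might look as follows; the function names are illustrative only:

```python
def hevc_chroma_qp_from_index(q_pi):
    """Chroma QP as a function of the QP index, per HEVC
    Table 8-10 (4:2:0 case): identity below 30, a lookup
    between 30 and 43, and q_pi - 6 above 43."""
    table = {30: 29, 31: 30, 32: 31, 33: 32, 34: 33, 35: 33,
             36: 34, 37: 34, 38: 35, 39: 35, 40: 36, 41: 36,
             42: 37, 43: 37}
    if q_pi < 30:
        return q_pi
    if q_pi > 43:
        return q_pi - 6
    return table[q_pi]

def mapping_difference(qp_y):
    # d = QPY(BK) - QpC0(QPY(BK)); since steps 3 and 4 are
    # optional, d may simply be assumed zero instead.
    return qp_y - hevc_chroma_qp_from_index(qp_y)
```

At a luma QP of 37, for instance, the mapped chroma QP is 34, so d = 3.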
5. A block-related offset OC(BK) may be set according to OC(BK) = min(Omax + d; QPC(BK) + d) with Omax = 3. This chroma QP adaptation effectively lowers the coding rate of the chroma sub-block K when its mean visual activity is quite high compared to the luma channel's visual activity in the same sub-block K. Unlike the first proposal of the above discussed section (picture-wise QP adaptation), this chroma QP adaptation is performed in a block-wise manner, not only in a picture-wise fashion.
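The clipping of step 5 is a one-liner; sketched under the same illustrative naming:

```python
def block_chroma_qp_offset(qp_c, d=0, o_max=3):
    """Step 5: Oc(BK) = min(Omax + d, QPc(BK) + d), with
    Omax = 3 as stated in the text. d defaults to zero, the
    optional bypass of steps 3 and 4."""
    return min(o_max + d, qp_c + d)
```

The offset is therefore capped at Omax + d, so even a very active chroma block cannot be coarsened beyond that limit.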
6. The block-related chroma QP offset OC(BK) may be applied in the encoding and decoding of the image or video. For example, use QPY(BK) + OC(BK) in the encoder-side quantization and decoder-side dequantization (also called inverse quantization) of any prediction residual in the given chroma component c.
7. In order to allow for correct decoder-side dequantization according to this proposal, the block-related chroma QP data QPY(BK) + OC(BK), or at least the block-related offsets OC(BK), may be conveyed from the encoder 10 to the decoder 20. Therefore, for each block or sub-block (e.g., CTU or CU), either a final block-related chroma QPindex(BK) = QPY(BK) + OC(BK) or a block-related chroma QP offset value OC(BK) may be transmitted as an additional part of the coded bit-stream 14. Note that both the final QP indices and the chroma QP offsets could be transmitted as entropy coded QP value differences (i.e., delta-QPs) according to the state of the art known from, e.g., HEVC [1], [2].
In summary, the additional derivation and signaling of a block-wise adapted QP offset value OC(BK) for a chroma component c represents a separate, independent quantization parameter adaptation and signaling for at least two components (one luma and at least one chroma) of a multi-component image/video according to the invention. Summarizing, in the above discussed block-wise approach the activity value of the luma component may be a block-related luma activity value being valid for at least one block or sub-block BK of the picture 12, i.e.: aY = aY(BK), and the luma-QP may be a block-related luma-QP being valid for at least one block or sub-block BK of the picture 12, i.e.:
QPK = QPY(BK)
Additionally, the activity value of the chroma component may be a block-related chroma activity value being valid for at least one block or sub-block BK of the picture 12, i.e.: aC = aC(BK), and the chroma-QP may be a block-related chroma-QP being valid for at least one block or sub-block BK of the picture 12, i.e.:
QPC = QPC(BK).
The above discussed inventive principles of picture-wise and block-wise chroma QP adaptations may preferably be applied to a multi-component picture. For example, a multi-component picture may comprise at least one luma component, and additionally one or more color components, i.e. one or more chroma components. For example, the inventive principle may be applied to pictures being coded in the YCbCr color space. In case of block-based coding, each block may comprise a luma coding block and one or more chroma coding blocks.
As mentioned above, the picture-wise or block-wise computed luma-dependent adaptive chroma quantization parameter QPC may be signaled to the decoder 20 in the data stream 14. For this, a chroma-QP offset-value Oc or a chroma-QP index may be signaled in the data stream 14, as discussed above.
Figure 5 shows a decoder 20 according to an embodiment. The decoder 20 may comprise a dequantizer 52, as explained with reference to Figure 2. The dequantizer 52 and, thus, the decoder 20 may be configured to derive from the data stream 14 the above discussed quantization parameter QPK for the luma component in units of blocks BK of the picture 12, and the above discussed quantization parameter QPC for the chroma component in units of further blocks of the picture 12. Said further blocks may represent a single block or sub-block (e.g. an above described block-wise QP adaptation) or a plurality of blocks arranged in a certain order (e.g. an above described picture-wise QP adaptation, e.g. slice-wise) or the entire picture 12 (e.g. an above described picture-wise QP adaptation, e.g. frame-wise).
As discussed above, the quantization parameter QPC for the chroma component may have been computed by the encoder 10 based on a ratio between the activity aY of the luma component and the activity ac of the chroma component. Thus, the activity values aY, ac are depicted in hatched lines in Figure 5.
If the derived chroma-QP QPC has been computed picture-wise, then the activity value of the luma component may be a picture-wise mean activity value āY over one or more block-wise activity values aK of the luma component. Furthermore, the quantization parameter for the chroma component may be a picture-related mean chroma-QP QPC. According to this embodiment, the decoder 20 may be configured to apply, during decoding, the picture-related quantization parameter QPC for the chroma component on a picture-wise basis, i.e. the adaptive chroma-QP QPC may be applied onto the entire picture 12, which may be a single still image, wherein the adaptive chroma-QP QPC may be applied slice-by-slice, or the picture 12 may comprise a plurality of consecutive frames, wherein the adaptive chroma-QP QPC may be applied frame-by-frame.
Accordingly, the decoder 20 may be configured to apply, during decoding, the picture-related quantization parameter QPC for the chroma component on a frame-by-frame basis, wherein the picture-related quantization parameter QPC is to be applied onto at least one chroma component contained in at least one frame of a moving picture sequence, or on a slice-by-slice basis, wherein the picture-related quantization parameter QPC is to be applied onto at least one chroma component contained in at least one slice of the picture 12.
In this case, the picture-related quantization parameter QPC for the chroma component may be indicated in the data stream 14 by means of a) the above described picture-related quantization-parameter-offset-value OC only, or b) the above described picture-related QPindex value calculated from the picture-related mean quantization parameter QPY for the luma component and the picture-related quantization-parameter-offset-value OC of the chroma component according to equation:

QPindex = QPY ± OC.
As discussed above, the quantization parameter offset value may be a picture-related quantization parameter offset value OC indicating a frame-wise or a slice-wise calculated offset between a picture-related quantization parameter QPC for the chroma component and a picture-related mean quantization parameter QPY for the luma component, the picture-related mean quantization parameter QPY comprising one or more block-wise computed quantization parameters QPK for the luma component.
Alternatively, if the derived chroma-QP QPC has been computed block-wise, then the activity value aY of the luma component may be a block-related activity value aY(BK) being computed on a block-wise basis, wherein the quantization parameter QPC for the chroma component may be a block-related quantization parameter QPC(BK) being computed on a block-wise basis, and wherein the decoder 20 may be configured to apply, during decoding, the block-related chroma-QP QPC(BK) on at least one chroma component contained in at least one block or sub-block BK of the picture 12.
In this case, the block-related quantization parameter QPC(BK) for the chroma component may be indicated in the data stream 14 by means of a) the above described block-related quantization-parameter-offset-value OC(BK) only, or b) the above described block-related QPindex(BK) calculated from the block-related quantization parameter QPY(BK) for the luma component and the block-related quantization-parameter-offset-value OC(BK) of the chroma component according to equation:

QPindex(BK) = QPY(BK) ± OC(BK), wherein the block-related quantization parameter offset value OC(BK) indicates a block-wise calculated offset between the block-related quantization parameter QPC(BK) for the chroma component and the block-related quantization parameter QPY(BK) for the luma component.
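On the decoder side, recovering the block-related chroma QP from either signaling variant could be sketched as follows. The helper is hypothetical; the document's "±" admits both signs, and "+" is used here:

```python
def decode_block_chroma_qp(qp_y_bk, offset=None, qp_index=None):
    """Recover the chroma QP for a block from either signaling
    variant: (a) the offset Oc(BK) alone, added to the signaled
    luma QP QPY(BK), or (b) the absolute index
    QPindex(BK) = QPY(BK) + Oc(BK)."""
    if qp_index is not None:
        return qp_index                 # variant (b): index signaled
    if offset is not None:
        return qp_y_bk + offset         # variant (a): offset signaled
    raise ValueError("either offset or qp_index must be signaled")
```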
Figure 6 shows a block diagram of a method for encoding a picture 12.
In block 61 a quantization parameter QPK for a luma component may be computed in units of blocks BK of the picture, using an activity measure of the luma component.
In block 62 a quantization parameter QPC for a chroma component may be computed based on a ratio aC/aY between an activity value aY of the luma component and an activity value aC of the chroma component.
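The activity measure itself is defined in [2]. Purely as a plausible stand-in consistent with the Laplace-filter reference [8] — not the patented definition — a per-block spatial activity could be computed as the mean absolute Laplacian response:

```python
def block_activity(samples, a_min=1.0):
    """Illustrative spatial-activity measure for one component
    block: mean absolute response of the 4-neighbour Laplace
    kernel, floored at a_min. `samples` is a 2-D list of pel
    values; border pel rows and columns are excluded, matching
    the frame-buffer convention of the claims."""
    h, w = len(samples), len(samples[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (samples[y - 1][x] + samples[y + 1][x]
                   + samples[y][x - 1] + samples[y][x + 1]
                   - 4 * samples[y][x])
            total += abs(lap)
            count += 1
    return max(a_min, total / count) if count else a_min
```

A flat block yields the floor value a_min, while edges and texture raise the activity, and hence the chroma QP, via the ratio aC/aY.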
Figure 7 shows a block diagram of a method for decoding a picture 12.
In block 71 a quantization parameter QPK for a luma component of the picture 12 is derived from the data stream 14.
In block 72 a quantization parameter QPC for a chroma component of the picture 12 is derived from the data stream 14. According to the inventive principle, the quantization parameter QPC for the chroma component is calculated based on a ratio aC/aY between an activity aY of the luma component and an activity aC of the chroma component.
The inventive concept as described herein may additionally or alternatively be realized by the following embodiments:
1. Apparatus for encoding a picture, configured to
- compute a quantization parameter (QPK) for a luma component in units of blocks (BK) of the picture, using an activity measure (aK) of the luma component, and

- compute a quantization parameter (QPC) for a chroma component based on a ratio (aC/aY) of an activity of the luma component (aY) and an activity (aC) of the chroma component.
2. The apparatus according to embodiment 1, configured to compute the quantization parameter (QPC) for the chroma component in units larger than the blocks [slice-by-slice] or globally for the picture [frame-by-frame].
3. The apparatus according to embodiment 2, configured to insert the quantization parameter (QPC) for the chroma component in the units larger than the blocks [slice-by-slice] or globally for the picture [frame-by-frame] into the data stream.
4. The apparatus according to any one of embodiments 1 to 3, configured to insert an information on the quantization parameter for a luma component into a data stream for each of the blocks.
5. The apparatus according to any one of embodiments 1 to 4, configured to determine a value QPC of the quantization parameter for the chroma component (QPC) from the ratio aC/aY between a chroma channel's mean activity āC and a luma channel's mean activity āY, for each slice of the picture or the whole picture, depending on a logarithm of α · aC/aY if α · āC > āY and to be zero if α · āC < āY.

6. The apparatus according to any one of embodiments 1 to 5, configured to determine a value QPC of the quantization parameter for the chroma component (QPC) for each slice of the picture or the whole picture, wherein

QPC = round(3β · log2(α · āC / āY))

if α · āC > āY and to be zero if α · āC < āY, wherein α has a constant value between ½ ≤ α < 5.

7. The apparatus according to embodiment 6, wherein a weight exponent β has a value between 0 < β < 1, and/or wherein a base of the logarithm is 2.

8. The apparatus according to any one of embodiments 1 to 7, configured to, in computing the quantization parameter (QPC) for the chroma component, compute a picture-wise or a slice-wise mean luma quantization parameter (QPY), and adapt the quantization parameter (QPC) for the chroma component using a luma-to-chroma-mapping function applied to the picture-wise or slice-wise mean luma quantization parameter (QPY).

9. The apparatus according to any one of embodiments 1 to 8, configured to compute the quantization parameter for the chroma component in units of further blocks of the picture.

10. The apparatus according to embodiment 9, wherein the further blocks coincide with the blocks (BK) or form super blocks each composed of one or more of the blocks (BK).

11. The apparatus according to any one of embodiments 1 to 10, configured to determine a value QPC(BK) of the quantization parameter for the chroma component (QPC) from the ratio aC(BK)/aY(BK) between a chroma channel's mean activity aC(BK) and a luma channel's mean activity aY(BK) for each block of the picture or the whole picture, depending on a logarithm of α · aC(BK)/aY(BK) if α · aC(BK) > aY(BK) and to be zero if α · aC(BK) < aY(BK).
12. The apparatus according to any one of embodiments 1 to 11, configured to determine a value QPC of the quantization parameter for each slice and chroma component QPC(BK) for each sub-block of the picture, wherein

QPC(BK) = round(3β · log2(α · aC(BK) / aY(BK)))

if α · aC(BK) > aY(BK) and to be zero if α · aC(BK) < aY(BK), wherein α has a (constant) value between ½ ≤ α < 5.
13. The apparatus according to embodiment 12, wherein a weight exponent β has a value between 0 < β < 1, and/or wherein a base of the logarithm is 2.
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.
Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus. Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software or at least partially in hardware or at least partially in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein. A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are preferably performed by any hardware apparatus.
The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
While this disclosure has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of this disclosure, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments.
References
[1] V. Sze, M. Budagavi, and G. J. Sullivan, High Efficiency Video Coding (HEVC) - Algorithms and Architectures, Cham, Switzerland: Springer International Publishing, 2014.

[2] International patent application WO 2019/057846 A1.

[3] V. Baroncini, "Results of Subjective Testing of Responses to the Joint CfP on Video Compression Technology with Capability beyond HEVC," JVET-J0080, San Diego, USA, Apr. 2018.

[4] A. Segall, V. Baroncini, J. Boyce, J. Chen, T. Suzuki, "Joint Call for Proposals on Video Compression with Capability beyond HEVC," JVET-H1002, Macao, China, Oct. 2017.

[5] J. Boyce, K. Suhring, X. Li, V. Seregin, "JVET Common Test Conditions and Software Reference Configurations," JVET-J1010, San Diego, USA, Apr. 2018.

[6] A. Segall, E. Francois, D. Rusanovskyy, "JVET Common Test Conditions and Evaluation Procedures for HDR/WCG Video," JVET-J1011, San Diego, USA, Apr. 2018.

[7] P. Hanhart, J. Boyce, K. Choi, "JVET Common Test Conditions and Evaluation Procedures for 360° Video," JVET-J1012, San Diego, USA, Apr. 2018.

[8] Wikipedia, "Laplace-Filter (in German)," 2006. Online: https://de.wikipedia.org/wiki/Laplace-Filter

[9] J. Chen and E. Alshina, "Algorithm Description for Versatile Video Coding and Test Model 1 (VTM 1)," JVET-J1002, San Diego, USA, Apr. 2018.

[10] ITU-T Recommendation H.265 and ISO/IEC Int. Standard 23008-2, "High efficiency video coding," Geneva, Switzerland, Feb. 2018. Online: http://www.itu.int/rec/T-REC-H.265

[11] J. Chen et al., "Algorithm Description of Joint Exploration Test Model 7 (JEM 7)," JVET-G1001, Torino, Italy, July 2017.

Claims

1. An apparatus (10) for encoding a picture (12), configured to compute a quantization parameter (QPK) for a luma component in units of blocks (BK) of the picture (12), using an activity measure of the luma component, and compute a quantization parameter (QPC) for a chroma component based on a ratio (aC/aY) between an activity value (aY) of the luma component and an activity value (aC) of the chroma component.

2. The apparatus (10) according to claim 1, wherein at least one of the quantization parameter (QPK) for the luma component and the quantization parameter (QPC) for the chroma component is adaptively computed so as to be adaptable to at least one of a current content of the picture (12) or a current content of at least one block or sub-block (80, 82, 84) of the picture (12).
3. The apparatus (10) according to claim 1 or 2, wherein the quantization parameter (QPC) for the chroma component is computed picture-wise such that the quantization parameter (QPC) is a picture-related quantization parameter, wherein the activity value (aY) of the luma component is a picture-related mean activity value (āY) over one or more block-related activity values (aK) of the luma component.
4. The apparatus (10) according to claim 3, wherein the picture-related quantization parameter (QPC) for the chroma component is computed on a frame-by-frame basis, wherein the picture-related quantization parameter (QPC) is computed for at least one chroma component contained in at least one frame of a moving picture sequence, or on a slice-by-slice basis, wherein the picture-related quantization parameter (QPC) is computed for at least one chroma component contained in at least one slice of the picture (12).
5. The apparatus (10) according to claim 3 or 4, wherein the picture-related quantization parameter (QPC) of the chroma component is computed according to the following equation:

QPC = round(3β · log2(α · āC / āY)), if α · āC > āY, and QPC = 0 otherwise, wherein

k ∈ [Y, Cb, Cr], 0 ≤ β ≤ 1, subscript Y denotes the luma component, Pk is the component's frame buffer, excluding the border pel rows and columns, and α is a constant.

6. The apparatus (10) according to any one of claims 1 to 5, wherein the apparatus (10) is configured to determine for the luma component a picture-related mean quantization parameter (QPY) over one or more block-related quantization parameters (QPK) for the luma component, and determine for the chroma component a picture-related quantization-parameter-offset-value (OC) indicating a frame-wise or a slice-wise offset of the picture-related quantization parameter (QPC) for the chroma component relative to the picture-related mean quantization parameter (QPY) for the luma component, and to signal the picture-related quantization parameter (QPC) for the chroma component in the data stream (14) by means of:

a) the picture-related quantization-parameter-offset-value (OC) only, or

b) a picture-related quantization parameter index value (QPindex) calculated from the picture-related mean quantization parameter (QPY) for the luma component and the picture-related quantization-parameter-offset-value (OC) of the chroma component according to equation:

QPindex = QPY ± OC.
7. The apparatus (10) according to claim 6, wherein the picture-related quantization-parameter-offset-value (OC) is computed according to

OC = min(Omax + d; QPC + d), wherein Omax is a variable upper offset-limit of the picture-related quantization parameter (QPC) for the chroma component relative to the picture-related mean quantization parameter (QPY) for the luma component, and d is a difference between the picture-related mean quantization parameter (QPY) for the luma component and a fixed luma-to-chroma-mapping.

8. The apparatus (10) according to claim 7, wherein the difference d is zero.
9. The apparatus (10) according to claims 1 or 2, wherein the activity value (aY) of the luma component is a block-related activity value (aY(BK)) being computed on a block-wise basis, and wherein the quantization parameter (QPC) for the chroma component is a block-related quantization parameter (QPC(BK)) being computed on a block-wise basis, wherein the block-related quantization parameter (QPC(BK)) is computed for at least one chroma component contained in at least one block or sub-block (80, 82, 84) of the picture (12).
10. The apparatus (10) according to claim 9, wherein the block-related quantization parameter (QPC(BK)) of the chroma component is computed according to the following equation:

QPC(BK) = round(3β · log2(α · aC(BK) / aY(BK))), if α · aC(BK) > aY(BK), and QPC(BK) = 0 otherwise,

with aK(BK) = max(amin, (1/|PK|) · Σ[x,y]∈PK |X[x, y]|), wherein

(BK) represents a block of the picture (12), k ∈ [Y, Cb, Cr], 0 ≤ β < 1, subscript Y denotes the luma component, PK is the component's frame buffer, excluding the border pel rows and columns, and α is a constant.

11. The apparatus (10) according to claim 9 or 10, wherein the quantization parameter (QPK) for the luma component is a block-related quantization parameter (QPY(BK)), and wherein the apparatus (10) is configured to determine for the chroma component a block-related quantization-parameter-offset-value (OC(BK)) indicating a block-wise offset of the block-related quantization parameter (QPC(BK)) for the chroma component relative to the block-related quantization parameter (QPY(BK)) for the luma component, and to signal the block-related quantization parameter (QPC(BK)) for the chroma component in the data stream (14) by means of:

a) the block-related quantization-parameter-offset-value (OC(BK)) only, or

b) a block-related quantization parameter index value (QPindex(BK)) calculated from the block-related quantization parameter (QPY(BK)) for the luma component and the block-related quantization-parameter-offset-value (OC(BK)) of the chroma component according to equation:

QPindex(BK) = QPY(BK) ± OC(BK).
12. The apparatus (10) according to claim 11, wherein the block-related quantization-parameter-offset-value (OC(BK)) is computed according to

OC(BK) = min(Omax + d; QPC(BK) + d), wherein Omax is a variable upper offset-limit of the block-related quantization parameter (QPC(BK)) for the chroma component relative to the block-related quantization parameter (QPY(BK)) for the luma component, and d is a difference between the block-related quantization parameter (QPY(BK)) for the luma component and a fixed luma-to-chroma-mapping.

13. The apparatus (10) according to claim 12, wherein the difference d is zero.
14. An apparatus (20) for decoding a picture (12), configured to derive from a data stream (14) a quantization parameter (QPK) for a luma component in units of blocks of the picture (12), and a quantization parameter for a chroma component (QPC) in units of further blocks of the picture (12).

15. The apparatus (20) according to claim 14, wherein the quantization parameter (QPK) for the luma component and the quantization parameter (QPC) for the chroma component are computed according to one of claims 1 to 13.
16. The apparatus (20) according to claim 14 or 15, wherein the quantization parameter (QPC) for the chroma component is computed based on a ratio (aC/aY) between an activity (aY) of the luma component and an activity (aC) of the chroma component.
17. The apparatus (20) according to claim 16, wherein the activity value (aY) of the luma component is a picture-related mean activity value (āY) over one or more block-related activity values (aK) of the luma component, wherein the quantization parameter (QPC) for the chroma component is computed picture-wise such that the quantization parameter (QPC) is a picture-related quantization parameter, and wherein the apparatus is configured to apply, during decoding, the picture-related quantization parameter (QPC) for the chroma component on a picture-wise basis.
18. The apparatus (20) according to claim 17, wherein the apparatus (20) is configured to apply, during decoding, the picture-related quantization parameter (QPC) for the chroma component on a frame-by-frame basis, wherein the picture-related quantization parameter (QPC) is to be applied onto at least one chroma component contained in at least one frame of a moving picture sequence, or on a slice-by-slice basis, wherein the picture-related quantization parameter (QPC) is to be applied onto at least one chroma component contained in at least one slice of the picture (12).

19. The apparatus (20) according to any one of claims 16 to 18, wherein the picture-related quantization parameter (QPC) for the chroma component is indicated in the data stream (14) by means of

a) a picture-related quantization-parameter-offset-value (OC) only, or

b) a picture-related quantization parameter index value (QPindex) being calculated from a picture-related mean quantization parameter (QPY) for the luma component and the picture-related quantization-parameter-offset-value (OC) of the chroma component according to equation:

QPindex = QPY ± OC,

wherein the quantization parameter offset value (OC) is a picture-related quantization parameter offset value indicating a frame-wise or a slice-wise calculated offset between a picture-related quantization parameter (QPC) for the chroma component and the picture-related mean quantization parameter (QPY) for the luma component, the picture-related mean quantization parameter (QPY) comprising one or more block-related quantization parameters (QPK) for the luma component.
20. The apparatus (20) according to claim 16, wherein the activity value (aY) of the luma component is a block-related activity value (aY(BK)) being computed on a block-wise basis, wherein the quantization parameter (QPC) for the chroma component is a block-related quantization parameter (QPC(BK)) being computed on a block-wise basis, and wherein the apparatus (20) is configured to apply, during decoding, the block-related quantization parameter (QPC(BK)) on at least one chroma component contained in at least one block or sub-block (80, 82, 84) of the picture (12).
21. The apparatus (20) according to claim 20, wherein the block-related quantization parameter (QPC(BK)) for the chroma component is indicated in the data stream (14) by means of

a) a block-related quantization-parameter-offset-value (OC(BK)) only, or

b) a block-related quantization parameter index value (QPindex(BK)) calculated from the block-related quantization parameter (QPY(BK)) for the luma component and the block-related quantization-parameter-offset-value (OC(BK)) of the chroma component according to equation:

QPindex(BK) = QPY(BK) ± OC(BK),

wherein the quantization parameter offset value (OC) is a block-related quantization parameter offset value (OC(BK)) indicating a block-wise calculated offset between the block-related quantization parameter (QPC(BK)) for the chroma component and the block-related quantization parameter (QPY(BK)) for the luma component.
22. A method for encoding a picture (12), the method comprising steps of computing a quantization parameter (QPK) for a luma component in units of blocks (BK) of the picture (12), using an activity measure of the luma component, and computing a quantization parameter (QPC) for a chroma component based on a ratio (aC/aY) between an activity value (aY) of the luma component and an activity value (aC) of the chroma component.
23. A method for decoding a picture (12), the method comprising steps of deriving from a data stream (14) a quantization parameter (QPK) for a luma component of the picture, and a quantization parameter (QPC) for a chroma component of the picture, wherein the quantization parameter (QPC) for the chroma component is calculated based on a ratio (aC/aY) between an activity (aY) of the luma component and an activity (aC) of the chroma component.
24. A data stream (14) being obtainable by a method according to claim 22 or 23.

25. A computer readable digital storage medium having stored thereon a computer program having a program code for performing, when running on a computer, a method according to claim 22 or 23.
PCT/EP2019/067678 2018-07-02 2019-07-02 Encoder, decoder and method for adaptive quantization in multi-channel picture coding WO2020007827A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP18181299 2018-07-02
EP18181299.1 2018-07-02

Publications (1)

Publication Number Publication Date
WO2020007827A1 true WO2020007827A1 (en) 2020-01-09

Family

ID=62975847

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/067678 WO2020007827A1 (en) 2018-07-02 2019-07-02 Encoder, decoder and method for adaptive quantization in multi-channel picture coding

Country Status (1)

Country Link
WO (1) WO2020007827A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114173116A (en) * 2021-11-26 2022-03-11 中山大学 Adaptive quantization method based on Laplace filter

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08102968A (en) * 1994-09-29 1996-04-16 Sony Corp Image signal coding method and image signal coder
US20070092001A1 (en) * 2005-10-21 2007-04-26 Hiroshi Arakawa Moving picture coding apparatus, method and computer program
US20090296808A1 (en) * 2008-06-03 2009-12-03 Microsoft Corporation Adaptive quantization for enhancement layer video coding
US20130077676A1 (en) * 2010-06-11 2013-03-28 Kazushi Sato Image processing device and method
WO2019057846A1 (en) 2017-09-21 2019-03-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Concept for varying a coding quantization parameter across a picture, coding quantization parameter adjustment, and coding quantization parameter adaptation of a multi-channel picture

Non-Patent Citations (12)

* Cited by examiner, † Cited by third party
Title
"ITU-T Recommendation H.265 and ISO/IEC International Standard 23008-2", High Efficiency Video Coding, February 2018, Retrieved from the Internet <URL:http://www.itu.int/rec/T-REC-H.265>
A. Segall, E. François, D. Rusanovskyy: "JVET Common Test Conditions and Evaluation Procedures for HDR/WCG Video", JVET-J1011
A. Segall, V. Baroncini, J. Boyce, J. Chen, T. Suzuki: "Joint Call for Proposals on Video Compression with Capability beyond HEVC", JVET-H1002, October 2017
C. Helmrich (Fraunhofer) et al.: "AHG10: Improved perceptually optimized QP adaptation and associated distortion measure", JVET-K0206, 2 July 2018, XP030198723, Retrieved from the Internet <URL:http://phenix.int-evry.fr/jvet/doc_end_user/documents/11_Ljubljana/wg11/JVET-K0206-v1.zip JVET-K0206-v1.pdf> [retrieved on 2018-07-02] *
J. Boyce, K. Sühring, X. Li, V. Seregin: "JVET Common Test Conditions and Software Reference Configurations", JVET-J1010, April 2018
J. Chen et al.: "Algorithm Description of Joint Exploration Test Model 7 (JEM 7)", JVET-G1001, July 2017
J. Chen, E. Alshina: "Algorithm Description for Versatile Video Coding and Test Model 1 (VTM 1)", JVET-J1002, April 2018
P. Hanhart, J. Boyce, K. Choi: "JVET Common Test Conditions and Evaluation Procedures for 360° Video", JVET-J1012, April 2018
"Perceptually optimized QP adaptation and associated distortion measure", 8th JVET Meeting, 18-25 October 2017, Macau (The Joint Video Exploration Team of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16), URL: http://phenix.int-evry.fr/jvet/, JVET-H0047, 10 October 2017, XP030151035 *
V. Baroncini: "Results of Subjective Testing of Responses to the Joint CfP on Video Compression Technology with Capability beyond HEVC", JVET-J0080, April 2018
V. Sze, M. Budagavi, G. J. Sullivan: "High Efficiency Video Coding (HEVC) - Algorithms and Architectures", Springer International Publishing, 2014
Wikipedia: "Laplace-Filter" (in German), 2006, Retrieved from the Internet <URL:https://de.wikipedia.org/wiki/Laplace-Filter>

Similar Documents

Publication Publication Date Title
JP7228012B2 (en) Method and apparatus for determining a quantization parameter predictor from multiple adjacent quantization parameters
KR102130480B1 (en) Method and device for optimizing encoding/decoding of compensation offsets for a set of reconstructed samples of an image
US11212534B2 (en) Methods and apparatus for intra coding a block having pixels assigned to groups
CN102428702B (en) For the method and apparatus of the offset adjusted that rounds off for the improvement quantification of Video coding and decoding
JP6355715B2 (en) Encoding device, decoding device, encoding method, decoding method, and program
US9277227B2 (en) Methods and apparatus for DC intra prediction mode for video encoding and decoding
EP2478702A1 (en) Methods and apparatus for efficient video encoding and decoding of intra prediction mode
JP2013507086A (en) Method and apparatus for adjusting embedded quantization parameter in video encoding and decoding
KR20110113720A (en) Video encoding techniques
WO2020007827A1 (en) Encoder, decoder and method for adaptive quantization in multi-channel picture coding
EP3151560B1 (en) Intra-coding mode-dependent quantization tuning
CN112020860B (en) Encoder, decoder and methods thereof for selective quantization parameter transmission
KR101699460B1 (en) Apparatus and Method of Context-Adaptive Quantization and Inverse Quantizationfor IPCM block
KR101699461B1 (en) Apparatus and Method of Context-Adaptive Quantization and Inverse Quantizationfor IPCM block
KR101699458B1 (en) Apparatus and Method of Context-Adaptive Quantization and Inverse Quantizationfor IPCM block
WO2024042098A1 (en) Encoding and decoding a picture using filtering blocks
KR20160053878A (en) Apparatus and Method of Context-Adaptive Quantization and Inverse Quantizationfor IPCM block
KR20140079519A (en) Quantization parameter coding method using average quantization parameter

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19734098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19734098

Country of ref document: EP

Kind code of ref document: A1