GB2506852A - Determining the Value of a Chroma Quantisation Parameter - Google Patents


Info

Publication number
GB2506852A
GB2506852A (application GB201217444A)
Authority
GB
United Kingdom
Prior art keywords
value
image
quantization parameter
chroma
image portion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB201217444A
Other versions
GB2506852B (en)
GB201217444D0 (en)
Inventor
Edouard Francois
Christophe Gisquet
Guillaume Laroche
Patrice Onno
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB1217444.7A priority Critical patent/GB2506852B/en
Publication of GB201217444D0 publication Critical patent/GB201217444D0/en
Publication of GB2506852A publication Critical patent/GB2506852A/en
Application granted granted Critical
Publication of GB2506852B publication Critical patent/GB2506852B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • H04N19/126Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The value of a quantization parameter (QP) for at least one chroma component (QPC) of an image or image portion is determined based on the value of an intermediate QP (QPI). That QPI is itself based on the value of the QP of the corresponding luma component of the image or image portion. The relation between the value of QPI and the value of QPC is based on at least one adaptive parameter, e.g. implemented as a unique equation or a table. The prior art used a fixed relation between QPI and QPC, thereby not allowing flexibility for setting the QPC values. The disclosure uses adaptive parameters to more finely control the link between the chroma and luma QPs via the intermediate QP. The chroma QP may also be used as a deblocking QP for a deblocking filter.

Description

METHOD AND DEVICE FOR DETERMINING THE VALUE OF A
QUANTIZATION PARAMETER
The present invention relates to a method and device for determining the value of a quantization parameter. Such a method and device can be used to provide an improvement of the quantization of the luma and chroma components in the core part of the emerging video codec HEVC (for "High Efficiency Video Coding").
An image or image portion comprises several color components: a luma component and two chroma components. The image, and in particular its color components, are associated with quantization parameters, well known to the person skilled in the art. Those quantization parameters are used in the quantization process during image encoding and in the de-quantization process during both image encoding and decoding. Those processes are described in more detail with reference to figures 1, 2 and 3.
In the following description, the words image and picture have the same meaning.
First the coding structure is shown in figure 1. More precisely, figure 1 shows the coding structure used in HEVC. According to HEVC and one of its predecessors, the original video sequence 101 is a succession of digital images "images i". As is known per se, a digital image is represented by one or more matrices, the coefficients of which represent pixels.
The images 102 are divided into slices 103. A slice is a part of the image or the entire image. In HEVC these slices are divided into non-overlapping Largest Coding Units (LCUs), also called Coding Tree Blocks (CTB) 104, generally blocks of size 64 pixels x 64 pixels. Each CTB may in its turn be iteratively divided into smaller variable-size Coding Units (CUs) 105 using a quadtree decomposition. Coding units are the elementary coding elements and are constituted of two sub-units, which are Prediction Units (PU) and Transform Units (TU), of maximum size equal to the CU's size. A Prediction Unit corresponds to the partition of the CU for the prediction of pixel values. Each CU can be further partitioned into a maximum of 4 square Partition Units or 2 rectangular Partition Units 106. Transform Units are used to represent the elementary units that are spatially transformed with a DCT. A CU can be partitioned into TUs based on a quadtree representation (107).
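The quadtree decomposition described above can be illustrated with a minimal sketch. The split-decision function here is a placeholder standing in for a real encoder's rate-distortion criterion; function and variable names are illustrative, not from the specification.

```python
# Hypothetical sketch of a CTB quadtree decomposition: a 64x64 block is
# recursively split into four square sub-blocks when a split-decision
# callback says so. A real encoder decides via a rate-distortion criterion.

def split_ctb(x, y, size, should_split, min_size=8):
    """Return the list of (x, y, size) coding units covering the block."""
    if size > min_size and should_split(x, y, size):
        half = size // 2
        cus = []
        for dy in (0, half):
            for dx in (0, half):
                cus.extend(split_ctb(x + dx, y + dy, half, should_split, min_size))
        return cus
    return [(x, y, size)]

# Example: split the CTB once, then split only its top-left quadrant again,
# yielding three 32x32 CUs and four 16x16 CUs.
cus = split_ctb(0, 0, 64,
                lambda x, y, s: s == 64 or (s == 32 and x == 0 and y == 0))
```

The leaves of this recursion are the CUs; in HEVC each leaf would then carry its own PU and TU partitioning.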
Each slice is embedded in one NAL unit. In addition, the coding parameters of the video sequence are stored in dedicated NAL units called parameter sets. In HEVC and H.264/AVC two kinds of parameter set NAL units are employed: first, the Sequence Parameter Set (SPS) NAL unit that gathers all parameters that are unchanged during the whole video sequence.
Typically, it handles the coding profile, the size of the video frames and other parameters. Secondly, the Picture Parameter Set (PPS) codes the different values that may change from one frame to another. In previous versions of the HEVC specification, HEVC also included Adaptation Parameter Sets (APS), which contain parameters that may change from one slice to another. The most recent HEVC specification, also named HM8, does not support this feature.
Figure 2 shows a diagram of a classical HEVC video encoder 20 that can be considered as a superset of one of its predecessors (H.264/AVC).
Each frame of the original video sequence 101 is first divided into a grid of coding units (CU) during stage 201. This stage also controls the definition of coding and entropy slices. In general, two methods define slice boundaries: either a fixed number of CUs per slice (entropy or coding slices) or a fixed number of bytes.
The subdivision of the LCU into CUs and the partitioning of the CUs into TUs and PUs are determined based on a rate-distortion criterion. Each PU of the CU being processed is predicted spatially by an "Intra" predictor 217, or temporally by an "Inter" predictor (218). Each predictor is a block of pixels issued from the same image or another image, from which a difference block (or "residual") is derived. Thanks to the identification of the predictor block and the coding of the residual, it is possible to reduce the quantity of information actually to be encoded.
The encoded frames are of two types: temporally predicted frames (either predicted from one reference frame, called P-frames, or predicted from two reference frames, called B-frames) and non-temporally predicted frames (called Intra frames or I-frames). In I-frames, only Intra prediction is considered for coding CUs/PUs. In P-frames and B-frames, Intra and Inter prediction are considered for coding CUs/PUs.
In the "Intra" prediction processing module 217, the current block is predicted by means of an "Intra" predictor, a block of pixels constructed from the information already encoded in the current image. The module 202 determines a spatial prediction mode that is used to predict pixels from the neighbouring PUs' pixels. In HEVC, up to 35 modes are considered. A residual block is obtained by computing the difference between the intra-predicted block and the current block of pixels.
An intra-predicted block is therefore composed of a prediction mode with a residual. The coding of the intra prediction mode is inferred from the prediction modes of the neighbouring prediction units. This inferring process (203) of the intra prediction mode makes it possible to reduce the coding rate of the intra prediction mode. The intra prediction processing module thus uses the spatial dependencies of the frame both for predicting the pixels and for inferring the intra prediction mode of the prediction unit. For entropy slices, prediction of the intra prediction mode is not allowed from neighbouring CUs that are not in the same entropy slice.
With regard to the second processing module 218, that is "Inter" coding, two prediction types are possible. Mono-prediction (P-type) consists of predicting the block by referring to one reference block from one reference picture. Bi-prediction (B-type) consists of predicting the block by referring to two reference blocks from one or two reference pictures. An estimation of motion 204 between the current PU and reference images 215 is made in order to identify, in one or several of these reference images, one (P-type) or several (B-type) blocks of pixels to use as predictors of the current block. In the case where several block predictors are used (B-type), they are merged to generate one single prediction block. The reference images are images in the video sequence that have already been coded and then reconstructed (by decoding).
The reference block is identified in the reference frame by a motion vector that is equal to the displacement between the PU in the current frame and the reference block. The next stage (205) of the inter prediction process consists of computing the difference between the prediction block and the current block. This block of differences is the residual of the inter-predicted block. At the end of the inter prediction process the current PU is composed of one motion vector and a residual.
Thanks to the spatial dependencies of movement between neighbouring PUs, HEVC provides a method to predict the motion vectors of each PU. Several motion vector predictors are employed: typically, the motion vectors of the PUs located at the top, the left or the top-left corner of the current PU form a first set of spatial predictors. A temporal motion vector candidate is also used, namely the motion vector of the collocated PU (i.e. the PU at the same coordinates) in a reference frame. The encoder selects one of the predictors based on a criterion that minimizes the difference between the MV predictor and the MV of the current PU. In HEVC, this process is referred to as AMVP (which stands for Adaptive Motion Vector Prediction).
Finally, the current PU's motion vector is coded (206) with an index that identifies the predictor within the set of candidates and an MV difference (MVD) between the PU's MV and the selected MV candidate. The inter prediction processing module thus also relies on spatial dependencies between the motion information of prediction units to increase the compression ratio of inter-predicted coding units. For entropy slices, the AMVP process considers neighbouring PUs as unavailable if they do not belong to the same slice.
These two types of coding thus supply several texture residuals (the difference between the current block and the predictor block), which are compared in a module 216 for selecting the best coding mode.
The residual obtained at the end of the inter or intra prediction process is then transformed (207). The transform applies to a Transform Unit (TU) that is included in a CU. A TU can be further split into smaller TUs using a so-called Residual QuadTree (RQT) decomposition 206. In HEVC, generally 2 or 3 levels of decomposition are used and the authorized transform sizes are 32x32, 16x16, 8x8 and 4x4. The transform basis is derived from a discrete cosine transform (DCT).
The residual transformed coefficients are then quantized (208). The coefficients of the quantized transformed residual are then coded by means of an entropy coding (209) and then inserted in the compressed bit stream 210.
Coding syntax elements are also coded with help of the stage 209. This processing module uses spatial dependencies between syntax elements to increase the coding efficiency.
In order to calculate the "Intra" predictors or to make an estimation of the motion for the "Inter" predictors, the encoder performs a decoding of the blocks already encoded by means of a so-called "decoding" loop (211, 212, 213, 214, 215). This decoding loop makes it possible to reconstruct the blocks and images from the quantized transformed residuals.
Thus the quantized transformed residual is dequantized (211) by applying the inverse of the quantization provided at step 208 and reconstructed (212) by applying the inverse of the transform of step 207.
The deblocking filter 213 makes it possible in particular to reduce the blocking artifacts due to the quantization 208 and de-quantization (or inverse quantization) 211 processes.
The invention concerns more particularly the quantization parameters and DBF quantization parameters which are involved in steps 208, 211 and 213, as explained in more detail below.
If the residual comes from an "Intra" coding 217, the "Intra" predictor used is added to this residual in order to recover a reconstructed block corresponding to the original block modified by the losses resulting from a lossy transformation, here the quantization operations.
If the residual on the other hand comes from an "Inter" coding 218, the blocks pointed to by the current motion vectors (these blocks belong to the reference images 215 referred by the current image indices) are merged then added to this decoded residual. In this way the original block is modified by the losses resulting from the quantization operations.
A final loop filter processing module 219 is applied to the reconstructed signal in order to reduce the effects created by heavy quantization of the residuals obtained and to improve the signal quality. In the current HEVC standard, two types of loop filters are used: the deblocking filter and the sample adaptive offset (SAO). Another filter, the adaptive loop filter (ALF), has been considered for the standard but in the current specification HM8 this filter is not included. The parameters of the filters are coded and transmitted in one header of the bitstream, typically the slice header. In previous versions of the HEVC specification, the filter parameters could also be coded and transmitted in the adaptation parameter set.
The filtered images, also called reconstructed images, are then stored as reference images 215 in order to allow the subsequent "Inter" predictions to take place during the compression of the following images of the current video sequence.
In the context of HEVC, it is possible to use several reference images 215 for the estimation and motion compensation of the current image. In other words, the motion estimation is carried out on N images. Thus the best "Inter" predictors of the current block, for the motion compensation, are selected in some of the multiple reference images. Consequently two adjoining blocks may have two predictor blocks that come from two distinct reference images. This is in particular the reason why, in the compressed bit stream, the index of the reference image (in addition to the motion vector) used for the predictor block is indicated.
The use of multiple reference images (the aforementioned VCEG group recommends limiting the number of reference images to four) is both a tool for resisting errors and a tool for improving the compression efficiency.
The resulting bitstream 210 of the encoder 20 is also composed of a set of NAL units that correspond to parameter sets and coding slices.
Figure 3 shows an overview of a classical video decoder 30 of HEVC type. The decoder 30 receives as an input a bit stream 210 corresponding to a video sequence 101 compressed by an encoder of the HEVC type, like the one in figure 2.
During the decoding process, the bit stream 210 is first of all parsed with the help of the entropy decoding module 301. This processing module uses the previously entropy-decoded elements to decode the encoded data. It decodes in particular the parameter sets of the video sequence to initialize the decoder, and also decodes the LCUs of each video frame. Each NAL unit that corresponds to coding slices or entropy slices is then decoded. The parsing process that consists of stages 301, 302 and 304 can be done in parallel for each slice, but the block prediction processing modules 305 and 303 and the loop filter module must be sequential to avoid issues of neighbouring data availability.
The partition of the LCU is parsed and the CU, PU and TU subdivisions are identified. Each CU is then successively processed with the help of the intra 307 and inter 306 processing modules, the inverse quantization and inverse transform modules, and finally the loop filter processing module 219.
The "Inter" or "Intra" prediction mode for the current block is parsed from the bit stream 210 with help of the parsing process module 301.
Depending on the prediction mode, either the intra prediction processing module 307 or the inter prediction processing module 306 is employed. If the prediction mode of the current block is of the "Intra" type, the intra prediction mode is extracted from the bit stream and decoded with the help of the neighbours' prediction modes during stage 304 of the intra prediction processing module 307. The intra-predicted block is then computed 303 with the decoded intra prediction mode and the already decoded pixels at the boundaries of the current PU. The residual associated with the current block is recovered from the bit stream 301 and then entropy decoded.
If the prediction mode of the current block indicates that this block is of the "Inter" type, the motion information is extracted from the bit stream 301 and decoded (304). The AMVP process is then performed. The motion information of neighbouring PUs already decoded is also used to compute the motion vector of the current PU. This motion vector is used in the reverse motion compensation module 305 in order to determine the "Inter" predictor block contained in the reference images 215 of the decoder 30. In a similar manner to the encoder, these reference images 215 are composed of images that precede the image currently being decoded and that are reconstructed from the bit stream (and therefore decoded previously).
The next decoding step consists of decoding the residual block that has been transmitted in the bitstream. The parsing module 301 extracts the residual coefficients from the bitstream and performs successively the inverse quantization 211 and inverse transform 212 to obtain the residual block. This residual block is added to the predicted block obtained at the output of the intra or inter processing module.
At the end of the decoding of all the blocks of the current image, the loop filter processing module 219 is used to eliminate the block effects and improve the signal quality in order to obtain the reference images 215. As done at the encoder, this processing module employs the deblocking filter 213, then the SAO filter 220 and finally the ALF 214. It should be noted that the ALF was applied in previous versions of the HEVC specification; in the latest version, the ALF has been removed. Nevertheless this tool is still considered in the description as an example of implementation.
The images thus decoded constitute the output video signal 308 of the decoder, which can then be displayed and used.
In HEVC, as in previous standards such as MPEG-4 AVC/H.264, the quantization is controlled by the so-called Quantization Parameter (QP). For a signal coded on 8 bits, the QP may vary between 0 and 51, from slice to slice and, inside a slice, from coding unit (CU) to CU.
The QP is used to quantize or de-quantize the coefficients resulting from the transform. This is similar to the following operation: CQ = INT[ C/QS + rounding offset ] (eq.1) where -CQ is the quantized version of the transform coefficient C, -QS is the quantization step (described below), -INT[x] is the nearest integer value of x, and -rounding offset is a value usually set to 0.5, but that can be set to other values between 0 and 1, to finely control the quantization process.
Inversely, the de-quantization is similar to the following operation: CIQ = CQ * QS (eq.2) where CIQ is the inverse-quantized version of the quantized transform coefficient CQ.
The quantization step QS is actually linked to the QP by an equation of the following type: QS = K * 2^(QP/6) (eq.3) where K is a pre-defined scaling factor.
It should be noticed that in practical terms, the implementation well known to the person skilled in the art in encoders and decoders is made using integer computations. This operation is implemented in a different way than (eq.3) in HEVC, but it provides very similar results, as shown in figure 4 which depicts the link between QP and QS with the actual current HEVC specification HM8 (solid-lined curve) and according to the equation (eq.3) (dash-lined curve).
In HEVC, as in previous standards such as MPEG-4 AVC/H.264, there is a link between the QP applied to the luma component (noted QPY) and the QP applied to the corresponding chroma components (noted QPC). For simplicity, we consider here only one chroma component, but of course it can concern both chroma components. The concept can be easily extended to several chroma components.
To generate QPC from QPY, the following process applies in the current HEVC specification HM8:
-computation of an intermediate value QPI as follows: QPI = MAX( -QPBdOffsetC, MIN( 57, QPY + QPOffsetC ) ) (eq.4) where o QPBdOffsetC is a pre-defined offset depending on the bit-depth used to represent the chroma component; it only depends on the video format and is defined only once for the whole video sequence including the considered picture; and o QPOffsetC is an offset signaled in the bitstream (classically in the picture or slice header) that enables to partly control the link between QPY and QPC.
Then, once the QPI value has been calculated, QPC is derived from the calculated QPI value using the following correspondence table CorrQP[QPI]:
QPI : <30 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | >43
QPC : QPI | 29 | 30 | 31 | 32 | 33 | 33 | 34 | 34 | 35 | 35 | 36 | 36 | 37 | 37 | QPI-6
For instance, if QPI takes the value 32 by using the equation 4 above, the value 31 is assigned to QPC.
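The HM8 derivation just described, (eq.4) followed by the CorrQP lookup, can be sketched as follows; the table values are those of the correspondence table above, and the offset values used in the example are illustrative.

```python
# Sketch of the HM8 chroma QP derivation: (eq.4) then CorrQP[QPI].

def derive_qpc(qpy, qp_bd_offset_c, qp_offset_c):
    # (eq.4): intermediate value QPI, clipped to [-QPBdOffsetC, 57]
    qpi = max(-qp_bd_offset_c, min(57, qpy + qp_offset_c))
    # Correspondence table CorrQP[QPI]
    if qpi < 30:
        return qpi
    if qpi > 43:
        return qpi - 6
    table = [29, 30, 31, 32, 33, 33, 34, 34, 35, 35, 36, 36, 37, 37]
    return table[qpi - 30]

# The example from the text: QPI = 32 maps to QPC = 31
# (here QPBdOffsetC = 12 and QPOffsetC = 0 are illustrative values).
qpc = derive_qpc(32, 12, 0)
```

Note the three regimes of the table: identity below 30, a flattened slope between 30 and 43, and a constant offset of 6 above 43, which matches the slope changes observed in figures 5a and 5b.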
As an example, figure 5a depicts QPC as a function of QPY for different values of QPY, with QPBdOffsetC=12, and for three values of QPOffsetC (-6, 0, 6).
Similarly, as an example, figure 5b depicts the ratio of the quantization steps of luma and chroma, QSY / QSC, as a function of QPY for different values of QPY, with QPBdOffsetC=12, and for three values of QPOffsetC (-6, 0, 6).
It is observed that: -there is a slope change in the curves linking the luma and chroma quantization parameters at two given QPY values (QPY=30 and QPY=43); -the parameter QPOffsetC modifies the horizontal position of these two points, as well as the minimum and maximum values (and therefore the range) of QSY / QSC.
In addition, the deblocking filter (see 213 in figure 2) applied to the chroma component does not take into account the value of QPOffsetC. Indeed, the deblocking process uses a QP to derive the strength of the filter. This QP derived to process the deblocking filter (noted QPDBF) is deduced as follows.
First, there is a computation of an intermediate value QPI' as follows: QPI' = MAX( -QPBdOffsetC, MIN( 57, QPYpred ) ), where QPYpred is an average QP value deduced from the neighbouring blocks of the current block being processed.
Then, there is a derivation of the QPDBF using the previously mentioned correspondence table with QPI' as entry point: QPDBF = CorrQP[ QPI' ].
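The chroma deblocking derivation can be sketched as below. The averaging of the two neighbouring QPs is a plausible placeholder for the QPYpred computation; the exact averaging rule is not detailed here, and QPBdOffsetC = 12 is an illustrative value.

```python
# Sketch of the chroma deblocking QP derivation: QPI' from QPYpred,
# then QPDBF = CorrQP[QPI']. Note that QPOffsetC never appears.

def corr_qp(qpi):
    # The correspondence table CorrQP from the description
    if qpi < 30:
        return qpi
    if qpi > 43:
        return qpi - 6
    return [29, 30, 31, 32, 33, 33, 34, 34, 35, 35, 36, 36, 37, 37][qpi - 30]

def derive_qp_dbf(qp_left, qp_right, qp_bd_offset_c=12):
    # QPYpred: an average QP of the blocks on either side of the edge
    # (rounding-average shown here is a placeholder assumption)
    qpy_pred = (qp_left + qp_right + 1) >> 1
    # QPI' = MAX(-QPBdOffsetC, MIN(57, QPYpred))
    qpi_prime = max(-qp_bd_offset_c, min(57, qpy_pred))
    return corr_qp(qpi_prime)
```

Because QPOffsetC is absent from this path, a non-zero QPOffsetC makes the deblocking QP diverge from the quantization QP, which is the mismatch the description points out.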
So QPOffsetC is not considered in these computations. This results in using a different QP for the chroma quantization and for the chroma deblocking, which may result in visual artefacts.
The correspondence table used for the QPC and the QPDBF supposes a fixed relation between QPI and QPC (or QPI' and QPDBF), once the value of QPI (QPI') has been calculated.
This does not allow any flexibility for setting the quantization parameter values of the chroma component. Moreover the implementation of a table is relatively complex.
The present invention has been devised to address one or more of the foregoing concerns.
According to a first aspect of the invention, there is provided a method for determining the value of a quantization parameter QPC for at least one chroma component of an image or image portion, based on the value of an intermediate quantization parameter QPI which is itself based on the value of the quantization parameter QPY of the corresponding luma component of the image or image portion, wherein the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the chroma component is based on at least one adaptive parameter.
Such a method can add new adaptive parameters to more finely control the link between the luma and chroma QPs.
One advantage of the invention is to provide more flexibility by using additional new parameters, signaled in the bitstream, to control the link between QPY and QPC, via the control of the link between QPI and QPC.
In one embodiment, the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the chroma component is implemented as a table which associates, to each possible value of the intermediate quantization parameter, a value of the quantization parameter of the chroma component, namely: -the value of the intermediate quantization parameter, if the value of the intermediate quantization parameter is below the value of the additional adaptive parameter; -a value belonging to a set of predetermined values if the value of the intermediate quantization parameter is equal to the value of the additional adaptive parameter plus a predetermined offset whose range is comprised between a minimal and a maximal value; or -the value of the intermediate quantization parameter minus a secondary predetermined offset if the value of the intermediate quantization parameter is above the value of the additional adaptive parameter plus the maximal value of the range of the predetermined offset.
In one embodiment, the minimal value of the predetermined offset range is equal to 0, the maximal value of the predetermined offset range is equal to 13, and the value of the secondary predetermined offset is equal to 6.
In one embodiment, the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the chroma component is implemented as a unique equation.
In other words, it is proposed to replace the correspondence table by a simple and generic equation. This new equation results in new correspondence points between QPC and QPI. This equation may use these new control parameters.
This embodiment provides a simplification of the current design by the replacement of a correspondence table (QPC as a function of QPI) by a unique, simple and generic equation.
In an embodiment, the equation implementing the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the chroma component is: QPC = QPI - MAX( minQPdelta, MIN( maxQPdelta, (QPI - QPstart) >> QPshift ) ) where -QPC is the value of the quantization parameter of the chroma component for the considered image or image portion, -QPI is the value of the intermediate quantization parameter for the same image or image portion, -minQPdelta and maxQPdelta are additional adaptive parameters which control the amplitude of the curves linking the variation of the values of the luma and chroma quantization parameters, -QPstart is an additional adaptive parameter that controls the position where a predefined slope's inclination of the curves linking the variation of the values of the luma and chroma quantization parameters changes, -QPshift is an additional adaptive parameter that controls the inclination angle of the slope change of the curves linking the variation of the values of the luma and chroma quantization parameters, and -">>" is a mathematical operator corresponding to the right shift of the binary representation of (QPI - QPstart) by QPshift bits. In an embodiment, the value of QPstart is comprised between -12 and +50.
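The generic equation above can be sketched with the adaptive parameters exposed as arguments. Python's `>>` on integers is an arithmetic (floor) shift, consistent with the right-shift operator of the equation.

```python
# Sketch of the proposed generic equation:
# QPC = QPI - MAX(minQPdelta, MIN(maxQPdelta, (QPI - QPstart) >> QPshift))

def qpc_from_qpi(qpi, min_qp_delta, max_qp_delta, qp_start, qp_shift):
    # The clipped, shifted delta replaces the correspondence table
    delta = max(min_qp_delta, min(max_qp_delta, (qpi - qp_start) >> qp_shift))
    return qpi - delta

# With the default parameter values of the embodiments (0, 6, 30, 1),
# the delta is 0 up to QPI = 30 and saturates at 6 from QPI = 43 onwards.
low = qpc_from_qpi(30, 0, 6, 30, 1)
high = qpc_from_qpi(43, 0, 6, 30, 1)
```

Varying QPstart shifts the slope-change points horizontally, and minQPdelta/maxQPdelta bound the achievable QPI - QPC difference, which is exactly the flexibility the adaptive parameters are meant to provide.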
In an embodiment, the value of QPshift is comprised between 0 and 2.
In an embodiment, the value of minQPdelta is equal to 0.
In an embodiment, the value of maxQPdelta is equal to 6.
In an embodiment, the value of QPshift is equal to 1.
In an embodiment, the value of QPstart is equal to 30.
According to another aspect of the invention, there is provided a method for determining the value of a quantization parameter for at least one chroma component of an image or image portion, based on the value of an intermediate quantization parameter which is further based on the value of the quantization parameter of the corresponding luma component of the image or image portion, wherein the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the chroma component is implemented as a unique equation.
In an embodiment, the equation is defined by:

QPC = QPI - MAX(0, MIN(6, (QPI - 30) >> 1))

where
-QPC is the value of the quantization parameter of the chroma component for an image or image portion,
-QPI is the value of the intermediate quantization parameter for the same image or image portion, and
-">>" is the mathematical operator corresponding to the right shift of the binary representation of its left operand by the number of bits given by its right operand.

In an embodiment, the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the luma component is implemented according to the following equation:

QPI = MAX(-QPBdOffsetC, MIN(57, QPY + QPOffsetC))

wherein
-QPY is the value of the quantization parameter of the luma component for the considered image or image portion,
-QPI is the value of the intermediate quantization parameter for the same image or image portion,
-QPBdOffsetC is a pre-defined offset depending on the bit-depth used to represent the chroma component, and
-QPOffsetC is an offset signaled in a bitstream embedding the compressed data of the considered image or image portion, which enables partial control of the link between QPC and QPY.
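As an illustration only (not the patent's own code), the two equations above can be combined into a small sketch; QPBdOffsetC = 0 is assumed here, corresponding to 8-bit chroma:

```python
def clip3(lo, hi, v):
    """Clip v to the inclusive range [lo, hi] (HEVC-style Clip3)."""
    return max(lo, min(hi, v))

def derive_qpc(qpy, qp_offset_c=0, qp_bd_offset_c=0):
    """Two-step chroma QP derivation described above.

    Step 1: QPI = MAX(-QPBdOffsetC, MIN(57, QPY + QPOffsetC)).
    Step 2: QPC = QPI - MAX(0, MIN(6, (QPI - 30) >> 1))  (fixed parameters).
    """
    qpi = clip3(-qp_bd_offset_c, 57, qpy + qp_offset_c)
    return qpi - clip3(0, 6, (qpi - 30) >> 1)
```

Below QPI = 30 the mapping is the identity; above it, QPC increases half as fast as QPI until the 6-unit gap saturates.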
In an embodiment, the chroma quantization parameter is also used as a deblocking quantization parameter for a deblocking filter.
Indeed, the unique generic equation can easily be incorporated in the computation of the QP for the chroma deblocking filtering. The new control parameters are thus taken into account for the deblocking, which results in improved video quality.
In an embodiment, an image is divided into blocks, and the value of the intermediate quantization parameter is implemented according to the following equation:

QPI' = MAX(-QPBdOffsetC, MIN(57, QPYpred))

where QPYpred is an average deblocking quantization parameter value deduced from the neighboring blocks of the current block being processed; and QPBdOffsetC is a pre-defined offset depending on the bit-depth used to represent the deblocking chroma component.
According to another aspect of the invention there is provided a method for encoding an image or image portion composed of a luma component and at least one corresponding chroma component, said components being divided into coding units, which forms part of an image sequence, wherein the method comprises: -determining the values of quantization parameters, which comprises determining the value of a quantization parameter for at least one chroma component as mentioned above, -encoding the successive coding units, the encoding comprising quantizing and dequantizing the luma and chroma components of the image or image portion by using the quantization parameters; and -generating a bitstream of encoded data.
In an embodiment, the encoding is based on a reference image, said reference image being filtered by a deblocking filter using deblocking chroma quantization parameters, determined as mentioned above.
According to another aspect of the invention, there is provided a method for decoding an image or image portion, which forms part of an image sequence, the method comprising: -receiving encoded data related to the image or image portion to decode; -decoding the encoded data, the decoding comprising determining the values of quantization parameters, which comprises determining the value of quantization parameters for at least one chroma component as mentioned above, and -reconstructing the decoded image or image portion from the decoded data, the reconstructing comprising dequantizing the luma and chroma components of the image or image portion by using the quantization parameters.
In an embodiment, the method further comprises filtering the decoded image or image portion with a deblocking filter, said deblocking filter using deblocking quantization parameters, determined as mentioned above.
According to another aspect of the invention, there is provided a computer program product for a programmable apparatus, the computer program product comprising a sequence of instructions for implementing a method as mentioned above when loaded into and executed by the programmable apparatus.
According to another aspect of the invention, there is provided a computer-readable storage medium storing instructions of a computer program for implementing a method, as mentioned above.
According to another aspect of the invention, there is provided a device for determining the value of a quantization parameter for at least one chroma component of an image or image portion, wherein the device implements a method as mentioned above.
In an embodiment, the device further comprises means for determining the value of a deblocking quantization parameter for at least one chroma component of an image or image portion, wherein the device implements a method as mentioned above.
According to another aspect of the invention, there is provided a device for encoding an image or image portion composed of a luma component and at least one corresponding chroma component, said components being divided into coding units, which forms part of an image sequence, wherein the device comprises: -means for determining the values of quantization parameters, which comprise a device for determining the value of a quantization parameter for at least one chroma component as mentioned above, and -means for encoding the successive coding units, the encoding means comprising means for quantizing and dequantizing the luma and chroma components of the image or image portion by using the quantization parameters.
In an embodiment, the encoding means use a reference image, said device further including: -a device as mentioned above, for determining deblocking chroma quantization parameters; and -a deblocking filter for filtering the reference image by using the deblocking chroma quantization parameters.
According to another aspect of the invention, there is provided a device for decoding an image or image portion, which forms part of an image sequence, the device comprising: -means for receiving encoded data related to the image or image portion to decode; -means for decoding the encoded data, which comprise means for determining the values of quantization parameters, including a device for determining the value of quantization parameters for at least one chroma component as mentioned above, and means for quantizing and dequantizing the luma and chroma components of the image or image portion by using the quantization parameters; -means for reconstructing the decoded image or image portion from the decoded data, the reconstructing means comprising means for dequantizing the luma and chroma components of the image or image portion by using the quantization parameters; and -means for generating the decoded data.
In an embodiment, the device further comprises: -a device for determining deblocking chroma quantization parameters as mentioned above; and -a deblocking filter for filtering the decoded data, by using the deblocking chroma quantization parameters.
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
-Figure 1 illustrates the coding structure used for an image in HEVC;
-Figure 2 is a simplified block diagram showing an example of an encoder;
-Figure 3 is a simplified block diagram showing an example of a decoder;
-Figure 4 shows two curves which represent the link between QP and QS according to the current HEVC specification HM8 and to a classical approach;
-Figures 5a and 5b show curves which represent the relation between QPC and QPY according to the value of an offset; and
-Figures 6a and 6b show curves which represent the relation between QPC and QPY according to the invention, compared to the current HEVC specification HM8 results.
The following embodiments can be implemented in a decoder as well as in an encoder, as described above.
More precisely, the following embodiments about the QPs can be implemented in the quantization and inverse quantization modules of both decoder and encoder.
The following embodiments about the QPDBFs can be implemented in the deblocking filters of both decoder and encoder.
In a first embodiment, a correspondence table is used. This table is different from the correspondence table used in the prior art. Indeed it includes an adaptive parameter QPstart to control the value of QPC, and more precisely the starting position of the QPI-QPC function slope change. This new correspondence table is specified as follows (for easier readability, QPstart is replaced in the table by QPS).
QPI:   < QPS | QPS | QPS+1 | QPS+2 | QPS+3 | QPS+4 | QPS+5 | QPS+6 | QPS+7 | QPS+8 | QPS+9 | QPS+10 | QPS+11 | QPS+12 | QPS+13 | > QPS+A
QPC:   QPI   | 29  | 30    | 31    | 32    | 33    | 33    | 34    | 34    | 35    | 35    | 36     | 36     | 37     | 37     | QPI - B

A and B are pre-fixed offsets. Preferably, A = 13 and B = 6.
The parameter QPstart is signaled in the bitstream, preferably per picture (that is, in the so-called PPS). It can also be signaled at a sequence level (SPS) or at the slice level (in the slice header).
Thus, the new correspondence table is dynamically modified based on the QPstart value.
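For illustration, the lookup of this first embodiment can be sketched as follows. It is assumed here that the middle entries shift with QPstart, consistent with the table being dynamically modified (for QPS = 30 they reproduce the values 29 to 37 listed above); A = 13 and B = 6 as preferred:

```python
# Offsets of the 14 middle table entries relative to QPS; with QPS = 30
# these give 29, 30, 31, 32, 33, 33, 34, 34, 35, 35, 36, 36, 37, 37.
DELTAS = [-1, 0, 1, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7]
A, B = 13, 6  # pre-fixed offsets

def qpc_from_table(qpi, qps):
    """Table lookup: identity below QPS, tabulated span, then QPI - B."""
    if qpi < qps:
        return qpi
    if qpi <= qps + A:
        return qps + DELTAS[qpi - qps]
    return qpi - B
```

With QPS signalled per picture or per slice, the same helper serves every position of the slope change.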
An advantage of this first embodiment is that the adaptive parameter, added to control the starting point of the change of slope of the relation QPY-QPC, enables more flexible control of the chroma than the current HEVC specification HM8.
In a second embodiment, the relation between QPI and QPC for an image or image portion is implemented as an equation. According to a preferred embodiment, the equation used is:

QPC = QPI - MAX(minQPdelta, MIN(maxQPdelta, (QPI - QPstart) >> QPshift))

where:
-QPC is the value of the quantization parameter of the chroma component for the considered image or image portion,
-QPI is the value of the intermediate quantization parameter for the same image or image portion,
-minQPdelta and maxQPdelta are additional adaptive parameters which control the amplitude of the curves linking the variation of the values of the luma and chroma quantization parameters,
-QPstart is an additional adaptive parameter that controls the position where a predefined slope's inclination of the curves linking the variation of the values of the luma and chroma quantization parameters changes,
-QPshift is an additional adaptive parameter that controls the inclination angle of the slope change of the curves linking the variation of the values of the luma and chroma quantization parameters, and
-">>" is the mathematical operator corresponding to the right shift of the binary representation of its left operand by the number of bits given by its right operand.

According to a preferred solution, the value of QPstart is comprised between -12 and +51.
According to a preferred solution, the value of QPshift is comprised between 0 and +2.
These ranges keep the ratio QPSY/QPSC in a zone where its value can be easily adjusted, rather than in a zone where the value of this ratio is saturated.
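A direct transcription of the parametric equation, given purely as an illustrative sketch (Python's >> is an arithmetic right shift, matching the operator used here):

```python
def qpc_generic(qpi, qp_start, qp_shift=1, min_qp_delta=0, max_qp_delta=6):
    """QPC = QPI - MAX(minQPdelta, MIN(maxQPdelta, (QPI - QPstart) >> QPshift))."""
    return qpi - max(min_qp_delta,
                     min(max_qp_delta, (qpi - qp_start) >> qp_shift))
```

For QPI below QPstart the clamp holds the subtracted delta at minQPdelta, so the mapping is the identity when minQPdelta = 0; beyond QPstart, QPC grows more slowly than QPI until the delta saturates at maxQPdelta.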
For example, these parameters are signalled in the stream, preferably per picture (that is, in the so-called PPS). They can also be signalled at a sequence level (SPS) or at the slice level (in the slice header).
According to a preferred embodiment, the syntax for the new control parameters in pic_parameter_set_rbsp( ) at the picture level is represented below:

    pic_parameter_set_rbsp( ) {                               Descriptor
        ...
        pic_cb_qp_offset                                      se(v)
        pic_cr_qp_offset                                      se(v)
        pic_cbcr_qp_start                                     se(v)
        pic_cbcr_qp_shift                                     se(v)
        pic_slice_level_chroma_qp_offsets_present_flag        u(1)
        ...
    }

where pic_cb_qp_offset and pic_cr_qp_offset specify offsets to the luma quantization parameter QPY used for deriving the two chroma quantization parameters, QPCb and QPCr, respectively. The values of pic_cb_qp_offset and pic_cr_qp_offset shall be in the range of -12 to +12, inclusive.
pic_cbcr_qp_start specifies one parameter to control the relation between the chroma quantization parameters QPCb and QPCr and the luma quantization parameter QPY. The value of pic_cbcr_qp_start shall be in the range of -12 to +51, inclusive.
pic_cbcr_qp_shift specifies one parameter to control the relation between the chroma quantization parameters QPCb and QPCr and the luma quantization parameter QPY. The value of pic_cbcr_qp_shift shall be in the range of 0 to 2, inclusive.
pic_slice_level_chroma_qp_offsets_present_flag equal to 1 indicates that the slice_cb_qp_offset, slice_cr_qp_offset, slice_cbcr_qp_start and slice_cbcr_qp_shift syntax elements are present in the associated slice headers. pic_slice_level_chroma_qp_offsets_present_flag equal to 0 indicates that these syntax elements are not present in the associated slice headers.
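The se(v) and u(1) descriptors in the syntax above denote signed Exp-Golomb and fixed-length coding. As background illustration only, a minimal reader for these descriptors can be sketched as follows (operating on a plain list of bits, not on any real bitstream API):

```python
class BitReader:
    """Reads bits MSB-first from a sequence of 0/1 values."""
    def __init__(self, bits):
        self.bits, self.pos = list(bits), 0

    def u(self, n):
        """u(n): unsigned value coded on n bits."""
        v = 0
        for _ in range(n):
            v = (v << 1) | self.bits[self.pos]
            self.pos += 1
        return v

    def ue(self):
        """ue(v): unsigned Exp-Golomb code."""
        zeros = 0
        while self.bits[self.pos] == 0:
            zeros += 1
            self.pos += 1
        self.pos += 1  # consume the terminating 1 bit
        return (1 << zeros) - 1 + self.u(zeros)

    def se(self):
        """se(v): signed Exp-Golomb code, mapped from ue(v)."""
        k = self.ue()
        return (k + 1) // 2 if k % 2 else -(k // 2)
```

For example, the bit strings "1", "010" and "011" decode as se(v) values 0, +1 and -1, respectively.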
According to a preferred embodiment, the syntax for the new control parameters in slice_header( ) at the slice level is represented below:

    slice_header( ) {                                         Descriptor
        ...
        slice_qp_delta                                        se(v)
        if( pic_slice_level_chroma_qp_offsets_present_flag ) {
            slice_cb_qp_offset                                se(v)
            slice_cr_qp_offset                                se(v)
            slice_cbcr_qp_start                               se(v)
            slice_cbcr_qp_shift                               se(v)
        }
        ...
    }

where slice_cb_qp_offset specifies a difference to be added to the value of pic_cb_qp_offset when determining the value of the QPCb quantization parameter. The value of slice_cb_qp_offset shall be in the range of -12 to +12, inclusive. When slice_cb_qp_offset is not present, it is inferred to be equal to 0.
The value of pic_cb_qp_offset + slice_cb_qp_offset shall be in the range of -12 to +12, inclusive.
slice_cr_qp_offset specifies a difference to be added to the value of pic_cr_qp_offset when determining the value of the QPCr quantization parameter. The value of slice_cr_qp_offset shall be in the range of -12 to +12, inclusive. When slice_cr_qp_offset is not present, it is inferred to be equal to 0.
The value of pic_cr_qp_offset + slice_cr_qp_offset shall be in the range of -12 to +12, inclusive.
slice_cbcr_qp_start specifies one parameter to control the relation between the chroma quantization parameters QPCb and QPCr and the luma quantization parameter QPY. The value of slice_cbcr_qp_start shall be in the range of -12 to +51, inclusive. When not present, slice_cbcr_qp_start is set equal to pic_cbcr_qp_start.
slice_cbcr_qp_shift specifies one parameter to control the relation between the chroma quantization parameters QPCb and QPCr and the luma quantization parameter QPY. The value of slice_cbcr_qp_shift shall be in the range of 0 to 2, inclusive. When not present, slice_cbcr_qp_shift is set equal to pic_cbcr_qp_shift.
deblocking_filter_override_flag equal to 0 specifies that deblocking parameters from the active picture parameter set are used for deblocking the current slice. deblocking_filter_override_flag equal to 1 specifies that deblocking parameters from the slice header are used for deblocking the current slice.
When not present, the value of deblocking_filter_override_flag is inferred to be equal to 0.
According to another embodiment, the equation may be replaced by a table whose values correspond to the values generated by the equation. There will then be a correspondence table with four adaptive parameters, or fewer if it is decided to fix one or more parameter values.
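Such a table can be generated directly from the equation. A sketch, assuming for illustration the fixed parameter values given elsewhere in this document (QPstart = 30, QPshift = 1, minQPdelta = 0, maxQPdelta = 6):

```python
def build_qpc_table(qp_start=30, qp_shift=1, min_d=0, max_d=6, qpi_max=57):
    """Tabulate QPC for every QPI in [0, qpi_max] using the generic equation."""
    return [qpi - max(min_d, min(max_d, (qpi - qp_start) >> qp_shift))
            for qpi in range(qpi_max + 1)]

table = build_qpc_table()
# Identity below QPstart, half-rate growth up to QPI = 43, then QPI - 6.
```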
Figures 6a and 6b illustrate the influence of QPstart (for the illustration, the other parameters are set to: minQPdelta=0, maxQPdelta=6, QPshift=1).
Five curves are depicted on each figure.
Three curves correspond to the HM design with different QPOffsetC values (-6, 0, 6). Two curves correspond to the invention, for two values of QPstart (24 and 34). It can be observed that QPstart provides a completely different control of the curve than QPOffsetC.
QPOffsetC has an impact on the vertical positioning of the curves (that is, along the QPC or QPSY/QPSC axis), while QPstart has an impact on the horizontal positioning of the curves (that is, along the QPY axis). This parameter therefore gives a different degree of control of the link between QPY and QPC.
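The horizontal effect of QPstart can be checked numerically: translating QPstart by d moves the onset of the QP reduction d units along the QPI axis without changing its shape. A sketch, using the same fixed minQPdelta, maxQPdelta and QPshift values as in the figures:

```python
def qp_delta(qpi, qp_start, qp_shift=1):
    """Amount subtracted from QPI: MAX(0, MIN(6, (QPI - QPstart) >> QPshift))."""
    return max(0, min(6, (qpi - qp_start) >> qp_shift))

# Changing QPstart from 24 to 34 shifts the reduction curve 10 QP units
# to the right (horizontal positioning), leaving its shape unchanged.
assert all(qp_delta(q + 10, 34) == qp_delta(q, 24) for q in range(0, 48))
```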
Possibly, one or several of those parameters can have a pre-set value. In a preferred embodiment, minQPdelta is set to 0 and maxQPdelta is set to 6. The corresponding equation is then:

QPC = QPI - MAX(0, MIN(6, (QPI - QPstart) >> QPshift)).
In another embodiment, QPshift is in addition set to 1, and only QPstart may vary. The corresponding equation is then:

QPC = QPI - MAX(0, MIN(6, (QPI - QPstart) >> 1)).
The replacement of the fixed correspondence table by a generic equation driven by one or several parameters gives a greater degree of control of the link QPY-QPC. It improves the chroma quality, especially when various QPs are applied within a same slice or picture.
In another embodiment, it is proposed to replace the correspondence table between QPI and QPC by a unique equation, with fixed values for the four parameters. According to a preferred embodiment, these values are: -QPstart = 30 -QPshift = 1 -minQPdelta = 0 -maxQPdelta = 6.
Finally, the implemented equation is: QPC = QPI - MAX(0, MIN(6, (QPI - 30) >> 1)).
This unique equation allows determining all the values of QPC. The advantage of this embodiment is that the table is simply replaced by a simple equation, which simplifies the current HEVC design for several implementations.
According to another embodiment, the process for deblocking the chroma component by a deblocking filter may also be modified. It is recalled that the current HEVC specification HM8 is also based on a correspondence table. Here again this table is replaced by a unique generic equation. According to the invention, the quantization parameter used to process the deblocking filter (noted QPDBF) is now deduced as follows.
An intermediate quantization parameter value QPI' is first derived as follows:

QPI' = MAX(-QPBdOffsetC, MIN(57, QPYpred))

where QPYpred is an average QP value deduced from the neighboring blocks of the current block being processed, and QPBdOffsetC is the same offset mentioned above.
Then, according to the invention, the relation between QPDBF and the intermediate quantization parameter value QPI' is implemented according to the unique equation below, said equation comprising adaptive parameters:

QPDBF = QPI' - MAX(minQPdelta, MIN(maxQPdelta, (QPI' - QPstart) >> QPshift)).
As for the equation concerning the determination of QPC, mentioned above, one or more of the parameters minQPdelta, maxQPdelta, QPstart and QPshift can be fixed.
The adaptive parameters used for determining QPDBF give more flexibility. Moreover, the implementation of the relation between QPDBF and QPI' as a unique equation is simple. In addition, the QP used for the deblocking of chroma is the same as the QP used for the quantization of chroma, which was not guaranteed with the current HEVC specification HM8, since the chroma QP for deblocking does not take into account the parameter QPOffsetC.
Therefore the chroma deblocking is improved.
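An end-to-end sketch of this deblocking derivation, under illustrative assumptions (QPYpred is taken here as the rounded average of the luma QPs of the two blocks adjacent to the filtered edge, and QPBdOffsetC = 0 for 8-bit content):

```python
def qp_dbf(qp_left, qp_right, qp_start=30, qp_shift=1,
           min_d=0, max_d=6, qp_bd_offset_c=0):
    """Chroma deblocking QP derived with the same generic equation as QPC."""
    qpy_pred = (qp_left + qp_right + 1) >> 1          # average neighboring luma QP
    qpi = max(-qp_bd_offset_c, min(57, qpy_pred))     # QPI'
    return qpi - max(min_d, min(max_d, (qpi - qp_start) >> qp_shift))
```

Because the same equation (and, where signalled, the same parameters) is used as for quantization, the chroma deblocking QP matches the chroma quantization QP, which is the consistency property mentioned above.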
Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications will be apparent to a skilled person in the art which lie within the scope of the present invention.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims. In particular the different features from different embodiments may be interchanged, where appropriate.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.

Claims (28)

  1. 1. A method for determining the value of a quantization parameter for at least one chroma component of an image or image portion, based on the value of an intermediate quantization parameter which is further based on the value of the quantization parameter of the corresponding luma component of the image or image portion, wherein the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the chroma component is based on at least one adaptive parameter.
  2. 2. A method according to claim 1, wherein the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the chroma component is implemented as a table which associates to each possible value of the quantization parameter of the chroma component: -the value of the intermediate quantization parameter, if the value of the intermediate quantization parameter is below the value of the additional adaptive parameter; -a value belonging to a set of predetermined values if the value of the intermediate quantization parameter is equal to the value of the additional adaptive parameter plus a predetermined offset whose range is comprised between a minimal and a maximal value; or -the value of the intermediate quantization parameter minus a secondary predetermined offset if the value of the intermediate quantization parameter is above the value of the additional adaptive parameter plus the maximal value of the range of the predetermined offset.
  3. 3. A method according to claim 2, wherein the minimal value of the predetermined offset range is equal to 0, the maximal value of the predetermined offset range is equal to 13, and the value of the secondary predetermined offset is equal to 6.
  4. 4. A method according to claim 1, wherein the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the chroma component is implemented as a unique equation.
  5. 5. A method according to claim 4, wherein the equation implementing the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the chroma component is: QPC = QPI - MAX(minQPdelta, MIN(maxQPdelta, (QPI - QPstart) >> QPshift)) where -QPC is the value of the quantization parameter of the chroma component for the considered image or image portion, -QPI is the value of the intermediate quantization parameter for the same image or image portion, -minQPdelta and maxQPdelta are additional adaptive parameters which control the amplitude of the curves linking the variation of the values of the luma and chroma quantization parameters, -QPstart is an additional adaptive parameter that controls the position where a predefined slope's inclination of the curves linking the variation of the values of the luma and chroma quantization parameters changes, -QPshift is an additional adaptive parameter that controls the inclination angle of the slope change of the curves linking the variation of the values of the luma and chroma quantization parameters, and -">>" is the mathematical operator corresponding to the right shift of the binary representation of its left operand by the number of bits given by its right operand.
  6. 6. A method according to claim 5, wherein the value of QPstart is comprised between -12 and +50.
  7. 7. A method according to any one of claims 5 to 6, wherein the value of QPshift is comprised between 0 and 2.
  8. 8. A method according to any one of claims 5 to 7, wherein the value of minQPdelta is equal to 0.
  9. 9. A method according to any one of claims 5 to 8, wherein the value of maxQPdelta is equal to 6.
  10. 10. A method according to any one of claims 5 to 9, wherein the value of QPshift is equal to 1.
  11. 11. A method according to any one of claims 5 to 10, wherein the value of QPstart is equal to 30.
  12. 12. A method for determining the value of a quantization parameter for at least one chroma component of an image or image portion, based on the value of an intermediate quantization parameter which is further based on the value of the quantization parameter of the corresponding luma component of the image or image portion, wherein the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the chroma component is implemented as a unique equation.
  13. 13. A method according to claim 12, wherein the equation is defined by: QPC = QPI - MAX(0, MIN(6, (QPI - 30) >> 1)), where -QPC is the value of the quantization parameter of the chroma component for an image or image portion, -QPI is the value of the intermediate quantization parameter for the same image or image portion, and -">>" is the mathematical operator corresponding to the right shift of the binary representation of its left operand by the number of bits given by its right operand.
  14. 14. A method according to any one of claims 1 to 13, wherein the relation between the value of the intermediate quantization parameter and the value of the quantization parameter of the luma component is implemented according to the following equation: QPI = MAX(-QPBdOffsetC, MIN(57, QPY + QPOffsetC)), wherein -QPY is the value of the quantization parameter of the luma component for the considered image or image portion, -QPI is the value of the intermediate quantization parameter for the same image or image portion, -QPBdOffsetC is a pre-defined offset depending on the bit-depth used to represent the chroma component, and -QPOffsetC is an offset signaled in a bitstream embedding the compressed data of the considered image or image portion, which enables partial control of the link between QPC and QPY.
  15. 15. A method according to any one of claims 1 to 13, wherein the chroma quantization parameter is also used as a deblocking quantization parameter for a deblocking filter.
  16. 16. A method according to claim 15, wherein an image is divided into blocks, and wherein the value of the intermediate quantization parameter is implemented according to the following equation: QPI' = MAX(-QPBdOffsetC, MIN(57, QPYpred)) where QPYpred is an average deblocking quantization parameter value deduced from the neighboring blocks of the current block being processed; and QPBdOffsetC is a pre-defined offset depending on the bit-depth used to represent the deblocking chroma component.
  17. 17. A method for encoding an image or image portion composed by a luma component and at least one corresponding chroma component, said components being divided into coding units, which forms part of an image sequence, wherein the method comprises: -determining the values of quantization parameters, which comprises determining the value of a quantization parameter for at least one chroma component according to any one of claims 1 to 14, -encoding the successive coding units, the encoding comprising quantizing and dequantizing the luma and chroma components of the image or image portion by using the quantization parameters; and -generating a bitstream of encoded data.
  18. 18. A method according to claim 17, wherein the encoding is based on a reference image, said reference image being filtered by a deblocking filter using deblocking chroma quantization parameters, determined according to any one of claims 15 to 16.
  19. 19. A method for decoding an image or image portion, which forms part of an image sequence, the method comprising: -receiving encoded data related to the image or image portion to decode; -decoding the encoded data, the decoding comprising determining the values of quantization parameters, which comprises determining the value of quantization parameters for at least one chroma component according to any one of claims 1 to 14, and -reconstructing the decoded image or image portion from the decoded data, the reconstructing comprising dequantizing the luma and chroma components of the image or image portion by using the quantization parameters.
  20. 20. A method according to claim 19, further comprising filtering the image or image portion with a deblocking filter, said deblocking filter using deblocking quantization parameters, determined according to any one of claims 15 to 16.
  21. 21. A computer program product for a programmable apparatus, the computer program product comprising a sequence of instructions for implementing a method according to any one of claims 1 to 20 when loaded into and executed by the programmable apparatus.
  22. 22. A computer-readable storage medium storing instructions of a computer program for implementing a method, according to any one of claims 1 to 20.
  23. 23. A device for determining the value of a quantization parameter for at least one chroma component of an image or image portion, wherein the device implements a method according to any one of claims 1 to 14.
  24. 24. A device according to claim 23, further comprising means for determining the value of a deblocking quantization parameter for at least one chroma component of an image or image portion, wherein the device implements a method according to any one of claims 15 to 16.
  25. 25. A device for encoding an image or image portion composed by a luma component and at least one corresponding chroma component, said components being divided into coding units, which forms part of an image sequence, wherein the device comprises: -means for determining the values of quantization parameters, which comprises a device for determining the value of a quantization parameter for at least one chroma component according to claim 23, -means for encoding the successive coding units, the encoding means comprising means for quantizing and dequantizing the luma and chroma components of the image or image portion by using the quantization parameters; and -means for generating a bitstream of encoded data.
  26. 26. A device according to claim 25, wherein the encoding means use a reference image, said device further including: -a device according to claim 24, for determining deblocking chroma quantization parameters; and -a deblocking filter for filtering the reference image by using the deblocking chroma quantization parameters.
  27. 27. A device for decoding an image or image portion, which forms part of an image sequence, the device comprising: -means for receiving encoded data related to the image or image portion to decode; -means for decoding the encoded data, which comprise means for determining the values of quantization parameters, which comprises a device for determining the value of quantization parameters for at least one chroma component according to claim 23, -means for decoding the encoded data, the decoding comprising means for quantizing and dequantizing the luma and chroma components of the image or image portion by using the quantization parameters; and -means for reconstructing the decoded image or image portion from the decoded data, the reconstructing means comprising means for dequantizing the luma and chroma components of the image or image portion by using the quantization parameters.
  28. 28. A decoding device according to claim 27, further comprising: -a device for determining deblocking chroma quantization parameters according to claim 24; and -a deblocking filter for filtering the decoded data, by using the deblocking chroma quantization parameters.
GB1217444.7A 2012-09-28 2012-09-28 Method and device for determining the value of a quantization parameter Active GB2506852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1217444.7A GB2506852B (en) 2012-09-28 2012-09-28 Method and device for determining the value of a quantization parameter

Publications (3)

Publication Number Publication Date
GB201217444D0 GB201217444D0 (en) 2012-11-14
GB2506852A true GB2506852A (en) 2014-04-16
GB2506852B GB2506852B (en) 2015-09-30

Family

ID=47225416

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1217444.7A Active GB2506852B (en) 2012-09-28 2012-09-28 Method and device for determining the value of a quantization parameter

Country Status (1)

Country Link
GB (1) GB2506852B (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20220058534A (en) 2019-09-14 2022-05-09 바이트댄스 아이엔씨 Quantization parameters for chroma deblocking filtering
CN114651442A (en) 2019-10-09 2022-06-21 字节跳动有限公司 Cross-component adaptive loop filtering in video coding and decoding
EP4029264A4 (en) * 2019-10-14 2022-11-23 ByteDance Inc. Joint coding of chroma residual and filtering in video processing
WO2021118977A1 (en) 2019-12-09 2021-06-17 Bytedance Inc. Using quantization groups in video coding
WO2021138293A1 (en) 2019-12-31 2021-07-08 Bytedance Inc. Adaptive color transform in video coding

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030147463A1 (en) * 2001-11-30 2003-08-07 Sony Corporation Method and apparatus for coding image information, method and apparatus for decoding image information, method and apparatus for coding and decoding image information, and system of coding and transmitting image information


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020182116A1 (en) * 2019-03-10 2020-09-17 Huawei Technologies Co., Ltd. An encoder, a decoder and corresponding methods using an adaptive loop filter
CN113785577A (en) * 2019-04-26 2021-12-10 华为技术有限公司 Method and apparatus for indicating chroma quantization parameter mapping function
CN113785577B (en) * 2019-04-26 2023-06-27 华为技术有限公司 Method and apparatus for indicating chroma quantization parameter mapping functions
EP4113998A1 (en) * 2019-05-28 2023-01-04 Dolby Laboratories Licensing Corporation Quantization parameter signaling


Similar Documents

Publication Publication Date Title
US11743487B2 (en) Method, device, and computer program for optimizing transmission of motion vector related information when transmitting a video stream from an encoder to a decoder
US10687056B2 (en) Deriving reference mode values and encoding and decoding information representing prediction modes
JP6701270B2 (en) Encoding device, decoding method, encoding method, decoding method, and program
EP2868080B1 (en) Method and device for encoding or decoding an image
GB2506852A (en) Determining the Value of a Chroma Quantisation Parameter
JP2022547599A (en) Method and apparatus for signaling video coding information
WO2019007492A1 (en) Decoder side intra mode derivation tool line memory harmonization with deblocking filter
JP7229682B2 (en) LOOP FILTER CONTROL DEVICE, IMAGE CODING DEVICE, IMAGE DECODING DEVICE, AND PROGRAM