GB2495942A - Prediction of Image Components Using a Prediction Model - Google Patents

Prediction of Image Components Using a Prediction Model

Info

Publication number
GB2495942A
GB2495942A GB1118445.4A GB201118445A
Authority
GB
United Kingdom
Prior art keywords
samples
type
component
parameter
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1118445.4A
Other versions
GB201118445D0 (en)
GB2495942B (en)
Inventor
Edouard Francois
Christophe Gisquet
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB1118445.4A priority Critical patent/GB2495942B/en
Publication of GB201118445D0 publication Critical patent/GB201118445D0/en
Publication of GB2495942A publication Critical patent/GB2495942A/en
Application granted granted Critical
Publication of GB2495942B publication Critical patent/GB2495942B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/154Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • H04N19/463Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Components (e.g. luma, chroma) of an image are processed, the image being composed of at least a first and a second, different, type of component. Samples of the second type are predictable (406) from samples of the first type (401, 402, 403) using a prediction model linking the first and second types. The link is represented in the model by at least one model parameter. The value of an intermediate model parameter (404) used to obtain the prediction model parameter, or the value of the model parameter itself, is adjusted (e.g. according to a correction parameter), and samples of the second type are predicted from filtered or non-filtered samples of the first type using the model and model parameter(s) (405) based on the adjusted intermediate model parameter value or the adjusted model parameter. Also disclosed is filtering the samples of the first type using a filter (i) having integer only coefficients and/or (ii) avoiding using left shift operations and/or (iii) avoiding using right shift operations; samples of the second type are then predicted from the filtered samples of the first type using the prediction model.

Description

METHOD AND APPARATUS FOR PROCESSING COMPONENTS OF AN
IMAGE
The present invention concerns a method and device for processing components in an image. The invention further relates to a method and a device for encoding an image, and to a method and device for decoding an image. At least one embodiment of the invention is applicable in the field of intra coding of chroma samples of video data.
Video data is typically composed of a series of still images or frames which are shown rapidly in succession as a video sequence to give the visual impression of a moving image. A large quantity of video content is distributed in digital form via broadcast channels, digital networks and packaged media, with a continuous evolution towards improved quality and resolution (e.g. greater number of pixels per frame, higher frame rate, higher bit-depth or extended colour gamut). This evolution in technology puts increased pressure on distribution networks that already face difficulties in providing HDTV (high definition television) resolution and data rates economically to the end user.
Consequently, further increases in data rate will put additional pressure on such distribution networks. To address this challenge, ITU-T (International Telecommunications Union, Telecommunications Standardization Sector) and ISO/MPEG decided to launch a new video coding standard project in January 2010, known as High Efficiency Video Coding (HEVC). It will be appreciated that in what follows the term "HEVC" is used to represent the current implementation of HEVC, which is in the course of standardization and will be subject to evolution.
The HEVC codec design is similar to that of most previous so-called block-based hybrid transform codecs such as H.263, H.264, MPEG-1, MPEG-2, MPEG-4 and SVC. Video compression algorithms, such as those standardized by the standardization bodies ITU, ISO and SMPTE, use spatial and temporal redundancies of images in order to generate data bit streams of reduced size.
Spatial redundancy represents the mutual correlation between adjacent image pixels, while temporal redundancy represents the correlation between sequential images. Such compression processes make the transmission and/or storage of video sequences more effective.
During video compression in HEVC, each block of an image being processed is predicted spatially by an "Intra" predictor (so-called "Intra" coding mode), or temporally by an "Inter" predictor (so-called "Inter" coding mode).
Each predictor is a block of pixels obtained from the same image or another image, from which a difference block (or "residual") is derived. In the Intra coding mode the predictor (Intra predictor) used for the current block (prediction block) is a block of pixels constructed from the information already encoded of the current image.
An image of the video data to be transmitted may be provided as a set of two-dimensional arrays (also known as colour channels) of sample values, each entry of which represents the intensity of a colour component such as a measure of luma brightness and chroma colour deviations from neutral grayscale colour toward blue or red (YUV) or as a measure of red, green, or blue light component intensity (RGB).
A YUV model generally defines a colour space in terms of one luma (Y) and two chrominance (UV) components. Generally Y stands for the luma component (the brightness) and U and V are the chrominance (colour) or chroma components.
Among the various technologies in use in the HEVC codec, one is luma-based chroma prediction, which uses a linear model in a block to link luma components to chroma components. The parameters of the model may be determined on the fly, using the borders of the block and an ordinary least squares (OLS) method.
The following model is given as an example for generating an estimation of, i.e. predicting, the variable Y:

Y = Σ_{i=1..N} a_i · X_i (1)

where:
- the X_i values are inputs to the prediction model, for instance from other signals or from previous values of the Y signal;
- the a_i factors, for i ranging from 1 to N, are parameters ("weights") applied to each input;
- in the case, for example, where X_0 = 1 for every sample n, a_0 is a constant term.

This equation means that each sample Y linearly depends on the N input samples X_i.
In order to determine these parameters, the quadratic error of the model for the Y signal over the samples, indexed by n, is expressed as:

e = Σ_n ( Y_n − Σ_i a_i·X_{i,n} )² (2)

The least squares method determines the a_i parameters minimizing this error value. At the minimum, all derivatives with respect to said parameters are null:

∀i, ∂e/∂a_i = 0 (3)

This can be rewritten as the following, clearly showing that we get a classical system of N linear equations with N unknowns to solve:

∀i, Σ_n X_{i,n} · ( Y_n − Σ_j a_j·X_{j,n} ) = 0 (4)

This can be expressed as a matrix system which is symmetric, and on which known matrix solving algorithms such as the Gauss pivot can be used.
The a_i parameters may therefore be determined.
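The normal equations (4) form a symmetric N×N linear system. The following is a minimal sketch of this fitting step, solved with Gaussian elimination with partial pivoting in the spirit of the Gauss pivot mentioned above; the function name and sample data are illustrative, not part of the standard:

```python
# Sketch: fit weights a_i of Y = sum_i a_i * X_i by ordinary least squares,
# solving the symmetric normal equations with Gaussian elimination.

def ols_fit(X, Y):
    """X: list of samples, each a list of N inputs; Y: list of targets."""
    n = len(X[0])
    # Normal equations: M a = v, with M[i][j] = sum_n X[n][i]*X[n][j]
    # and v[i] = sum_n X[n][i]*Y[n] -- a symmetric N x N system.
    M = [[sum(x[i] * x[j] for x in X) for j in range(n)] for i in range(n)]
    v = [sum(x[i] * y for x, y in zip(X, Y)) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    # Back substitution.
    a = [0.0] * n
    for i in reversed(range(n)):
        a[i] = (v[i] - sum(M[i][j] * a[j] for j in range(i + 1, n))) / M[i][i]
    return a

# With X_0 = 1 for every sample, a[0] plays the role of the constant term.
samples = [[1.0, 2.0], [1.0, 3.0], [1.0, 5.0], [1.0, 7.0]]
targets = [2.0 * x[1] + 1.0 for x in samples]   # exact affine relation
print(ols_fit(samples, targets))                 # close to [1.0, 2.0]
```

Because the data obey the affine relation exactly, the recovered weights match the constant term and slope up to floating-point precision.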
When coding a chroma block as intra (i.e. the chroma block depends on previously coded/decoded data from the current image), particular prediction modes can be applied, e.g. a directional bilinear prediction or a prediction by an average value. In all cases, the outer borders of the chroma block which may be referred to as a Coding Unit (hereafter CU), located inside of previously decoded CUs, are used. Those borders therefore contain decoded pixel data, to which particular filtering might have been applied.
In the current HEVC design, luma samples from the outer left and top borders of the CU are filtered prior to being used by the ordinary least squares algorithm. A different filter is used for the outer left column and for the outer top line. In addition, the luma signal inside the block to be predicted is filtered and subsampled to obtain the same sample ratio as the chroma signal (since HEVC works on the 4:2:0 format, the chroma resolution is half the luma resolution in both the horizontal and vertical dimensions).
The prediction mode presented in this example relies on an affine model, corresponding to what is presented above:

C_pred[x,y] = α·L'[x,y] + β (5)

Here, the values of the prediction block made of samples C_pred[x,y] at pixel positions [x,y] are linearly derived from the virtual luma values L'[x,y], themselves derived from the collocated decoded luma values.
The model parameters α and β are estimated using a least squares method as presented above.
With reference to Figure 1, the model parameters are determined using encoded/decoded chroma samples on the top border 102 and left border 101 and collocated encoded/decoded luma samples from the top border 105 and left border 104, each border lying on the outside of the coding block to be predicted. Rec_C represents the outer reconstructed chroma samples, resulting from the processing (coding or decoding then reconstruction) of the previous blocks. Rec_L represents the inner and outer reconstructed luma samples, resulting from the processing of the previous blocks. Indeed, when the chroma prediction process is being applied, the luma samples Rec_L of the block to be predicted have already been reconstructed. The outer Rec_L and Rec_C samples are used to determine the model parameters. Once these parameters are determined, the chroma samples inside the block to be predicted are derived using the inner Rec_L samples and the determined model parameters.
Two sets of virtual luma samples L' are considered. Set 1 corresponds to the outer samples; it is made up of two subsets, one made up of the top line samples 105 of the luma block, and another made up of the left column samples 104 of the luma block. Set 2 corresponds to the inner samples (that is, those inside the luma block 106). Parameters α and β are determined from set 1. Set 2 is used to predict the chroma samples using the estimated parameters α and β.
The signals used as inputs are as follows. For the computation of the prediction model parameters α and β, the virtual samples L' of set 1 are generated as follows:
- The L' virtual luma values on the left border 104 are obtained by filtering the reconstructed Rec_L values, located on the same left border column as the L' virtual luma samples 104, with the vertical convolution filter 107, resulting in the output:

L'[x,y] = ( Rec_L[−1,2y] + Rec_L[−1,2y+1] ) >> 1 (6)

where the operation denoted by '>> 1' is an arithmetic right shift, equivalent to a division by 2. Similarly, the operation denoted by '>> R' is equivalent to a division by 2^R. The operation denoted by '<< R' is an arithmetic left shift, equivalent to a multiplication by 2^R.
x, y relate to positions in the luma array.
- The L' virtual luma values on the top border 105 are obtained by filtering the reconstructed Rec_L values, located on the same top border line as the L' virtual luma samples 105, with the horizontal convolution filter 108, resulting in the output:

L'[x,y] = ( Rec_L[2x−1,−1] + 2·Rec_L[2x,−1] + Rec_L[2x+1,−1] ) >> 2 (7)

- For the L' virtual luma values of set 2 106, within the luma block used in the prediction, the filter 107 described for the left border is reused, but on a different column, to generate the inner L' virtual luma values:

L'[x,y] = ( Rec_L[2x,2y] + Rec_L[2x,2y+1] ) >> 1 (8)

In what follows, for convenience of notation, a sample Y[x,y] at pixel position [x,y] is indexed as sample Y_i. The model parameters α and β are found using an ordinary least squares (OLS) algorithm as described below. An OLS technique involves searching for the parameters of the model that minimize the quadratic modelling error over the neighbouring samples:

e = Σ_{i=1..N} ( C_i − α·L'_i − β )² (9)

where N corresponds to the number of neighbouring samples. With reference to the example of Figure 1, N is equal to the sum of the chroma block width W and height H (N = W + H).
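The three filtering operations (6), (7) and (8) can be sketched as follows. This is an illustration only, not the HEVC reference implementation: Rec_L is assumed here to be a dictionary keyed by (x, y) positions, with negative coordinates addressing the outer left column and top line.

```python
# Sketch of the virtual-luma filters (6)-(8). rec_l is assumed to be a
# dict mapping (x, y) luma positions to reconstructed sample values;
# negative coordinates address the outer left column / top line.

def left_border_virtual_luma(rec_l, y):
    # Eq. (6): 2-tap vertical filter; '>> 1' divides by 2.
    return (rec_l[(-1, 2 * y)] + rec_l[(-1, 2 * y + 1)]) >> 1

def top_border_virtual_luma(rec_l, x):
    # Eq. (7): 3-tap horizontal filter [1 2 1]; '>> 2' divides by 4.
    return (rec_l[(2 * x - 1, -1)] + 2 * rec_l[(2 * x, -1)]
            + rec_l[(2 * x + 1, -1)]) >> 2

def inner_virtual_luma(rec_l, x, y):
    # Eq. (8): same 2-tap vertical filter as (6), applied inside the block.
    return (rec_l[(2 * x, 2 * y)] + rec_l[(2 * x, 2 * y + 1)]) >> 1

# Tiny hand-made example covering the block and its outer borders.
rec_l = {(x, y): 10 * (x + 2) + (y + 2)
         for x in range(-1, 4) for y in range(-1, 4)}
print(left_border_virtual_luma(rec_l, 0),
      top_border_virtual_luma(rec_l, 0),
      inner_virtual_luma(rec_l, 0, 0))
```

Note how the 2x factors in the indices implement the 4:2:0 subsampling: each virtual luma sample aggregates two (or three) full-resolution luma samples.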
Minimizing this error for parameters α and β leads to the following formulas:

α = ( N·Σ C_i·L'_i − Σ C_i · Σ L'_i ) / ( N·Σ L'_i² − (Σ L'_i)² ) = A1 / A2 (10)

β = ( Σ C_i − α·Σ L'_i ) / N (11)

Another detail is that when using arithmetic on 32-bit signed architectures, some computations may on occasion overflow and thus cause unspecified behaviour (which is undesirable in any cross-platform standard). To this end, the maximum magnitude possible given the input L' and C values is computed, and N and the sums are scaled accordingly to guarantee that no overflow occurs.
Finally, instead of divisions, the computations are implemented using less complex methods based on look-up tables and shift operations. The actual derivation of the chroma prediction samples is performed as follows:

C_pred[x,y] = ( A·L'[x,y] >> S ) + β (12)

with S being an integer and A being derived from A1 and A2 using the look-up table mentioned previously. It actually corresponds to a rounded, rescaled value of α.
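Putting (10), (11) and (12) together gives the following hedged sketch. The real design derives A from A1 and A2 via a look-up table; here, purely for illustration, A is taken as round(α·2^S), and the function names and the choice S = 13 are assumptions:

```python
def lm_parameters(c, l):
    """Eqs (10)-(11): alpha and beta from N border chroma samples c
    and collocated virtual luma samples l (plain floats for clarity)."""
    n = len(c)
    a1 = n * sum(ci * li for ci, li in zip(c, l)) - sum(c) * sum(l)
    a2 = n * sum(li * li for li in l) - sum(l) ** 2
    alpha = a1 / a2
    beta = (sum(c) - alpha * sum(l)) / n
    return alpha, beta

def predict_chroma(alpha, beta, l_inner, s=13):
    """Eq. (12), with A = round(alpha * 2**s) standing in for the
    table-derived value of the actual design (an assumption here)."""
    a = round(alpha * (1 << s))
    return [((a * li) >> s) + beta for li in l_inner]

# Border samples obeying C = 0.5 * L' + 4 exactly:
l_border = [8, 16, 24, 32]
c_border = [8, 12, 16, 20]
alpha, beta = lm_parameters(c_border, l_border)
print(alpha, beta)                         # close to 0.5 and 4.0
print(predict_chroma(alpha, beta, [10, 40]))
```

The integer multiply-and-shift in predict_chroma reproduces the division-free form of (12): the scale factor 2^S is folded into A, then removed by the right shift.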
Once this is done, the prediction chroma samples 103 are predicted from the filtered luma samples 106 using the now determined model parameters α and β.
A flow chart of the chroma prediction process is provided in Figure 2.
Virtual luma samples are computed using 2-tap vertical filtering for the left column outer samples in step 201, and 3-tap horizontal filtering for the top line outer samples in step 202. In addition, virtual inner luma samples are generated by 2-tap vertical filtering in step 203. Intermediate prediction model parameters A1 and A2 are derived in step 204 from the virtual outer luma samples and from the outer chroma samples 207. Finally, the parameters α and β are computed in step 205 and used to predict the chroma samples from the inner virtual luma samples in step 206.
If it is considered that the input samples L' used in the OLS algorithm are corrupted by an additive noise, the OLS estimation will lead to a biased estimation of parameters α and β, as demonstrated in the following equations.
If it is considered that the true model is defined as follows:

C_i = α·L*_i + β (13)

where the L*_i are non-corrupted samples.
The true parameters may be defined as:

α* = ( N·Σ C_i·L*_i − Σ C_i · Σ L*_i ) / ( N·Σ L*_i² − (Σ L*_i)² ) = A1* / A2* (14)

β* = ( Σ C_i − α*·Σ L*_i ) / N (15)

It may be considered that the used signal L' corresponds to the true signal corrupted by an additive noise e:

L'_i = L*_i + e_i (16)

This then leads to the following formulas for the derivation of the intermediate parameters A1 and A2:

A1 = A1* + N·Σ C_i·e_i − Σ C_i · Σ e_i (17)

A2 = A2* + 2N·Σ L*_i·e_i + N·Σ e_i² − 2·Σ L*_i · Σ e_i − (Σ e_i)² (18)

If e is considered as a non-correlated Gaussian noise of mean equal to 0 and of variance σ0², it can easily be demonstrated that:

E[A1] = A1* (19)

E[A2] = A2* + N·(N−1)·σ0² (20)

where E[x] corresponds to the mathematical expectation of the variable x.
Similarly, it can easily be shown that:

A2* = N·Σ L*_i² − ( Σ L*_i )² = N²·σ_L*² (21)

where σ_L*² is the variance of the L* signal.
Finally, it can be estimated that:

E[α] ≈ A1* / E[A2] = α* · N²·σ_L*² / ( N²·σ_L*² + N·(N−1)·σ0² ) (22)
As a consequence, using OLS with noisy samples statistically leads to a biased under-estimation of the true α parameter. This obviously also has consequences for the β parameter, since it is derived from the estimated α.
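This under-estimation is easy to check numerically. In the following sketch the signal mean, the variances and the trial counts are arbitrary choices (not values from the description); the average fitted slope over many noisy trials is compared against the attenuation factor σ_L*² / (σ_L*² + σ0²) that equation (22) predicts for large N:

```python
import random

# Monte-Carlo check of the OLS bias: fitting C = alpha*L + beta from
# noisy inputs L' = L* + e shrinks the slope by roughly
# var(L*) / (var(L*) + var(e)).
random.seed(0)
true_alpha, true_beta = 1.5, 10.0
sigma_l, sigma_e = 8.0, 4.0      # arbitrary signal / noise std-devs

def ols_slope(xs, ys):
    n = len(xs)
    a1 = n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)
    a2 = n * sum(x * x for x in xs) - sum(xs) ** 2
    return a1 / a2

slopes = []
for _ in range(2000):
    l_true = [random.gauss(100.0, sigma_l) for _ in range(64)]
    c = [true_alpha * l + true_beta for l in l_true]        # exact model (13)
    l_noisy = [l + random.gauss(0.0, sigma_e) for l in l_true]  # eq. (16)
    slopes.append(ols_slope(l_noisy, c))

mean_slope = sum(slopes) / len(slopes)
shrink = sigma_l ** 2 / (sigma_l ** 2 + sigma_e ** 2)   # = 0.8 here
print(mean_slope, true_alpha * shrink)                  # both near 1.2
```

With these numbers the average estimated slope sits near 1.2 rather than the true 1.5, matching the predicted shrinkage and illustrating why a correction of α (or of A1/A2) is worthwhile.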
Filtering of the outer samples brings a slight improvement, since it is a low-pass filtering, which reduces the signal variations and statistically reduces the noise impact. However, such filtering brings added complexity, especially the filtering applied to the upper outer line of the block.
Usage of the ordinary least squares (OLS) algorithm, to derive the parameters of the linear model, has a number of drawbacks. For example, if the OLS uses noisy samples, the estimation is biased.
The present invention has been devised to address one or more of the foregoing concerns.
According to a first aspect of the invention there is provided a method of processing components of an image for coding of a group of samples of the image, the image being composed of at least a first type of component and a second type of component different from the first type of component, wherein samples of the second type of component are predictable from samples of the first type of component using a prediction model linking the first type of component to the second type of component, the link being represented in the prediction model by at least one model parameter, the method comprising adjusting, according to a correcting parameter based on a predetermined criterion, the value of at least one intermediate model parameter used to obtain at least one such model parameter of the prediction model, or the value of at least one such model parameter, and predicting a group of samples of the second type from a plurality of filtered samples of the first type using the prediction model and a prediction model parameter based on the adjusted at least one intermediate model parameter value or the adjusted at least one such model parameter.
Consequently, bias introduced by using noisy input samples may be corrected, leading to reduced complexity (the outer luma sample filtering can be simplified) and improved coding performance.
In an embodiment, the value of the at least one intermediate parameter is adjusted in dependence upon the correcting parameter.
In an embodiment, the or one said intermediate parameter corresponds to an intermediate parameter A1 or A2 used to calculate the model parameter α based on the following expression:

α = ( N·Σ C_i·L'_i − Σ C_i · Σ L'_i ) / ( N·Σ L'_i² − (Σ L'_i)² ) = A1 / A2

wherein the prediction model is defined based on the following expression:

C_pred[x,y] = α·L'[x,y] + β

where:
C_pred[x,y] corresponds to a group of samples of the second component;
L'[x,y] corresponds to a group of filtered samples of the first component;
α and β are model parameters of the prediction model;
N corresponds to the number of samples used for the prediction.

In an embodiment, adjusting the value of at least one intermediate parameter comprises increasing the value of intermediate parameter A1 and/or decreasing the value of intermediate parameter A2.
In an embodiment, adjusting the value of at least one intermediate parameter comprises applying the operation A2 = A2 − A2 / 2^k, where k is an integer shifting parameter.
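This adjustment can be sketched as follows; the helper name and the choice k = 4 are purely illustrative. Decreasing the denominator A2 raises the estimated α = A1/A2, counteracting the downward bias analysed above:

```python
def corrected_alpha(a1, a2, k):
    # Apply A2 = A2 - A2 / 2^k using an integer right shift,
    # which increases alpha = A1 / A2 to compensate the noise bias.
    a2_adj = a2 - (a2 >> k)
    return a1 / a2_adj

a1, a2 = 640, 1280
print(a1 / a2, corrected_alpha(a1, a2, 4))   # 0.5 vs a slightly larger value
```

For k = 4 the denominator shrinks by 1/16, so the corrected slope here is 640/1200 instead of 640/1280; larger k gives a smaller correction.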
In an embodiment, the correcting parameter is determined according to noise of the samples of the first type of component used to predict the samples of the second type of component.
In an embodiment, the correcting parameter is determined based on minimisation of a distortion of a DC mode and/or LM mode for prediction of the samples of the second type of component.
In an embodiment, the correcting parameter is determined based on the frame or slice type.
In an embodiment, the correcting parameter is determined based on a quantization parameter.
In an embodiment, the correcting parameter is determined based on the energy of a neighbouring coded residual signal.
In an embodiment, the method includes signalling the correcting parameter in a bitstream used to transmit the coded image samples.
In an embodiment, the first type of component is a luma component and the second type of component is a chroma component. In an embodiment, the group of samples is predicted using N chroma samples on the top border and left border of the chroma samples to be coded, and N luma samples from the top border and left border of co-located luma samples.
In an embodiment, the method includes filtering a set of samples of the first type of component used to predict, by means of the prediction model, samples of the second type of component, wherein the filtering comprises using a filter (i) having integer only coefficients and/or (ii) avoiding using left shift operations and/or (iii) avoiding using right shift operations.
In an embodiment, the set of samples is a first set of samples of at least two sets of samples, wherein the filtering comprises using filters having the same energy for filtering at least two sets of samples. In an embodiment, the said set has a first subset of samples which is filtered using a first filter and also has a second subset of samples which is filtered using a second filter different from the first filter, the first and second filters having the same energy.
A second aspect of the invention provides a device for processing components of an image for coding of a group of samples of the image, the image being composed of at least a first type of component and a second type of component different from the first type of component, wherein samples of the second type of component are predictable from samples of the first type of component using a prediction model linking the first type of component to the second type of component, the link being represented in the prediction model by at least one model parameter, the device comprising corrective means for adjusting, according to a correcting parameter based on a predetermined criterion, the value of at least one intermediate parameter used to obtain at least one such model parameter of the prediction model, or the value of at least one such model parameter, and prediction means for predicting a group of samples of the second type from a plurality of filtered samples of the first type using the prediction model and a prediction model parameter based on the adjusted at least one intermediate model parameter value or the adjusted at least one such model parameter.
In an embodiment, the corrective means is configured to adjust the value of the at least one intermediate parameter in dependence upon the correcting parameter.
In an embodiment, the or one said intermediate parameter corresponds to an intermediate parameter A1 or A2 used to calculate the model parameter α based on the following expression:

α = ( N·Σ C_i·L'_i − Σ C_i · Σ L'_i ) / ( N·Σ L'_i² − (Σ L'_i)² ) = A1 / A2

wherein the prediction means is configured to predict based on the prediction model according to the following expression:

C_pred[x,y] = α·L'[x,y] + β

where:
C_pred[x,y] corresponds to a group of samples of the second component;
L'[x,y] corresponds to a group of filtered samples of the first component;
α and β are model parameters of the prediction model;
N corresponds to the number of samples used for the prediction.

In an embodiment, the corrective means is configured to adjust the value of at least one intermediate parameter by increasing the value of intermediate parameter A1 and/or decreasing the value of intermediate parameter A2.
In an embodiment, the corrective means is configured to adjust the value of at least one intermediate parameter by applying the operation A2 = A2 − A2 / 2^k, where k is an integer shifting parameter.
In an embodiment, the device includes means for determining the correcting parameter in dependence on the noise of the samples of the first type of component used to predict the samples of the second type of component.
In an embodiment, the device includes means for determining the correcting parameter in dependence on minimisation of a distortion of a DC mode and/or LM mode for prediction of the samples of the second type of component.
In an embodiment, the device includes means for determining the correcting parameter in dependence on the frame or slice type.
In an embodiment, the device includes means for determining the correcting parameter in dependence on a quantization parameter.
In an embodiment, the device includes means for determining the correcting parameter in dependence on the energy of a neighbouring coded residual signal.
In an embodiment, the device includes means for signalling the correcting parameter in a bitstream used to transmit the coded image samples.
In an embodiment, the first type of component is a luma component and the second type of component is a chroma component. In an embodiment, the prediction means is configured to predict the group of samples using N chroma samples on the top border and left border of the chroma samples to be coded, and N luma samples from the top border and left border of co-located luma samples.
In an embodiment, the device includes filtering means for filtering a set of samples of the first type of component used by the prediction means to predict, by means of the prediction model, samples of the second type of component, wherein the filtering means comprises a filter (i) having integer only coefficients and/or (ii) avoiding using left shift operations and/or (iii) avoiding using right shift operations.
In an embodiment, the set of samples is a first set of samples of at least two sets of samples, wherein the filtering means comprises filters having the same energy for filtering at least two sets of samples. In an embodiment, the filtering means comprises a first filter for filtering a first subset of samples of the said set of samples and a second filter different from the first filter for filtering a second subset of samples of the said set, the first and second filters having the same energy.

According to a third aspect of the invention there is provided a method of processing components of an image for coding or decoding of a group of samples of the image, the image being composed of at least a first type of component and a second type of component different from the first type of component, wherein samples of the second type of component are predictable from samples of the first type of component using a prediction model linking the first type of component to the second type of component, the method comprising: filtering a set of samples of the first type of component used to predict, by means of the prediction model, samples of the second type of component, wherein the filtering comprises using a filter (i) having integer only coefficients and/or (ii) avoiding using left shift operations and/or (iii) avoiding using right shift operations; and predicting a group of samples of the second type of component from at least the set of filtered samples of the first type of component using the prediction model.
Consequently the filtering operation is simplified and more efficient coding may be provided.
In an embodiment the said set of samples is a first set of samples of at least two sets of samples, wherein the filtering comprises using filters having the same energy for filtering at least two such sets of samples.
In an embodiment the said set of samples has a first subset of samples which is filtered using a first filter and a second subset of samples which is filtered using a second filter different from the first filter, the first and second filters having the same energy.
In an embodiment the first type of component is a luma component and the second type of component is a chroma component and wherein the group of samples is predicted using N chroma samples on the top border and left border of the chroma samples to be coded, and M luma samples from the top border and left border of co-located luma samples.
In an embodiment the first subset includes luma samples from the left border, the second subset includes luma samples from the top border of the group of collocated luma samples, and the second set includes luma samples within the group of collocated luma samples.
In an embodiment the first subset of samples is filtered by a filtering operation based on the following expression: L'[x,y] = RecL[-1,2y] + RecL[-1,2y+1] where RecL corresponds to reconstructed luma samples of the first subset.
In an embodiment the second subset of samples is filtered by a filtering operation based on the following expression: L'[x,y] = RecL[2x-1,-1] + RecL[2x+1,-1] where RecL corresponds to reconstructed luma samples of the second subset.
In an embodiment the second subset of samples is filtered by a filtering operation based on the following expression: L'[x,y] = (RecL[2x-1,-1] + 2RecL[2x,-1] + RecL[2x+1,-1]) >> 1 where RecL corresponds to reconstructed luma samples of the second subset. In an embodiment the second set of samples is filtered by a filtering operation based on the following expression: L'[x,y] = RecL[2x,2y] + RecL[2x,2y+1] where RecL corresponds to reconstructed luma samples of the second set.
A fourth aspect of the invention provides a device for processing components of an image for coding or decoding of a group of samples of the image, the image being composed of at least a first type of component and a second type of component different from the first type of component, wherein samples of the second type of component are predictable from samples of the first type of component using a prediction model linking the first type of component to the second type of component, the device comprising: filtering means for filtering a set of samples of the first type of component used to predict, by means of the prediction model, samples of the second type of component, wherein the filtering means comprises a filter (i) having integer only coefficients and/or (ii) avoiding using left shift operations and/or (iii) avoiding using right shift operations; and prediction means for predicting a group of samples of the second type from at least the set of filtered samples of the first type using the prediction model.
In an embodiment the said set of samples is a first set of samples of at least two sets of samples, wherein the filtering means comprises filters having the same energy for filtering at least two such sets of samples.
In an embodiment the said set of samples has a first subset of samples and a second subset of samples and the filtering means comprises a first filter for filtering the first subset and a second filter different from the first filter for filtering the second subset, the first and second filters having the same energy.
In an embodiment the first type of component is a luma component and the second type of component is a chroma component and wherein the prediction means is configured to predict a group of samples using N chroma samples on the top border and left border of the chroma samples to be coded, and M luma samples from the top border and left border of co-located luma samples.
In an embodiment the first subset includes luma samples from the left border, the second subset includes luma samples from the top border of the group of collocated luma samples, and the second set includes luma samples within the group of collocated luma samples.
In an embodiment the first filter is configured to filter by a filtering operation based on the following expression: L'[x,y] = RecL[-1,2y] + RecL[-1,2y+1] where RecL corresponds to reconstructed luma samples of the first subset.
In an embodiment the second filter is configured to filter by a filtering operation based on the following expression: L'[x,y] = RecL[2x-1,-1] + RecL[2x+1,-1] where RecL corresponds to reconstructed luma samples of the second subset.
In an embodiment the second filter is configured to filter by a filtering operation based on the following expression: L'[x,y] = (RecL[2x-1,-1] + 2RecL[2x,-1] + RecL[2x+1,-1]) >> 1 where RecL corresponds to reconstructed luma samples of the second subset. In an embodiment the filtering means comprises a further filter for filtering the second set of samples by a filtering operation based on the following expression: L'[x,y] = RecL[2x,2y] + RecL[2x,2y+1] where RecL corresponds to reconstructed luma samples of the second set.
A further aspect of the invention provides a method of encoding an image including a step of processing components of the image for encoding of a sample of the image, according to any one of the preceding embodiments, and a step of encoding the image.
A further aspect of the invention provides a method of decoding an image including a step of processing components of the image for decoding of a sample of the image, according to any of the preceding embodiments, and a step of decoding the image.
A further aspect of the invention provides a device for encoding an image comprising a device for processing components of the image for encoding of a sample of the image according to any preceding embodiment.
A further aspect of the invention provides a device for decoding an image comprising a device for processing components of the image for decoding of a sample of the image according to any preceding embodiment.
A yet further aspect of the invention provides a signal carrying an information dataset for an image represented by a video bitstream, the image being composed of at least a first type of component and a second type of component different to the first type of component, wherein samples of the second type of component are predictable from samples of at least the first type of component using a prediction model linking the first type of component to the second type of component, the link being represented by at least one model parameter, at least one such model parameter being dependent on at least one intermediate model parameter, the information dataset comprising: corrective information representative of a correcting parameter for adjusting the at least one such model parameter or the at least one intermediate model parameter.
Another aspect of the invention provides a video bitstream representative of an image composed of at least a first type of component and a second type of component different to the first type of component, wherein samples of the second type of component are predictable from samples of at least the first type of component using a prediction model linking the first type of component to the second type of component, the link being represented by at least one model parameter dependent on at least one intermediate model parameter; and further including a signal according to the preceding aspect.
A further aspect of the invention provides a method of processing components of an image for decoding of a sample of the image, the image being composed of at least a first type of component and a second type of component different to the first type of component, wherein samples of the second type of component are predictable from samples of at least the first type of component using a prediction model linking the first type of component to the second type of component, the link between the first type of component and the second type of component being represented by at least one model parameter of the prediction model, dependent on at least one intermediate model parameter, the method comprising: receiving a bitstream representative of the image and corrective information representative of a correcting parameter for adjusting at least one such model parameter or one such intermediate model parameter; decoding samples of the first type; adapting the at least one such model parameter or intermediate model parameter based on the corrective information; and predicting a sample of the second type from at least one decoded sample of the first type using the prediction model and the adapted at least one model parameter or intermediate model parameter value.
Another aspect of the invention provides a decoding device for processing components of an image for decoding of a sample of the image, the image being composed of at least a first type of component and a second type of component different to the first type of component, wherein samples of the second type of component are predictable from samples of at least the first type of component using a prediction model linking the first type of component to the second type of component, the link between the first type of component and the second type of component being represented by at least one parameter of the prediction model dependent on at least one intermediate parameter, the device comprising: receiving means for receiving a bitstream representative of the image and corrective information representative of a corrective parameter for adjusting the value of at least one such model parameter or intermediate model parameter; corrective means for adapting the at least one such model parameter value or intermediate model parameter value, based on the corrective information; and prediction means for predicting a sample of the second type from a sample of the first type using the prediction model and the adapted at least one model parameter value or intermediate model parameter value.
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system". Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1 schematically illustrates examples of sampling of luma samples and chroma samples for prediction of a chroma block which may be applied in the context of one or more embodiments of the invention;
Figure 2 is a flow chart illustrating steps of a method for predicting chroma samples from luma samples in the prior art;
Figure 3 is a schematic diagram of a data communication network in which one or more embodiments of the invention may be implemented;
Figure 4 is a flow chart illustrating steps of a method for predicting chroma samples from luma samples by a chroma intra prediction process according to a first embodiment of the invention;
Figure 5 is a flow chart illustrating steps of a method for deriving a correction parameter according to an embodiment of the invention;
Figure 6 is a flow chart illustrating steps of a method for predicting chroma samples from luma samples by a chroma intra prediction process according to a further embodiment of the invention;
Figure 7 is a block diagram of an encoding device or a decoding device in which embodiments of the invention may be implemented.
Figure 3 illustrates a data communication system in which one or more embodiments of the invention may be implemented. Although a streaming scenario is considered here, the data communication can be performed using for example a media storage device such as an optical disc. The data communication system comprises a transmission device, in this case a server 1001, which is operable to transmit data packets of a data stream to a receiving device, in this case a client terminal 1002, via a data communication network 1000. The data communication network 1000 may be a Wide Area Network (WAN) or a Local Area Network (LAN). Such a network may be for example a wireless network (WiFi / 802.11a, b or g), an Ethernet network, an Internet network or a mixed network composed of several different networks. In a particular embodiment of the invention the data communication system may be a digital television broadcast system in which the server 1001 sends the same data content to multiple clients.
The data stream 1004 provided by the server 1001 may be composed of multimedia data representing video and audio data. Audio and video data streams may, in some embodiments of the invention, be captured by the server 1001 using a microphone and a camera respectively. In some embodiments data streams may be stored on the server 1001 or received by the server 1001 from another data provider, or generated at the server 1001.
The server 1001 is provided with an encoder for encoding video and audio streams, in particular to provide a compressed bitstream for transmission that is a more compact representation of the data presented as input to the encoder.
In order to obtain a better ratio of the quality of transmitted data to quantity of transmitted data, the compression of the video data may be for example in accordance with the HEVC format or H.264/AVC format.
The client 1002 receives the transmitted bitstream and decodes it to reproduce video images on a display device and the audio data via a loudspeaker.
In one or more embodiments of the invention an encoded video image is transmitted with a brightness component (luma) and two colour components (chroma). The digital representation of the video signal thus includes a luma signal (Y), representative of brightness, and colour difference (or chroma) signals Cb (blue-Y) and Cr (red-Y).
It will be appreciated that while the detailed examples relate to a YUV model the invention is not limited thereto, and may be applied to other models such as RGB, or for encoding any image composed of several colour components, at least one colour component being considered as a reference colour component, the other colour components being dependently coded based on this reference colour component.
In at least one embodiment of the invention a luma-based chroma prediction mode in accordance with HEVC techniques may be applied to predict chroma samples from luma samples which have already been reconstructed. In the case of different resolutions of luma and chroma an interpolation process is applied to luma samples to generate interpolated luma samples at the same resolution as the chroma samples to be predicted.
In the context of embodiments of the invention in which one or more blocks of chroma samples is predicted, the term 'inner' samples refers to the samples (luma or chroma) that are inside the block. The term 'outer' samples refers to the samples (luma or chroma) that are outside the block, in particular the samples of the top line and left column neighbouring the block.
When coding a chroma block as intra (i.e. it can only depend on previously coded/decoded data from current image), particular prediction modes can be applied, e.g. a directional bilinear prediction or a prediction by an average value. In all cases, the outer borders of the chroma block, also referred to as a Coding Unit (hereafter CU), located inside of previously decoded CUs, are used. Those borders therefore contain decoded pixel data i.e. samples, to which particular filtering may have been applied.
The prediction mode for predicting chroma samples from filtered luma samples according to various embodiments of the invention relies on an affine model, as defined by expression (5), linking prediction chroma samples Cprej[x,y] to previously coded and filtered luma samples L'[x,y].
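The affine model of expression (5) can be sketched as follows. This is an illustrative simplification (plain-number α and β and a simple clip to the sample range), not the fixed-point derivation used in the HEVC design:

```python
def predict_chroma(l_virtual, alpha, beta, bit_depth=8):
    """Sketch of expression (5): Cpred[x,y] = alpha * L'[x,y] + beta,
    where l_virtual holds the filtered (virtual) luma samples L'.
    The result is clipped to the valid chroma sample range."""
    max_val = (1 << bit_depth) - 1
    return [[int(min(max_val, max(0, alpha * l + beta))) for l in row]
            for row in l_virtual]
```

For example, with alpha = 2 and beta = 3, a virtual luma value of 10 predicts a chroma value of 23, and values exceeding the sample range are clipped.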
Embodiments of the invention set out to simplify the filtering process applied to top line samples and to correct the bias brought by using noisy input samples. The correction is done in various embodiments of the invention by modifying intermediate parameter values involved in the OLS algorithm. This helps to reduce complexity with equal or even slightly improved coding performance.
As demonstrated in the background of the invention, using noisy samples leads to a bias in the estimation of the α parameter of the prediction model of expression (5). This bias is in particular due to the over-estimation of the intermediate model parameter A2 used for deriving model parameter α in accordance with expression (10).
Some embodiments of the invention correct this over-estimation by reducing the value of intermediate model parameter A2. Of course it will be appreciated that alternatively, the correction could be achieved by modifying the value of intermediate model parameter Al (increasing it) of expression (10), or even by modifying both intermediate parameters Al and A2.
In one particular embodiment of the invention, to ease the application of the invention in hardware or software implementations, the solution consists of applying the following updating process to the intermediate model parameter A2: (23) with k being a shifting integer parameter that enables the reduction factor applied to the intermediate model parameter A2 to be controlled.
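Expression (23) is not reproduced in this text; a shift-based update consistent with the surrounding description (an integer-only reduction of A2 controlled by the shifting parameter k) could take the following assumed form, shown here purely as an illustrative sketch:

```python
def correct_a2(a2, k):
    """Assumed form of the (unreproduced) expression (23):
    A2' = A2 - (A2 >> k), i.e. a reduction of A2 by a factor
    (1 - 2**-k) using only integer subtraction and one right shift."""
    return a2 - (a2 >> k)
```

Larger k values reduce A2 less; for instance k = 2 scales A2 by 3/4 and k = 1 scales it by 1/2.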
Several different methods for deriving the reduction factor or the shifting parameter k may be implemented in the context of the present invention.
A method of predicting chroma samples from luma samples according to at least one embodiment of the invention will now be described with reference to Figure 4. A number of elementary blocks with the same functional behaviour as corresponding elementary blocks illustrated in Figure 2 are depicted. In comparison to Figure 2, a further step 407 is included, corresponding to a step of correction of the intermediate model parameters A1/A2. In step 401 virtual luma samples are computed using vertical filtering of 2 taps for the left column outer samples, and in step 402 luma samples are computed using horizontal filtering of 3 taps for the top line outer samples.
Moreover virtual inner luma samples are generated in step 403 by vertical filtering of 2 taps. Intermediate model parameters A1 and A2 are derived in step 404 from the virtual outer luma samples and from the outer chroma samples. In step 407 intermediate parameters A1 and A2 are corrected according to an embodiment of the invention. Finally the parameters α and β are computed in step 405 and used to predict the chroma samples using the inner virtual luma samples in step 406.
The reduction factor parameter k used to correct intermediate model parameter A2 can be set by default or signaled in the bitstream, at sequence, picture, slice, LCU or CU level. In a particular embodiment of the invention the reduction parameter is signaled at the slice level. An integer value is explicitly coded in the slice header, and may be further used for all coding units of the same slice that are using the luma-based chroma prediction mode.
In one particular embodiment of the invention for signalling and deriving the reduction factor parameter, the shifting parameter k, as previously defined, is signaled in the slice header. The estimation of the shifting parameter is actually performed based on the previously encoded/decoded frame, since it is considered that the shifting parameter k should not vary significantly from one frame to another within the sequence.
A method for deriving a shifting correction parameter k according to an embodiment of the invention will now be described with reference to Figure 5. This estimation process is implemented at the encoder side.
Consider the current frame It at time index t to be encoded, using kt as the shifting parameter.
In step 501, it is determined whether or not all blocks of the frame have been processed. Until all blocks of the frame have been processed, i.e. if it is determined that a current block is not the final block, the following process is applied.
In step 502 a chroma sample prediction using the DC mode (a constant value derived as the average of the surrounding chroma samples) is performed and its corresponding distortion is computed as:
d_DC = Σ_i (C_i − DC)²    (24)
In step 503 a chroma sample prediction using the LM-mode with kt as shifting parameter is performed. Its corresponding distortion is computed as:
d_kt = Σ_i (C_i − clip(α_kt · L'_i + β_kt))²    (25)
In step 504, it is determined whether d_kt < λ·d_DC, λ being a predefined real value (in practice it is set to 0.8). If it is determined that d_kt < λ·d_DC, the LM prediction mode is considered as being a good candidate and the different possible k values are evaluated in step 505 as follows, including the case of applying no correction to the intermediate parameter:
a. A test of the different candidates k = M0 to M1 (M0 and M1 corresponding to the limits of the range of authorized shifting parameters) is performed.
b. The chroma sample prediction using the LM-mode with k as shifting parameter is performed. Its corresponding distortion is computed as:
d_k = Σ_i (C_i − clip(α_k · L'_i + β_k))²    (26)
c. The chroma sample prediction using the LM-mode without any correction is also tested. Its corresponding distortion is computed as:
d_0 = Σ_i (C_i − clip(α_0 · L'_i + β_0))²    (27)
It may be noted that kt is covered by one of the cases b. and c. (kt is either equal to 0 or in the range M0..M1).
The distortion of each possible k is accumulated in a total distortion accumulator in step 506 as:
D_k = D_k + d_k    (28)
Once the entire frame has been processed, the 'optimal' k is defined in step 507 as the k value minimizing the total distortion accumulator D_k. This value is encoded in the slice header of the next frame to be processed, and will be used for the LM prediction mode of the next processed frame.
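The selection loop of steps 501 to 507 can be sketched as follows. The interface here (per-block distortions supplied as plain numbers) is an illustrative device, not part of the described encoder:

```python
def select_shift_parameter(blocks, lam=0.8):
    """Encoder-side derivation of the 'optimal' shifting parameter k.

    `blocks` is an iterable of (d_dc, d_kt, per_k) tuples, one per block:
    d_dc is the DC-mode distortion (24), d_kt the LM-mode distortion with
    the current kt (25), and per_k maps each candidate k (including 0,
    i.e. no correction) to its LM-mode distortion (26)/(27).
    """
    total = {}                                  # total distortion D_k (28)
    for d_dc, d_kt, per_k in blocks:
        if d_kt < lam * d_dc:                   # step 504: LM is a good candidate
            for k, d in per_k.items():          # step 505: test candidates
                total[k] = total.get(k, 0) + d  # step 506: accumulate
    # step 507: the k minimising the total distortion accumulator
    return min(total, key=total.get) if total else 0
```

Blocks failing the λ·d_DC test contribute nothing to the accumulators, matching the flow of Figure 5.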
Alternatively, the reduction factor parameter can be derived on-the-fly, depending on coding context data such as:
- the frame or slice type;
- a quantization parameter: as the quantization noise increases with the quantization parameter (QP) value, the A2 reduction parameter should also increase with the QP value;
- the energy of the surrounding coded residual signal.
The solution provided by the embodiments of the invention described above enables the OLS estimation bias due to noisy luma samples to be corrected. The solution is simple, straightforward and actually allows the filtering applied to the outer luma samples used for the OLS estimation to be simplified.
For example, in the current HEVC design, luma samples from the outer top line of the block are filtered by a 3-tap filter of coefficients [1,2,1] / 4:
L'[x,y] = (RecL[2x-1,-1] + 2RecL[2x,-1] + RecL[2x+1,-1]) >> 2    (29)
A further embodiment of the invention proposes using a simpler implementation for the generation of the L' virtual samples, thereby reducing the number of required operations. As mentioned previously, two sets of virtual samples L' are considered. Set 1 corresponds to the outer samples. Set 1 is made of two subsets, a subset made up of the top line samples 105 of the luma block, and another subset made up of the left column samples 104 of the luma block. Set 2 corresponds to the inner samples (that is, those inside the luma block 106). Parameters α and β are determined from set 1. Set 2 is used to predict the chroma samples using the estimated parameters α and β.
L' virtual samples of the 2 subsets of set 1 must be of the same range since they are both input data of the same type, used in the OLS process to derive the parameters α and β.
The following process is applied:
- For the computation of the parameters α and β, the virtual samples L' of set 1 are generated as follows:
o L' values on left border 104 forming one subset are obtained through the following filtering:
L'[x,y] = RecL[-1,2y] + RecL[-1,2y+1]    (30)
instead of
L'[x,y] = (RecL[-1,2y] + RecL[-1,2y+1]) >> 1    (31)
In equation (30), RecL is filtered by a 2-tap filter with integer coefficients [1,1], i.e. the coefficients of RecL in equation (30). The sum of the absolute values of the coefficients, referred to as the filter energy, is equal to 2 (i.e. 1+1). In equation (31), RecL is filtered by a 2-tap filter with coefficients [1,1]. Before the right shift operation, the energy is equal to 2. The right shift operation then corresponds to a division by 2, so the filter coefficients become non-integer [0.5, 0.5], with filter energy equal to 1 (i.e. 0.5 + 0.5). Compared to the HEVC design of the prior art of equation (31), the new filter of equation (30) avoids one right shift operation for the H outer left samples.
o L' values on top border 105 forming another subset of set 1 are obtained through the following filtering:
L'[x,y] = RecL[2x-1,-1] + RecL[2x+1,-1]    (32)
instead of
L'[x,y] = (RecL[2x-1,-1] + 2RecL[2x,-1] + RecL[2x+1,-1]) >> 2    (33)
In equation (32), RecL is filtered by a 2-tap filter with integer coefficients [1,1], with filter energy equal to 2. In equation (33), RecL is filtered by a 3-tap filter with coefficients [1,2,1]. Before the right shift operation, the energy is equal to 4. The right shift operation then corresponds to a division by 4, so the filter coefficients are actually non-integer [0.25, 0.5, 0.25], with filter energy equal to 1. Compared to the HEVC design of the prior art, the filter [1,2,1]/4 has been replaced by a [1,0,1] filter. One right shift operation has been removed for the W outer top samples. Also one addition and one multiplication by 2 (or left shift by 1) are removed.
It may be noted that the left or right shift operations referred to act on the sample values and not on the positions of the samples given by x, y etc. It should be noted that the filters used for generating L' values on left border 104 and on top border 105 are of the same energy, which is desirable for generating samples of the same range.
- The values of L' virtual samples of set 2 106 used in the prediction process are obtained through the following filtering:
L'[x,y] = RecL[2x,2y] + RecL[2x,2y+1]    (34)
instead of
L'[x,y] = (RecL[2x,2y] + RecL[2x,2y+1]) >> 1    (35)
In equation (34), RecL is filtered by a 2-tap filter with integer coefficients [1,1] with filter energy equal to 2. In equation (35), RecL is filtered by a 2-tap filter with coefficients [1,1]. Before the right shift operation, the energy is equal to 2. The right shift operation then corresponds to a division by 2, so the filter coefficients are actually non-integer [0.5, 0.5], with filter energy equal to 1. Compared to the HEVC design of the prior art, one right shift operation has been removed for the W*H inner samples.
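The three simplified filters of equations (30), (32) and (34) can be gathered into one sketch. The accessor `rec_l(x, y)` is an illustrative device for addressing reconstructed luma samples, with x = -1 or y = -1 reaching the outer left column or top line, matching the RecL[.,.] notation:

```python
def virtual_luma_samples(rec_l, W, H):
    """Generate the three L' sample sets with the simplified filters:
    2-tap [1,1] filters with integer-only coefficients and no shift
    operations.  Left border: eq. (30); top border: eq. (32);
    inner samples: eq. (34)."""
    left = [rec_l(-1, 2*y) + rec_l(-1, 2*y + 1) for y in range(H)]      # (30)
    top = [rec_l(2*x - 1, -1) + rec_l(2*x + 1, -1) for x in range(W)]   # (32)
    inner = [[rec_l(2*x, 2*y) + rec_l(2*x, 2*y + 1) for x in range(W)]
             for y in range(H)]                                         # (34)
    return left, top, inner
```

Each output sample costs exactly one addition, consistent with the operation counts of Table 1.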
With reference to Figure 1, the samples that are filtered are:
1 - the W top luma samples 105;
2 - the H left luma samples 104;
3 - the WxH inner luma samples 106.
In a particular embodiment shift operations for 1, 2 and 3 can be avoided. In other embodiments the shift operation will be avoided for samples 3 only, which already provides a significant reduction of the number of operations (since WxH shift operations are removed, and only H + W are maintained).
Table 1 shows the numbers of operations required in a HEVC design of the prior art compared to the number of operations required for the method according to the embodiment of the invention. A significant reduction of operations is enabled by the embodiment of the invention.
                      Upper line       Left column      Inner samples      Total
HEVC                  2W additions     H additions      W*H additions      2W+H+W*H additions
(prior art)           W left shifts    H right shifts   W*H right shifts   W left shifts
                      W right shifts                                       W+H+W*H right shifts
Proposed              W additions      H additions      W*H additions      W+H+W*H additions
embodiments

Table 1
This simplification results in L' virtual luma samples that have twice the amplitude compared to those generated with the HEVC design of the prior art. Consequently, the rest of the α derivation process must take this larger amplitude into account.
Several solutions can be used:
- Once A1 and A2 have been computed:
o A1 = A1/2 and A2 = A2/4, or simply A2 = A2/2;
o Cpred[x,y] = ((A · L'[x,y]) >> (S+1)) + β    (36)
Note that this latter step does not involve any increased complexity compared to the HEVC design of the prior art; the only difference is the shifting by (S+1) instead of S.
- Once A1 and A2 have been computed:
o Adapt the look-up table so that the A parameter is of the right scale, and keep the rest of the process similar.
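The compensation by an extra shift in expression (36) can be checked on a small example. The names here are illustrative only:

```python
def predict_sample(a_param, l_prime, beta, s):
    """Expression (36): the final shift is (S + 1) instead of S,
    because the simplified filters of equations (30)/(32)/(34)
    produce L' values twice as large as in the prior-art design."""
    return ((a_param * l_prime) >> (s + 1)) + beta
```

Since the simplified L' is twice the prior-art L', shifting by one extra bit yields the same predicted chroma value, at no extra cost beyond the changed shift amount.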
In the previous embodiment of the invention, the filters for generating set 1 and set 2 L' virtual samples all use integer coefficients and no final right shift operations. Their energy is equal to 2. In some embodiments there may be cases where shift operations are still needed for some of the outer samples.
In any case, it is desirable that the two filters involved in set 1 for generating the top border 105 and left border 104 L' virtual samples have the same energy.
This energy can be different from the energy of the filter involved in the set 2 L' virtual samples.
For instance, only sample set 2 filtering may be simplified according to equation (34). This can be preferred to generate a smoother signal for the computation of parameters α and β. In this case the filtering for sample set 1 may be kept similar to that of the design of the prior art (filtering based on equations (31) and (33)). The energy of the filters for set 1 is equal to 1. The filters in this case do not have integer-only coefficients. The energy of the filter for set 2 is equal to 2. This filter has integer-only coefficients. Thus in this example, filters for sample set 1 have energy equal to 1 and the filtering for generating L' values on top border 105 and left border 104 is not simplified.
Only filtering of L' values of inner samples 106 is simplified and does not use right-shift.
In another embodiment, a simplification involving integer-only and no right-shift only applies to left border 104 and inner 106 L' virtual samples. The energy of these filters is equal to 2.
For the top border 105, the following applies:
o L' values on top border 105 are obtained through the following filtering:
L'[x,y] = (RecL[2x-1,-1] + 2RecL[2x,-1] + RecL[2x+1,-1]) >> 1    (37)
instead of
L'[x,y] = (RecL[2x-1,-1] + 2RecL[2x,-1] + RecL[2x+1,-1]) >> 2    (38)
In equation (37), RecL is filtered by a 3-tap filter with integer coefficients [1,2,1], with a right shift of 1. The filter energy is equal to 2, with effective coefficients [0.5, 1, 0.5].
The energy is the same as the one of the left border filter.
In another embodiment, the L' virtual samples may be generated as follows:
* For generating the left border subset 104 of set 1, equation (30) is used. The filter energy is equal to 2, and the filter has integer-only coefficients and no right shift operation is needed.
* For generating the top border subset 105 of set 1, equation (32) is applied. The filter energy is equal to 2, and the filter has integer-only coefficients and no right shift operation is needed.
* For generating the set 2 samples, equation (35) is applied. The filter energy is equal to 1; the filter uses non-integer coefficients and one right shift operation.
In another embodiment, the L' virtual samples are generated as follows:
* For generating the left border subset 104 of set 1, equation (30) is used. The filter energy is equal to 2, and the filter has integer-only coefficients and no right shift operation is needed.
* For generating the top border subset 105 of set 1, equation (37) is applied. The filter energy is equal to 2, and the filter has non-integer coefficients and one right shift operation is needed.
* For generating the set 2 samples, equation (35) is applied. The filter energy is equal to 1; the filter uses non-integer coefficients and one right shift operation.
In all the above-mentioned embodiments relating to the simplification of L' virtual sample generation, the two filters involved in set 1 for generating the top border 105 and left border 104 L' virtual samples have the same energy, which is desirable for the OLS computation. However the energy of the filter involved in filtering the samples of set 2 can be different.
A method for simplifying luma sample filtering for a chroma prediction process according to an embodiment of the invention is illustrated in Figure 6.
Compared to Figures 3 and 4, the main differences relate to the simplification of steps 601, 602, 603, which aim at generating the virtual luma samples. The computation of intermediate parameters in step 604 is also slightly modified. Finally, the chroma prediction block 606 is modified according to the new equation (36) described above.
It will be appreciated that in some embodiments of the invention the step 607 of correcting the intermediate parameters is optional. The simplification provided by the simplified filtering described above can be applied without applying the step of correcting the intermediate parameter. This step can however improve the coding performance, by correcting the bias potentially introduced by the use of a simpler luma sample filtering.
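To make the role of step 607 concrete, here is a hedged Python sketch (the names a1, a2 and the function are illustrative, not from the patent) of deriving the model parameters from N border sample pairs, with the optional correction A2 = A2 - A2/2^k applied before the division. Decreasing A2 increases the magnitude of the slope a, which is how the correction can compensate the bias introduced by simpler luma filtering:

```python
def lm_parameters(c, lp, k=None):
    """OLS parameters of Cpred[x,y] = a*L'[x,y] + b from N chroma
    samples c and N filtered luma samples lp.

    k (optional) is the integer shifting parameter of the correction
    A2 -> A2 - A2/2**k performed in step 607.
    """
    n = len(c)
    a1 = n * sum(ci * li for ci, li in zip(c, lp)) - sum(c) * sum(lp)
    a2 = n * sum(li * li for li in lp) - sum(lp) ** 2
    if k is not None:
        a2 -= a2 >> k            # optional bias correction (step 607)
    a = a1 / a2 if a2 else 0.0
    b = (sum(c) - a * sum(lp)) / n
    return a, b
```

With perfectly correlated samples the uncorrected parameters recover the exact linear relation; applying the correction with a small k then raises the slope slightly above its uncorrected value.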
Figure 7 illustrates a block diagram of a receiving device, such as the client terminal 1002 of Figure 3, or of a transmitting device, such as the server 1001 of Figure 3, which may be adapted to implement embodiments of the invention.
The method of Figures 4, 5 or 6 and further embodiments of the invention may be implemented in the device in the case of a transmitting device including an encoder, or a receiving device comprising a decoder for decoding the image.
The device 800 comprises a central processing unit (CPU) 801 capable of executing instructions from program ROM 803 on powering up of the device 800, and instructions relating to a software application from a main memory 802 after powering up of the device 800. The main memory 802 may be for example of a Random Access Memory (RAM) type which functions as a working area of the CPU 801, and the memory capacity thereof can be expanded by an optional RAM connected to an expansion port (not illustrated).
Instructions relating to the software application may be loaded into the main memory 802 from a hard-disc (HD) 806 or the program ROM 803 for example.
Such a software application, when executed by the CPU 801, causes the steps of the method of embodiments of the invention to be performed on the device.
The device 800 further includes a network interface 804 enabling connection of the device 800 to the communication network. The software application when executed by the CPU is adapted to receive data streams through the network interface 804 from other devices connected to the communication network in the case of a receiving device and to transmit data streams through the network interface 804 in the case of a transmitting device.
The device 800 further comprises a user interface 805 to display information to, and/or receive inputs from, a user.
Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to those embodiments, and modifications which lie within the scope of the present invention will be apparent to a person skilled in the art.
For example, although embodiments of the present invention have been described with reference to the prediction of chroma components from luma components, the invention may be applied to other types of image components.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims. In particular, the different features from different embodiments may be interchanged, where appropriate.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.

Claims (1)

  1. <claim-text>CLAIMS 1. A method of processing components of an image for coding of a group of samples of the image, the image being composed of at least a first type of component and a second type of component different from the first type of component, wherein samples of the second type of component are predictable from samples of the first type of component using a prediction model linking the first type of component to the second type of component, the link being represented in the prediction model by at least one model parameter, the method comprising adjusting, according to a correcting parameter based on a predetermined criterion, the value of at least one intermediate model parameter used to obtain at least one such model parameter of the prediction model, or the value of at least one such model parameter, and predicting a group of samples of the second type from a plurality of filtered samples of the first type using the prediction model and a prediction model parameter based on the adjusted at least one intermediate model parameter value or the adjusted at least one such model parameter.</claim-text> <claim-text>2. A method according to claim 1 wherein the value of the at least one intermediate model parameter is adjusted in dependence upon the correcting parameter.</claim-text> <claim-text>3. A method according to claim 2, wherein the or one said intermediate model parameter corresponds to an intermediate parameter A1 or A2 used to calculate the model parameter α based on the following expression: α = (N·Σ C_i·L'_i - (Σ C_i)·(Σ L'_i)) / (N·Σ L'_i² - (Σ L'_i)²) = A1/A2, the sums being taken over i = 1 to N, wherein the prediction model is defined based on the following expression Cpred[x,y] = α·L'[x,y] + β, where: Cpred[x,y] corresponds to a group of samples of the second component, L'[x,y] corresponds to a group of filtered samples of the first component, α and β are model parameters of the prediction model, and N corresponds to the number of samples used for the prediction. 4.
A method according to claim 3 wherein adjusting the value of at least one intermediate model parameter comprises increasing the value of intermediate parameter A1 and/or decreasing the value of intermediate parameter A2. 5. A method according to claim 3 wherein adjusting the value of at least one intermediate model parameter comprises applying the operation A2 = A2 - A2/2^k, where k is an integer shifting parameter. 6. A method according to any preceding claim wherein the correcting parameter is determined according to noise of the samples of the first type of component used to predict the samples of the second type of component. 7. A method according to any preceding claim wherein the correcting parameter is determined based on minimisation of a distortion of a DC mode and/or LM mode for prediction of the samples of the second type of component. 8. A method according to any preceding claim wherein the correcting parameter is determined based on the frame or slice type. 9. A method according to any preceding claim wherein the correcting parameter is determined based on a quantization parameter. 10. A method according to any preceding claim wherein the correcting parameter is determined based on the energy of a neighbouring coded residual signal. 11. A method according to any preceding claim further comprising signalling the correcting parameter in a bitstream used to transmit the coded image samples. 12. A method according to any preceding claim, wherein the first type of component is a luma component and the second type of component is a chroma component. 13. A method according to claim 12 wherein the group of samples is predicted using N chroma samples on the top border and left border of the chroma samples to be coded, and N luma samples from the top border and left border of co-located luma samples. 14.
A method according to any preceding claim further comprising filtering a set of samples of the first type of component used to predict, by means of the prediction model, samples of the second type of component, wherein the filtering comprises using a filter (i) having integer-only coefficients and/or (ii) avoiding using left shift operations and/or (iii) avoiding using right shift operations. 15. A method according to claim 14 wherein the set of samples is a first set of samples of at least two sets of samples, wherein the filtering comprises using filters having the same energy for filtering at least two sets of samples. 16. A method according to claim 14 or 15, wherein the said set has a first subset of samples which is filtered using a first filter and also has a second subset of samples which is filtered using a second filter different from the first filter, the first and second filters having the same energy. 17. A device for processing components of an image for coding of a group of samples of the image, the image being composed of at least a first type of component and a second type of component different from the first type of component, wherein samples of the second type of component are predictable from samples of the first type of component using a prediction model linking the first type of component to the second type of component, the link being represented in the prediction model by at least one model parameter, the device comprising corrective means for adjusting, according to a correcting parameter based on a predetermined criterion, the value of at least one intermediate parameter used to obtain at least one such model parameter of the prediction model, or the value of at least one such model parameter, and prediction means for predicting a group of samples of the second type from a plurality of filtered samples of the first type using the prediction model and a prediction model parameter based on the adjusted at least one intermediate model parameter
value or the adjusted at least one such model parameter. 18. A device according to claim 17 wherein the corrective means is configured to adjust the value of the at least one intermediate parameter in dependence upon the correcting parameter. 19. A device according to claim 18, wherein the or one said intermediate parameter corresponds to an intermediate parameter A1 or A2 used to calculate the model parameter α based on the following expression: α = (N·Σ C_i·L'_i - (Σ C_i)·(Σ L'_i)) / (N·Σ L'_i² - (Σ L'_i)²) = A1/A2, the sums being taken over i = 1 to N, wherein the prediction means is configured to predict based on the prediction model according to the following expression Cpred[x,y] = α·L'[x,y] + β, where: Cpred[x,y] corresponds to a group of samples of the second component, L'[x,y] corresponds to a group of filtered samples of the first component, α and β are model parameters of the prediction model, and N corresponds to the number of samples used for the prediction. 20. A device according to claim 19 wherein the corrective means is configured to adjust the value of at least one intermediate parameter by increasing the value of intermediate parameter A1 and/or decreasing the value of intermediate parameter A2. 21. A device according to claim 19 wherein the corrective means is configured to adjust the value of at least one intermediate parameter by applying the operation A2 = A2 - A2/2^k, where k is an integer shifting parameter. 22. A device according to any one of claims 17 to 21 further comprising means for determining the correcting parameter in dependence on the noise of the samples of the first type of component used to predict the samples of the second type of component. 23. A device according to any one of claims 17 to 22 further comprising means for determining the correcting parameter in dependence on minimisation of a distortion of a DC mode and/or LM mode for prediction of the samples of the second type of component. 24.
A device according to any one of claims 17 to 23 further comprising means for determining the correcting parameter in dependence on the frame or slice type. 25. A device according to any one of claims 17 to 24 further comprising means for determining the correcting parameter in dependence on a quantization parameter. 26. A device according to any one of claims 17 to 25 further comprising means for determining the correcting parameter in dependence on the energy of a neighbouring coded residual signal. 27. A device according to any one of claims 17 to 26 further comprising means for signalling the correcting parameter in a bitstream used to transmit the coded image samples. 28. A device according to any preceding claim, wherein the first type of component is a luma component and the second type of component is a chroma component. 29. A device according to claim 28 wherein the prediction means is configured to predict the group of samples using N chroma samples on the top border and left border of the chroma samples to be coded, and N luma samples from the top border and left border of co-located luma samples. 30. A device according to any one of claims 17 to 29 further comprising filtering means for filtering a set of samples of the first type of component used by the prediction means to predict, by means of the prediction model, samples of the second type of component, wherein the filtering means comprises a filter (i) having integer-only coefficients and/or (ii) avoiding using left shift operations and/or (iii) avoiding using right shift operations. 31. A device according to claim 30 wherein the set of samples is a first set of samples of at least two sets of samples, wherein the filtering means comprises filters having the same energy for filtering at least two sets of samples. 32.
A device according to claim 30 or 31, wherein the filtering means comprises a first filter for filtering a first subset of samples of the said set of samples and a second filter different from the first filter for filtering a second subset of samples of the said set, the first and second filters having the same energy. 33. A method of processing components of an image for coding or decoding of a group of samples of the image, the image being composed of at least a first type of component and a second type of component different from the first type of component, wherein samples of the second type of component are predictable from samples of the first type of component using a prediction model linking the first type of component to the second type of component, the method comprising: filtering a set of samples of the first type of component used to predict, by means of the prediction model, samples of the second type of component, wherein the filtering comprises using a filter (i) having integer-only coefficients and/or (ii) avoiding using left shift operations and/or (iii) avoiding using right shift operations; and predicting a group of samples of the second type of component from at least the set of filtered samples of the first type of component using the prediction model. 34. A method according to claim 33 wherein the said set of samples is a first set of samples of at least two sets of samples, wherein the filtering comprises using filters having the same energy for filtering at least two such sets of samples. 35. A method according to claim 33 or 34, wherein the said set of samples has a first subset of samples which is filtered using a first filter and a second subset of samples which is filtered using a second filter different from the first filter, the first and second filters having the same energy. 36.
A method according to claim 35, wherein the first type of component is a luma component and the second type of component is a chroma component, and wherein the group of samples is predicted using N chroma samples on the top border and left border of the chroma samples to be coded, and N luma samples from the top border and left border of co-located luma samples. 37. A method according to claim 36 wherein the first subset includes luma samples from the left border, the second subset includes luma samples from the top border of the group of collocated luma samples, and the second set includes luma samples within the group of collocated luma samples. 38. A method according to claim 36 or 37, wherein the first subset of samples is filtered by a filtering operation based on the following expression: L'[x,y] = RecL[-1,2y] + RecL[-1,2y+1], where RecL corresponds to reconstructed luma samples of the first subset. 39. A method according to any one of claims 36 to 38, wherein the second subset of samples is filtered by a filtering operation based on the following expression: L'[x,y] = RecL[2x-1,-1] + RecL[2x+1,-1], where RecL corresponds to reconstructed luma samples of the second subset. 40. A method according to any one of claims 36 to 38, wherein the second subset of samples is filtered by a filtering operation based on the following expression: L'[x,y] = (RecL[2x-1,-1] + 2·RecL[2x,-1] + RecL[2x+1,-1]) >> 1, where RecL corresponds to reconstructed luma samples of the second subset. 41. A method according to any one of claims 36 to 40, wherein the second set of samples is filtered by a filtering operation based on the following expression: L'[x,y] = RecL[2x,2y] + RecL[2x,2y+1], where RecL corresponds to reconstructed luma samples of the second set. 42.
A device for processing components of an image for coding or decoding of a group of samples of the image, the image being composed of at least a first type of component and a second type of component different from the first type of component, wherein samples of the second type of component are predictable from samples of the first type of component using a prediction model linking the first type of component to the second type of component, the device comprising: filtering means for filtering a set of samples of the first type of component used to predict, by means of the prediction model, samples of the second type of component, wherein the filtering means comprises a filter (i) having integer-only coefficients and/or (ii) avoiding using left shift operations and/or (iii) avoiding using right shift operations; and prediction means for predicting a group of samples of the second type from at least the set of filtered samples of the first type using the prediction model. 43. A device according to claim 42 wherein the said set of samples is a first set of samples of at least two sets of samples, wherein the filtering means comprises filters having the same energy for filtering at least two such sets of samples. 44. A device according to claim 42 or 43, wherein the said set of samples has a first subset of samples and a second subset of samples and the filtering means comprises a first filter for filtering the first subset and a second filter different from the first filter for filtering the second subset, the first and second filters having the same energy. 45. A device according to any one of claims 42 to 44, wherein the first type of component is a luma component and the second type of component is a chroma component, and wherein the prediction means is configured to predict a group of samples using N chroma samples on the top border and left border of the chroma samples to be coded, and N luma samples from the top border and left border of co-located luma samples. 46.
A device according to claim 45 wherein the first subset includes luma samples from the left border, the second subset includes luma samples from the top border of the group of collocated luma samples, and the second set includes luma samples within the group of collocated luma samples. 47. A device according to claim 46, wherein the first filter is configured to filter by a filtering operation based on the following expression: L'[x,y] = RecL[-1,2y] + RecL[-1,2y+1], where RecL corresponds to reconstructed luma samples of the first subset. 48. A device according to claim 46 or 47, wherein the second filter is configured to filter by a filtering operation based on the following expression: L'[x,y] = RecL[2x-1,-1] + RecL[2x+1,-1], where RecL corresponds to reconstructed luma samples of the second subset. 49. A device according to claim 46 or 47, wherein the second filter is configured to filter by a filtering operation based on the following expression: L'[x,y] = (RecL[2x-1,-1] + 2·RecL[2x,-1] + RecL[2x+1,-1]) >> 1, where RecL corresponds to reconstructed luma samples of the second subset. 50. A device according to any one of claims 46 to 49, wherein the filtering means comprises a further filter for filtering the second set of samples by a filtering operation based on the following expression: L'[x,y] = RecL[2x,2y] + RecL[2x,2y+1], where RecL corresponds to reconstructed luma samples of the second set. 51. A method of encoding an image including a step of processing components of the image for encoding of a sample of the image, according to any one of claims 1 to 16 or 33 to 41, and a step of encoding the image. 52. A method of decoding an image including a step of processing components of the image for decoding of a sample of the image, according to any one of claims 33 to 41, and a step of decoding the image. 53.
A device for encoding an image comprising a device for processing components of the image for encoding of a sample of the image according to any one of claims 17 to 32 or 42 to 50. 54. A device for decoding an image comprising a device for processing components of the image for decoding of a sample of the image according to any one of claims 42 to 50. 55. A signal carrying an information dataset for an image represented by a video bitstream, the image being composed of at least a first type of component and a second type of component different to the first type of component, wherein samples of the second type of component are predictable from samples of at least the first type of component using a prediction model linking the first type of component to the second type of component, the link being represented by at least one model parameter, at least one such model parameter being dependent on at least one intermediate model parameter, the information dataset comprising: corrective information representative of a correcting parameter for adjusting the at least one such model parameter or the one intermediate model parameter. 56. A video bitstream representative of an image composed of at least a first type of component and a second type of component different to the first type of component, wherein samples of the second type of component are predictable from samples of at least the first type of component using a prediction model linking the first type of component to the second type of component, the link being represented by at least one model parameter dependent on at least one intermediate model parameter; and further including a signal according to claim 55. 57.
A method of processing components of an image for decoding of a sample of the image, the image being composed of at least a first type of component and a second type of component different to the first type of component, wherein samples of the second type of component are predictable from samples of at least the first type of component using a prediction model linking the first type of component to the second type of component, the link between the first type of component and the second type of component being represented by at least one model parameter of the prediction model, dependent on at least one intermediate model parameter, the method comprising: receiving a bitstream representative of the image and corrective information representative of a correcting parameter for adjusting at least one such model parameter or one such intermediate model parameter; decoding samples of the first type; adapting the at least one such model parameter or intermediate model parameter based on the corrective information; and predicting a sample of the second type from at least one decoded sample of the first type using the prediction model and the adapted at least one such parameter or intermediate parameter value. 58.
A decoding device for processing components of an image for decoding of a sample of the image, the image being composed of at least a first type of component and a second type of component different to the first type of component, wherein samples of the second type of component are predictable from samples of at least the first type of component using a prediction model linking the first type of component to the second type of component, the link between the first type of component and the second type of component being represented by at least one parameter of the prediction model dependent on at least one intermediate parameter, the device comprising: receiving means for receiving a bitstream representative of the image and corrective information representative of a corrective parameter for adjusting the value of at least one such model parameter or intermediate model parameter; corrective means for adapting the at least one such model parameter value or intermediate model parameter value, based on the corrective information; and prediction means for predicting a sample of the second type from a sample of the first type using the prediction model and the adapted at least one model parameter value or intermediate model parameter value. 59. A computer program product for a programmable apparatus, the computer program product comprising a sequence of instructions for implementing a method according to any one of claims 1 to 17, 33 to 41, 51 to 52 or 57 when loaded into and executed by the programmable apparatus. 60. A computer-readable storage medium storing instructions of a computer program for implementing a method according to any one of claims 1 to 17, 33 to 41, 51 to 52 or 57. 61. A method of processing components of an image substantially as hereinbefore described with reference to and as shown in Figure 4, 5 or 6.</claim-text>
GB1118445.4A 2011-10-25 2011-10-25 Method and apparatus for processing components of an image Active GB2495942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1118445.4A GB2495942B (en) 2011-10-25 2011-10-25 Method and apparatus for processing components of an image


Publications (3)

Publication Number Publication Date
GB201118445D0 GB201118445D0 (en) 2011-12-07
GB2495942A true GB2495942A (en) 2013-05-01
GB2495942B GB2495942B (en) 2014-09-03

Family

ID=45373408

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1118445.4A Active GB2495942B (en) 2011-10-25 2011-10-25 Method and apparatus for processing components of an image

Country Status (1)

Country Link
GB (1) GB2495942B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2795276A1 (en) * 1999-06-15 2000-12-22 Canon Kk Estimation method for relationship between control parameters in digital data compression system, involves estimating adaptive filter parameters in parametric model and providing compensation
GB2366679A (en) * 2000-09-05 2002-03-13 Sony Uk Ltd Processing data having multiple components
WO2008020687A1 (en) * 2006-08-16 2008-02-21 Samsung Electronics Co, . Ltd. Image encoding/decoding method and apparatus


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2591379A (en) * 2018-01-03 2021-07-28 Beijing Bytedance Network Tech Co Ltd Single-line cross component linear model prediction mode
GB2591379B (en) * 2018-01-03 2023-02-15 Beijing Bytedance Network Tech Co Ltd Single-line cross component linear model prediction mode
US11218702B2 (en) 2018-08-17 2022-01-04 Beijing Bytedance Network Technology Co., Ltd. Simplified cross component prediction
US11677956B2 (en) 2018-08-17 2023-06-13 Beijing Bytedance Network Technology Co., Ltd Simplified cross component prediction
TWI820212B (en) * 2018-09-12 2023-11-01 大陸商北京字節跳動網絡技術有限公司 Single-line cross component linear model prediction mode
WO2020053806A1 (en) * 2018-09-12 2020-03-19 Beijing Bytedance Network Technology Co., Ltd. Size dependent down-sampling in cross component linear model
WO2020053804A1 (en) * 2018-09-12 2020-03-19 Beijing Bytedance Network Technology Co., Ltd. Downsampling in cross-component linear modeling
CN110896480A (en) * 2018-09-12 2020-03-20 北京字节跳动网络技术有限公司 Size dependent downsampling in cross-component linear models
US11172202B2 (en) 2018-09-12 2021-11-09 Beijing Bytedance Network Technology Co., Ltd. Single-line cross component linear model prediction mode
TWI824006B (en) * 2018-09-12 2023-12-01 大陸商北京字節跳動網絡技術有限公司 Downsampling in cross-component linear modeling
WO2020053805A1 (en) * 2018-09-12 2020-03-19 Beijing Bytedance Network Technology Co., Ltd. Single-line cross component linear model prediction mode
US11812026B2 (en) 2018-09-12 2023-11-07 Beijing Bytedance Network Technology Co., Ltd Single-line cross component linear model prediction mode
US11438598B2 (en) 2018-11-06 2022-09-06 Beijing Bytedance Network Technology Co., Ltd. Simplified parameter derivation for intra prediction
EP3861728A4 (en) * 2018-11-06 2022-04-06 Beijing Bytedance Network Technology Co., Ltd. Complexity reduction in parameter derivation for intra prediction
US11930185B2 (en) 2018-11-06 2024-03-12 Beijing Bytedance Network Technology Co., Ltd. Multi-parameters based intra prediction
US11902507B2 (en) 2018-12-01 2024-02-13 Beijing Bytedance Network Technology Co., Ltd Parameter derivation for intra prediction
US11595687B2 (en) 2018-12-07 2023-02-28 Beijing Bytedance Network Technology Co., Ltd. Context-based intra prediction
US11729405B2 (en) 2019-02-24 2023-08-15 Beijing Bytedance Network Technology Co., Ltd. Parameter derivation for intra prediction

Also Published As

Publication number Publication date
GB201118445D0 (en) 2011-12-07
GB2495942B (en) 2014-09-03

Similar Documents

Publication Publication Date Title
CA2755889C (en) Image processing device and method
US8903188B2 (en) Method and device for processing components of an image for encoding or decoding
WO2010143583A1 (en) Image processing device and method
GB2495942A (en) Prediction of Image Components Using a Prediction Model
JP6989699B2 (en) Interpolation filters for inter-prediction equipment and methods for video coding
US11533480B2 (en) Method and apparatus for image filtering with adaptive multiplier coefficients
US20110103464A1 (en) Methods and Apparatus for Locally Adaptive Filtering for Motion Compensation Interpolation and Reference Picture Filtering
US10542265B2 (en) Self-adaptive prediction method for multi-layer codec
KR102331933B1 (en) Method and apparatus for processing a video signal using coefficient derived reconstruction
CN113728629A (en) Motion vector derivation in video coding
GB2492130A (en) Processing Colour Information in an Image Comprising Colour Component Sample Prediction Being Based on Colour Sampling Format
US9641847B2 (en) Method and device for classifying samples of an image
US11765351B2 (en) Method and apparatus for image filtering with adaptive multiplier coefficients
CN113489974B (en) Intra-frame prediction method, video/image encoding and decoding method and related devices
AU2015255215B2 (en) Image processing apparatus and method
KR20110087871A (en) Method and apparatus for image interpolation having quarter pixel accuracy using intra prediction modes
KR20240087768A (en) Methods and devices for encoding/decoding video
WO2023275247A1 (en) Encoding resolution control
WO2023052141A1 (en) Methods and apparatuses for encoding/decoding a video
US8929433B2 (en) Systems, methods, and apparatus for improving display of compressed video data
WO2022146215A1 (en) Temporal filter
JP2006081066A (en) Image producing apparatus and image production program
Amiri Bilateral and adaptive loop filter implementations in 3D-high efficiency video coding standard