WO2019031703A1 - Image decoding method and apparatus according to a linear model in an image coding system - Google Patents
Image decoding method and apparatus according to a linear model in an image coding system
- Publication number
- WO2019031703A1 (PCT/KR2018/007003)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sample
- block
- luma
- rec
- offset
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present invention relates to a video coding technique, and more particularly, to a video decoding method and apparatus for predicting chroma components according to a linear model in a video coding system.
- HD high definition
- UHD ultra high definition
- the present invention provides a method and apparatus for enhancing video coding efficiency.
- an image decoding method performed by a decoding apparatus includes: deriving linear model parameters for a current chroma block based on neighboring samples of the current chroma block and neighboring samples of the luma block; obtaining offset information for the current chroma block through a bitstream; deriving a corresponding position in the luma block for a target chroma sample of the current chroma block; deriving a modified corresponding position based on the corresponding position and the offset information; deriving a downsampled reconstructed luma sample for the target chroma sample based on the reconstructed luma sample at the modified corresponding position; and generating a prediction sample of the target chroma sample based on the linear model parameters and the downsampled reconstructed luma sample.
- the decoding apparatus includes an entropy decoding unit that obtains prediction information including offset information for a current chroma block through a bitstream, and a prediction unit that derives linear model parameters for the current chroma block based on neighboring samples of the current chroma block and neighboring samples of the luma block.
- a video encoding method performed by an encoding apparatus.
- the method includes deriving linear model parameters for the current chroma block based on neighboring samples of the current chroma block and neighboring samples of the luma block, deriving a corresponding position in the luma block of a reconstructed luma sample for a target chroma sample of the current chroma block, deriving a modified corresponding position, deriving a downsampled reconstructed luma sample for the target chroma sample based on the reconstructed luma sample at the modified corresponding position, generating a prediction sample of the target chroma sample based on the linear model parameters and the downsampled reconstructed luma sample, and generating, encoding, and outputting prediction information for the current chroma block.
- a video encoding apparatus includes a prediction unit that derives linear model parameters for the current chroma block based on neighboring samples of the current chroma block and neighboring samples of the luma block, derives a corresponding position in the luma block of a reconstructed luma sample for a target chroma sample of the current chroma block, derives a modified corresponding position, derives a downsampled reconstructed luma sample for the target chroma sample based on the reconstructed luma sample at the modified corresponding position, and generates a prediction sample of the target chroma sample based on the linear model parameters and the downsampled reconstructed luma sample, and an entropy encoding unit that generates, encodes, and outputs prediction information for the current chroma block.
- when prediction of the current chroma block is performed using the luma component, downsampling is performed based on a luma component at a position more suitable for prediction of the current chroma block, so that the prediction accuracy of the current chroma block can be improved and the overall coding efficiency can be improved.
- prediction of a chroma component can be performed based on a plurality of downsampling filters, thereby improving the prediction accuracy of the current chroma block and the overall coding efficiency.
- FIG. 1 is a view for schematically explaining a configuration of a video encoding apparatus to which the present invention can be applied.
- FIG. 2 is a schematic view illustrating a configuration of a video decoding apparatus to which the present invention can be applied.
- FIG. 3 exemplarily shows the left neighbor samples and the upper neighbor samples used for intra prediction of the current block.
- FIG. 4 illustrates exemplary intra-directional modes of 65 prediction directions.
- FIG. 5 exemplarily shows neighbor samples of the current chroma block and surrounding samples of the corresponding luma block used to derive the relational expression of the corresponding luma block and the current chroma block.
- FIG. 6 shows an example of adjusting the position of the restored luma sample corresponding to the target chroma sample of the current chroma block.
- FIG. 7 schematically shows a video encoding method by the encoding apparatus according to the present invention.
- FIG. 8 schematically shows a video decoding method by a decoding apparatus according to the present invention.
- a picture generally refers to a unit representing one image in a specific time period.
- a slice is a unit that constitutes a part of a picture in coding.
- One picture may be composed of a plurality of slices, and pictures and slices may be used in combination if necessary.
- a pixel or a pel may mean a minimum unit of a picture (or image). Also, a 'sample' may be used as a term corresponding to a pixel.
- a sample may generally represent a pixel or pixel value and may only represent a pixel / pixel value of a luma component or only a pixel / pixel value of a chroma component.
- a unit represents a basic unit of image processing.
- a unit may include at least one of a specific area of a picture and information related to the area.
- the unit may be used in combination with terms such as a block or an area in some cases.
- an MxN block may represent a set of samples or transform coefficients consisting of M columns and N rows.
- FIG. 1 is a view for schematically explaining a configuration of a video encoding apparatus to which the present invention can be applied.
- the video encoding apparatus 100 includes a picture dividing unit 105, a prediction unit 110, a residual processing unit 120, an entropy encoding unit 130, an adding unit 140, a filter unit 150, and a memory 160.
- the residual processing unit 120 may include a subtracting unit 121, a transforming unit 122, a quantizing unit 123, a reordering unit 124, an inverse quantizing unit 125 and an inverse transforming unit 126.
- the picture dividing unit 105 may divide the inputted picture into at least one processing unit.
- the processing unit may be referred to as a coding unit (CU).
- the coding unit may be recursively partitioned according to a quad-tree binary-tree (QTBT) structure from the largest coding unit (LCU).
- QTBT quad-tree binary-tree
- LCU largest coding unit
- one coding unit may be divided into a plurality of coding units of deeper depth based on a quadtree structure and / or a binary tree structure.
- the quadtree structure is applied first and the binary tree structure can be applied later.
- a binary tree structure may be applied first.
- the coding procedure according to the present invention can be performed based on the final coding unit which is not further divided.
- the maximum coding unit may be used directly as the final coding unit based on the coding efficiency or the like according to the image characteristics, or, if necessary, the coding unit may be recursively divided into coding units of lower depth, and a coding unit of the optimal size may be used as the final coding unit.
- the coding procedure may include procedures such as prediction, transform, and reconstruction, which will be described later.
- the processing unit may include a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
- the coding unit may be split from the largest coding unit (LCU) into coding units of deeper depth along the quad tree structure.
- LCU largest coding unit
- the maximum coding unit may be used directly as the final coding unit based on the coding efficiency or the like according to the image characteristics, or, if necessary, the coding unit may be recursively divided into coding units of lower depth, and a coding unit of the optimal size may be used as the final coding unit.
- SCU smallest coding unit
- the coding unit cannot be divided into coding units smaller than the minimum coding unit.
- the term "final coding unit" means a coding unit serving as a basis for being partitioned or divided into prediction units or transform units.
- a prediction unit is a unit that is partitioned from a coding unit, and may be a unit of sample prediction. At this time, the prediction unit may be divided into sub-blocks.
- the transform unit may be divided along the quad-tree structure from the coding unit, and may be a unit for deriving transform coefficients and/or a unit for deriving a residual signal from the transform coefficients.
- the coding unit may be referred to as a coding block (CB)
- the prediction unit may be referred to as a prediction block (PB)
- the transform unit may be referred to as a transform block (TB).
- the prediction block or prediction unit may refer to a specific area in the form of a block in a picture and may include an array of prediction samples.
- a transform block or transform unit may refer to a specific region in the form of a block within a picture, and may include an array of transform coefficients or residual samples.
- the prediction unit 110 may perform a prediction on a current block to be processed (hereinafter, referred to as a current block), and may generate a predicted block including prediction samples for the current block.
- the unit of prediction performed in the prediction unit 110 may be a coding block, a transform block, or a prediction block.
- the prediction unit 110 may determine whether intra prediction or inter prediction is applied to the current block. For example, the prediction unit 110 may determine whether intra prediction or inter prediction is applied in units of CU.
- the prediction unit 110 may derive a prediction sample for the current block based on reference samples outside the current block in the picture to which the current block belongs (hereinafter referred to as the current picture). At this time, the prediction unit 110 may derive the prediction sample based on (i) an average or interpolation of neighboring reference samples of the current block, or (ii) a reference sample existing in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block. The case of (i) may be referred to as a non-directional mode or a non-angular mode, and the case of (ii) may be referred to as a directional mode or an angular mode.
- the prediction mode may have, for example, 33 directional prediction modes and at least two non-directional modes.
- the non-directional mode may include a DC prediction mode and a planar mode (Planar mode).
- the prediction unit 110 may determine a prediction mode applied to a current block using a prediction mode applied to a neighboring block.
- the prediction unit 110 may derive a prediction sample for a current block based on a sample specified by a motion vector on a reference picture.
- the prediction unit 110 may derive a prediction sample for a current block by applying one of a skip mode, a merge mode, and a motion vector prediction (MVP) mode.
- the prediction unit 110 can use motion information of a neighboring block as motion information of a current block.
- in the case of the skip mode, unlike the merge mode, the residual between the prediction sample and the original sample is not transmitted.
- in the case of the MVP (motion vector prediction) mode, the motion vector of the current block can be derived by using the motion vector of a neighboring block as a motion vector predictor.
- a neighboring block may include a spatial neighboring block existing in a current picture and a temporal neighboring block existing in a reference picture.
- the reference picture including the temporal neighboring block may be referred to as a collocated picture (colPic).
- the motion information may include a motion vector and a reference picture index.
- Information such as prediction mode information and motion information may be (entropy) encoded and output in the form of a bit stream.
- the highest picture on the reference picture list may be used as a reference picture.
- the reference pictures included in the reference picture list can be sorted based on the picture order count (POC) difference between the current picture and the corresponding reference picture.
- POC picture order count
- the POC corresponds to the display order of the pictures and can be distinguished from the coding order.
- the subtraction unit 121 generates residual samples that are the difference between the original sample and the predicted sample. When the skip mode is applied, a residual sample may not be generated as described above.
- the transforming unit 122 transforms the residual samples on a transform block basis to generate a transform coefficient.
- the transforming unit 122 can perform the transform according to the size of the transform block and the prediction mode applied to the coding block or prediction block that spatially overlaps the transform block. For example, if intra prediction is applied to the coding block or prediction block that overlaps the transform block and the transform block is a 4x4 residual array, the residual samples are transformed using a discrete sine transform (DST) kernel; in other cases, the residual samples can be transformed using a discrete cosine transform (DCT) kernel.
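- As a rough illustration of the kernel selection rule just described, the sketch below picks a transform kernel from the prediction mode and transform block size; the function name and interface are hypothetical and not part of any codec specification.

```python
def select_transform_kernel(is_intra_predicted: bool, tb_width: int, tb_height: int) -> str:
    """Minimal sketch of the rule described above: a 4x4 residual array
    produced under intra prediction is transformed with a DST kernel,
    everything else with a DCT kernel. Real codecs add further conditions."""
    if is_intra_predicted and tb_width == 4 and tb_height == 4:
        return "DST"
    return "DCT"

# Example: a 4x4 intra residual selects DST, an 8x8 inter residual selects DCT.
assert select_transform_kernel(True, 4, 4) == "DST"
assert select_transform_kernel(False, 8, 8) == "DCT"
```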
- DST discrete sine transform
- the quantization unit 123 may quantize the transform coefficients to generate quantized transform coefficients.
- the reordering unit 124 rearranges the quantized transform coefficients.
- the reordering unit 124 may rearrange the block-shaped quantized transform coefficients into a one-dimensional vector form through a scanning method of coefficients.
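- As an illustration of such coefficient scanning, the sketch below flattens a block with a simple up-right diagonal scan; the scan order is chosen for illustration only and does not reproduce the exact (often sub-block based) scan of any particular standard.

```python
import numpy as np

def diagonal_scan(block: np.ndarray) -> np.ndarray:
    """Rearrange a 2-D block of quantized transform coefficients into a
    1-D vector using a simple up-right diagonal scan (illustration only)."""
    h, w = block.shape
    order = sorted(((y, x) for y in range(h) for x in range(w)),
                   key=lambda p: (p[0] + p[1], -p[0]))
    return np.array([block[y, x] for y, x in order])

# Example: scanning a 4x4 block of quantized coefficients.
print(diagonal_scan(np.arange(16).reshape(4, 4)))
```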
- although the reordering unit 124 is described here as a separate component, the reordering unit 124 may be a part of the quantization unit 123.
- the entropy encoding unit 130 may perform entropy encoding on the quantized transform coefficients.
- the entropy encoding unit 130 may derive residual information on the quantized transform coefficients and entropy-encode the residual information.
- Entropy encoding may include, for example, encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
- CABAC context-adaptive binary arithmetic coding
- the entropy encoding unit 130 may encode the information necessary for video restoration (such as the value of a syntax element) together with the quantized transform coefficient or separately.
- the encoded information may be transmitted or stored in units of NAL (network abstraction layer) units in the form of a bit stream.
- NAL network abstraction layer
- the bitstream may be transmitted over a network or stored in a digital storage medium.
- the network may include a broadcasting network and / or a communication network
- the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- the inverse quantization unit 125 inversely quantizes the values quantized by the quantization unit 123 (the quantized transform coefficients), and the inverse transform unit 126 inversely transforms the values inversely quantized by the inverse quantization unit 125 to generate residual samples.
- the adder 140 combines the residual sample and the predicted sample to reconstruct the picture.
- the residual samples and the prediction samples are added in units of blocks so that a reconstruction block can be generated.
- although the adding unit 140 is described as a separate component, the adding unit 140 may be a part of the prediction unit 110. Meanwhile, the adding unit 140 may be referred to as a reconstruction unit or a reconstruction block generation unit.
- the filter unit 150 may apply a deblocking filter and / or a sample adaptive offset. Through deblocking filtering and / or sample adaptive offsets, artifacts in the block boundary in the reconstructed picture or distortion in the quantization process can be corrected.
- the sample adaptive offset can be applied on a sample-by-sample basis and can be applied after the process of deblocking filtering is complete.
- the filter unit 150 may apply an ALF (Adaptive Loop Filter) to the restored picture.
- the ALF may be applied to the reconstructed picture after the deblocking filter and / or sample adaptive offset is applied.
- the memory 160 may store restored pictures (decoded pictures) or information necessary for encoding / decoding.
- the reconstructed picture may be a reconstructed picture whose filtering procedure has been completed by the filter unit 150.
- the stored restored picture may be used as a reference picture for (inter) prediction of another picture.
- the memory 160 may store (reference) pictures used for inter prediction. At this time, the pictures used for inter prediction can be designated by a reference picture set or a reference picture list.
- FIG. 2 is a schematic view illustrating a configuration of a video decoding apparatus to which the present invention can be applied.
- the video decoding apparatus 200 includes an entropy decoding unit 210, a residual processing unit 220, a prediction unit 230, an adding unit 240, a filter unit 250, and a memory 260.
- the residual processing unit 220 may include a rearrangement unit 221, an inverse quantization unit 222, and an inverse transformation unit 223.
- the video decoding apparatus 200 can restore video in response to a process in which video information is processed in the video encoding apparatus.
- the video decoding apparatus 200 can perform video decoding using a processing unit applied in the video encoding apparatus.
- the processing unit block of video decoding may be, for example, a coding unit and, in another example, a coding unit, a prediction unit, or a transform unit.
- the coding unit may be partitioned along the quad tree structure and / or the binary tree structure from the maximum coding unit.
- a prediction unit and a transform unit may be further used in some cases, in which case the prediction block is a block derived or partitioned from the coding unit and may be a unit of sample prediction. At this time, the prediction unit may be divided into sub-blocks.
- the transform unit may be divided along the quad-tree structure from the coding unit and may be a unit that derives transform coefficients or a unit that derives a residual signal from the transform coefficients.
- the entropy decoding unit 210 may parse the bitstream and output information necessary for video reconstruction or picture reconstruction. For example, the entropy decoding unit 210 decodes information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and can output values of syntax elements necessary for video reconstruction and quantized values of transform coefficients for the residual.
- a coding method such as exponential Golomb coding, CAVLC, or CABAC
- the CABAC entropy decoding method may include receiving a bin corresponding to each syntax element in the bitstream, determining a context model using decoding target syntax element information, decoding information of neighboring blocks and the decoding target block, or information of a symbol/bin decoded in a previous step, predicting the occurrence probability of a bin according to the determined context model, and performing arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
- after determining the context model, the CABAC entropy decoding method can update the context model using the information of the decoded symbol/bin for the context model of the next symbol/bin.
- the residual value, that is, the quantized transform coefficient, entropy-decoded by the entropy decoding unit 210 may be input to the reordering unit 221.
- the reordering unit 221 may rearrange the quantized transform coefficients into a two-dimensional block form.
- the reordering unit 221 may perform reordering in response to the coefficient scanning performed in the encoding apparatus.
- although the rearrangement unit 221 is described here as a separate component, the rearrangement unit 221 may be a part of the inverse quantization unit 222.
- the inverse quantization unit 222 may dequantize the quantized transform coefficients based on the (inverse) quantization parameters, and output the transform coefficients. At this time, the information for deriving the quantization parameter may be signaled from the encoding device.
- the inverse transform unit 223 may inversely transform the transform coefficients to derive the residual samples.
- the prediction unit 230 may predict a current block and may generate a predicted block including prediction samples of the current block.
- the unit of prediction performed in the prediction unit 230 may be a coding block, a transform block, or a prediction block.
- the prediction unit 230 may determine whether intra prediction or inter prediction is to be applied based on the prediction information.
- a unit for determining whether to apply intra prediction or inter prediction may differ from a unit for generating a prediction sample.
- units for generating prediction samples in inter prediction and intra prediction may also be different.
- whether inter prediction or intra prediction is to be applied can be determined in units of CU.
- the prediction mode may be determined in units of PU to generate prediction samples.
- a prediction mode may be determined in units of PU, and prediction samples may be generated in units of TU.
- the prediction unit 230 may derive a prediction sample for the current block based on the surrounding reference samples in the current picture.
- the prediction unit 230 may apply a directional mode or a non-directional mode based on the neighbor reference samples of the current block to derive a prediction sample for the current block.
- a prediction mode to be applied to the current block may be determined using the intra prediction mode of the neighboring block.
- the prediction unit 230 may derive a prediction sample for the current block based on a sample specified by a motion vector on a reference picture.
- the prediction unit 230 may derive a prediction sample for a current block by applying a skip mode, a merge mode, or an MVP mode.
- motion information necessary for inter prediction of the current block provided by the video encoding apparatus, for example, information on a motion vector, a reference picture index, and the like, may be acquired or derived based on the prediction information.
- motion information of a neighboring block can be used as motion information of the current block.
- the neighboring block may include a spatial neighboring block and a temporal neighboring block.
- the prediction unit 230 may construct a merge candidate list using the motion information of the available neighboring blocks and use the information indicated by the merge index on the merge candidate list as the motion vector of the current block.
- the merge index may be signaled from the encoding device.
- the motion information may include a motion vector and a reference picture.
- in the case of the skip mode, the residual between the prediction sample and the original sample is not transmitted.
- a motion vector of a current block can be derived using a motion vector of a neighboring block as a motion vector predictor.
- the neighboring block may include a spatial neighboring block and a temporal neighboring block.
- a merge candidate list may be generated using a motion vector of the reconstructed spatial neighboring block and / or a motion vector corresponding to a Col block that is a temporally neighboring block.
- the motion vector of the candidate block selected in the merge candidate list is used as the motion vector of the current block.
- the prediction information may include a merge index indicating a candidate block having an optimal motion vector selected from the candidate blocks included in the merge candidate list.
- the predicting unit 230 can derive the motion vector of the current block using the merge index.
- in the case of the MVP mode, a motion vector predictor candidate list may be generated using a motion vector of a reconstructed spatial neighboring block and/or a motion vector corresponding to a Col block, which is a temporal neighboring block. That is, the motion vector of the reconstructed spatial neighboring block and/or the motion vector corresponding to the Col block may be used as motion vector candidates.
- the information on the prediction may include a predicted motion vector index indicating an optimal motion vector selected from the motion vector candidates included in the list.
- the predicting unit 230 can use the motion vector index to select a predictive motion vector of the current block from the motion vector candidates included in the motion vector candidate list.
- the predicting unit of the encoding apparatus can obtain the motion vector difference (MVD) between the motion vector of the current block and the motion vector predictor, and can output it as a bit stream. That is, MVD can be obtained by subtracting the motion vector predictor from the motion vector of the current block.
- the predicting unit 230 may obtain the motion vector difference included in the information on the prediction, and derive the motion vector of the current block through addition of the motion vector difference and the motion vector predictor.
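- As a small sketch of this decoder-side arithmetic (the helper name and the tuple representation of motion vectors are illustrative, not an API of any codec):

```python
from typing import List, Tuple

MotionVector = Tuple[int, int]

def reconstruct_motion_vector(mvp_candidates: List[MotionVector],
                              mvp_index: int,
                              mvd: MotionVector) -> MotionVector:
    """MVP mode at the decoder: select the motion vector predictor signaled
    by mvp_index and add the transmitted motion vector difference (MVD)."""
    mvp = mvp_candidates[mvp_index]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Example: two candidates from spatial/temporal neighbors, index 1 selected.
print(reconstruct_motion_vector([(4, -2), (6, 0)], mvp_index=1, mvd=(-1, 3)))  # (5, 3)
```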
- the prediction unit may also acquire or derive a reference picture index or the like indicating the reference picture from the information on the prediction.
- the adder 240 may add a residual sample and a prediction sample to reconstruct a current block or a current picture.
- the adder 240 may add the residual samples and the prediction samples on a block-by-block basis to reconstruct the current picture.
- the adder 240 has been described as an alternative configuration, but the adder 240 may be a part of the predictor 230.
- the addition unit 240 may be referred to as a restoration unit or a restoration block generation unit.
- the filter unit 250 may apply deblocking filtering, a sample adaptive offset, and/or an ALF to the reconstructed picture.
- the sample adaptive offset may be applied on a sample-by-sample basis and may be applied after deblocking filtering.
- the ALF may be applied after deblocking filtering and / or sample adaptive offsets.
- the memory 260 may store restored pictures (decoded pictures) or information necessary for decoding.
- the reconstructed picture may be a reconstructed picture whose filtering procedure has been completed by the filter unit 250.
- the memory 260 may store pictures used for inter prediction.
- the pictures used for inter prediction may be designated by a reference picture set or a reference picture list.
- the reconstructed picture can be used as a reference picture for another picture.
- the memory 260 may output the restored picture according to the output order.
- the prediction can be performed based on the intra prediction mode.
- the intra prediction may be performed based on neighboring samples that have already been encoded/decoded at the decoding time of the current block. That is, the prediction samples of the current block may be reconstructed using the left neighboring samples and the upper neighboring samples of the current block.
- the left neighboring samples and the upper neighboring samples may be expressed as shown in FIG. 3.
- FIG. 3 exemplarily shows the left neighbor samples and the upper neighbor samples used for intra prediction of the current block.
- when intra prediction is performed on the current block, the intra prediction mode for the current block can be derived, and a prediction sample for the current block can be generated based on the intra prediction mode using at least one of the left neighboring samples and the upper neighboring samples.
- the intra-prediction mode may include two non-directional intra-prediction modes and 33 directional intra-prediction modes.
- the non-directional intra-prediction modes may include a planar intra-prediction mode and a DC intra-prediction mode, and the directional intra-prediction modes may include intra-prediction modes 2 to 34.
- the planar intra prediction mode may be referred to as a planar mode, and the DC intra prediction mode may be referred to as a DC mode.
- the intra prediction mode 10 indicates a horizontal intra prediction mode or a horizontal mode
- the intra prediction mode 26 indicates a vertical intra prediction mode or a vertical mode
- the prediction direction of a directional intra prediction mode (angular intra prediction mode) can be expressed by an angle.
- the relative angle corresponding to each intra prediction mode can be expressed with reference to the horizontal reference angle of 0 degrees corresponding to intra prediction mode #10, and with reference to the vertical reference angle of 0 degrees corresponding to intra prediction mode #26.
- the intra-prediction mode may include two non-directional intra-prediction modes and 65 directional intra-prediction modes.
- the non-directional intra-prediction modes may include a planar intra-prediction mode and a DC intra-prediction mode, and the directional intra-prediction modes may include intra-prediction modes 2 to 66.
- FIG. 4 illustrates exemplary intra-directional modes of 65 prediction directions.
- the intra prediction modes having horizontal directionality and the intra prediction modes having vertical directionality can be distinguished with respect to intra prediction mode #34 having an upper-left diagonal prediction direction.
- H and V in Fig. 4 indicate the horizontal direction and the vertical direction, respectively, and the numbers from -32 to 32 indicate displacements in 1/32 unit on the sample grid position.
- the intra-prediction modes 2 to 33 have a horizontal direction, and the intra-prediction modes # 34 to # 66 have a vertical direction.
- the intra prediction mode #18 and the intra prediction mode #50 indicate the horizontal intra prediction mode and the vertical intra prediction mode, respectively, and the prediction direction of each directional intra prediction mode can be expressed by an angle.
- the relative angle corresponding to each intra prediction mode can be expressed with reference to the horizontal reference angle of 0 degrees corresponding to intra prediction mode #18, and with reference to the vertical reference angle of 0 degrees corresponding to intra prediction mode #50.
- meanwhile, for intra prediction of the chroma component of the current block, a luma component of the current block that has already been decoded and reconstructed, or a chroma component different from that chroma component, may be used.
- since the components are encoded/decoded in the order of the luma component, the chroma Cb component, and the chroma Cr component, the encoding/decoding information of the luma component may be used.
- for example, when the color format of the input image is the 4:2:0 color format, the luma component has four times as much data as each chroma component, so the luma component can be downsampled, and the downsampled luma component may be used for intra prediction of each chroma component.
- the chroma component of the current block may be referred to as a current chroma block
- the luma component corresponding to the chroma component may be referred to as a corresponding luma block or a luma block for the current chroma block.
- the method of performing intra prediction of the current chroma block based on the corresponding luma block may be as follows.
- neighboring samples of the corresponding luma block corresponding to neighboring samples of the current chroma block may be derived.
- the neighbor samples of the current chroma block may include left neighbor samples, upper left neighbor samples, and upper neighbor samples of the current chroma block.
- when the size of the current chroma block is MxN and the top-left sample position of the current chroma block is (0, 0), the left neighboring samples may be p[-1][0] to p[-1][N-1], the upper-left neighboring sample may be p[-1][-1], and the upper neighboring samples may be p[0][-1] to p[M-1][-1].
- a linear model of the corresponding luma block and the current chroma block may be derived based on neighbor samples of the corresponding luma block and neighboring samples of the current chroma block,
- the prediction samples of the chroma block can be derived based on the restoration luma samples of the luma block.
- the linear model may be expressed by a relational expression between the corresponding luma block and the current chroma block.
- the intra prediction mode using the correlation between the current chroma block and the corresponding luma block corresponding to the current chroma block may be referred to as an LM (linear model) mode.
- the parameters of the relational expression may be derived based on neighbor samples of the current chroma block and neighbor samples of the corresponding luma block corresponding to neighbor samples of the current chroma block.
- the parameters of the relational expression may include a coefficient and an offset of the relational expression. That is, coefficients and offsets of the relational expression can be derived based on the surrounding samples of the current chroma block and the surrounding samples of the corresponding luma block. The coefficient may be referred to as a scaling factor.
- the relational expression derived based on the neighbor samples of the current chroma block and the surrounding samples of the corresponding luma block can be expressed by the following equation.
- Ref cb is a neighboring sample of the current chroma block
- Ref Y is a neighboring sample of the corresponding luma block
- the parameters α and β are calculated, using the least squares method, as the values that make the neighboring samples of the current chroma block and the neighboring samples of the corresponding luma block most similar to each other.
- the surrounding samples of the corresponding luma block may be down-sampled and used.
- the α and β may be derived based on the following equations.
- the parameters α and β represent the coefficient and the offset, respectively.
- Ref cb represents a neighboring sample of the current chroma block
- Ref Y represents a (downsampled) neighboring sample of the corresponding luma block.
- E error
- since the parameters α and β to be obtained in Equation (2) are values that minimize the error between both sides, the equation for obtaining the parameters can be expressed as follows.
- E(α, β) represents the α and β values minimizing the error
- i is the index of each sample
- λ (lambda) is a control parameter.
- the lambda may be predetermined, or may be derived, for example, based on the neighboring samples of the corresponding luma block. In another example, lambda may be set to 0, in which case the latter term of Equation (3) above may be omitted. The same applies to the following equations.
- N denotes a normalization parameter.
- N may be determined based on the size of the current chroma block.
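- As a concrete illustration of this parameter derivation, the following sketch fits α and β to the template samples with the usual closed-form least-squares solution and an optional λ term added to the denominator; it is a simplified stand-in and does not reproduce the exact integer arithmetic of Equations (5) and (6).

```python
def derive_linear_model_params(ref_y, ref_cb, lam=0):
    """Least-squares fit of ref_cb ≈ alpha * ref_y + beta.

    ref_y  : (downsampled) neighboring luma samples of the corresponding luma block
    ref_cb : neighboring samples of the current chroma block
    lam    : optional control (regularization) parameter; 0 disables the term
    """
    n = len(ref_y)
    sum_y, sum_c = sum(ref_y), sum(ref_cb)
    sum_yc = sum(y * c for y, c in zip(ref_y, ref_cb))
    sum_yy = sum(y * y for y in ref_y)

    denom = n * sum_yy - sum_y * sum_y + lam
    alpha = (n * sum_yc - sum_y * sum_c) / denom if denom != 0 else 0.0
    beta = (sum_c - alpha * sum_y) / n
    return alpha, beta

# Example with four template sample pairs.
print(derive_linear_model_params([60, 80, 100, 120], [35, 45, 55, 65]))  # (0.5, 5.0)
```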
- the derived parameters are applied to the reconstructed luma samples of the corresponding luma block, so that prediction samples of the current chroma block can be derived.
- the relational expression to which the restoration luma samples of the corresponding luma block are applied can be expressed by the following equation.
- Pred cb (i, j) is the prediction sample at the (i, j) coordinate of the current chroma block, rec Y '(i, j) is the reconstructed luma sample at the (i, j) coordinate of the corresponding luma block, α is the coefficient, and β is the offset.
- here, rec Y '(i, j) may represent a downsampled reconstructed luma sample of the corresponding luma block.
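- Given the parameters, each prediction sample is obtained by applying the linear relation of Equation (7) to the co-located downsampled reconstructed luma sample. A minimal sketch follows; the clipping to the valid sample range is an added assumption, not taken from the text.

```python
def predict_chroma_block(ds_luma, alpha, beta, bit_depth=8):
    """Pred_cb(i, j) = alpha * rec_Y'(i, j) + beta applied to every position
    of the downsampled corresponding luma block ds_luma (a 2-D list).
    Clipping to [0, 2**bit_depth - 1] is an assumption for illustration."""
    max_val = (1 << bit_depth) - 1
    return [[min(max(int(alpha * s + beta), 0), max_val) for s in row]
            for row in ds_luma]

# Example: 2x2 downsampled luma block with alpha = 0.5, beta = 5.
print(predict_chroma_block([[60, 80], [100, 120]], 0.5, 5))  # [[35, 45], [55, 65]]
```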
- since the parameters α and β of the relational expression representing the linear model can be derived in the same way at the encoding apparatus and the decoding apparatus from the neighboring samples of the current chroma block and the neighboring samples of the luma block, the intra prediction of the above-described method can be utilized merely by transmitting information indicating whether prediction using the linear model is performed on the current chroma block.
- although the method of predicting the chroma Cb component using the correlation between the luma component and the chroma Cb component is shown above, intra prediction can also be performed based on the correlation between other components as well as between the luma component and the chroma Cb component.
- intraprediction can be performed by various modeling techniques that define the relationship between the components as well as the linear model described above.
- the present invention is not limited to the case where the color format of the input image is the YCbCr 4: 2: 0 color format, and the above-described prediction method can be applied if the color format of the input image is a color format capable of conversion between the respective components .
- when the color format of the input image is a color format in which the amount of information of all components is the same, such as the RGB color format or the YCbCr 4:4:4 color format, downsampling of the luma component may be omitted.
- meanwhile, downsampling may be performed to derive a downsampled corresponding luma block of the same size as the current chroma block, and a prediction block for the current chroma block may be generated based on the linear model defined as in Equation (7). Here, the downsampled corresponding luma block may be derived based on the reconstructed luma samples corresponding to the target chroma samples of the current chroma block among the reconstructed luma samples of the corresponding luma block.
- the reconstructed luma samples corresponding to the target chroma samples may be derived based on a down-sampling filter.
- Various filters may be used as the downsampling filter, and an example of the downsampling filter may be expressed as the following equation.
- here, Rec' L [x, y] is the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block, and Rec L [2x, 2y] is the reconstructed luma sample at the (2x, 2y) coordinate in the corresponding luma block. That is, the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block can be derived based on the reconstructed luma sample at the (2x, 2y) coordinate in the corresponding luma block, which corresponds to the target chroma sample at the (x, y) coordinate of the current chroma block.
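- The sketch below illustrates this mapping by building the downsampled corresponding luma block from the co-located (2x, 2y) samples only; the exact weights of Equation (8), and the multi-tap filters discussed later, are not reproduced here.

```python
def downsample_colocated(rec_luma, chroma_w, chroma_h):
    """Derive a downsampled corresponding luma block for 4:2:0 content by
    taking, for each target chroma position (x, y), the reconstructed luma
    sample at the corresponding (2x, 2y) position (illustration only;
    practical filters combine several neighboring luma samples)."""
    return [[rec_luma[2 * y][2 * x] for x in range(chroma_w)]
            for y in range(chroma_h)]

# Example: a 4x4 reconstructed luma block mapped onto a 2x2 chroma grid.
luma = [[4 * i + j for j in range(4)] for i in range(4)]
print(downsample_colocated(luma, 2, 2))  # [[0, 2], [8, 10]]
```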
- however, there may be a luma component at a position more suitable for prediction of the current chroma block than the luma component at the position derived according to the downsampling filter (i.e., the reconstructed luma sample of the corresponding luma block). In this case, rather than performing the downsampling based on the reconstructed luma sample at the (2x, 2y) coordinate in the corresponding luma block corresponding to the target chroma sample at the (x, y) coordinate of the current chroma block, performing the downsampling based on the more suitable reconstructed luma sample of the corresponding luma block can improve the prediction accuracy of the target chroma sample of the current chroma block.
- the bit amount of residual information can be reduced and the overall coding efficiency can be improved.
- that is, offset information for the current chroma block may be signaled, a reconstructed luma sample at an appropriate position in the corresponding luma block may be derived based on the offset information, downsampling may be performed based on that reconstructed luma sample, and prediction of the current chroma block may be performed based on the downsampled reconstructed luma sample.
- FIG. 6 shows an example of adjusting the position of the restored luma sample corresponding to the target chroma sample of the current chroma block.
- for example, instead of the reconstructed luma sample at the corresponding position (for example, the (2x, 2y) coordinate) in the corresponding luma block corresponding to the target chroma sample at the (x, y) coordinate of the current chroma block, a reconstructed luma sample at a modified position derived based on the offset information can be used for the downsampling.
- the offset information may include x offset information for the x component and y offset information for the y component.
- the offset information for the modified position can be interpreted based on a search range and a movement unit agreed between the transmitter and the receiver, that is, the encoding apparatus and the decoding apparatus.
- for example, the encoding apparatus and the decoding apparatus can set a search range within which the modified position is derived; when the position corresponding to the target chroma sample of the current chroma block is (0, 0), the search range can be set from -N/2 to +N/2 on the x-axis and from -N/2 to +N/2 on the y-axis. Here, N can represent the size of the block.
- also, the movement unit may be set to a half-pel unit, that is, a half-sample unit. Accordingly, even if the values indicated by the x offset information and the y offset information are parsed as integers, the values can be interpreted in units of half samples. That is, a position shifted from the corresponding position along the x-axis by the integer value indicated by the x offset information (i.e., the x offset) multiplied by the movement unit, and along the y-axis by the integer value indicated by the y offset information (i.e., the y offset) multiplied by the movement unit, can be derived as the modified position.
- the modified position may represent a fractional sample position, in which case the reconstructed luma sample at the modified position can be derived through interpolation of the reconstructed luma samples at integer sample positions around the modified position.
- for the interpolation, an interpolation filter used in inter prediction, an interpolation filter used in intra prediction, or a new interpolation filter designed for the present invention may be used.
- the offset information can be derived as shown in the following table.
- pos_offset_x is a syntax for the x offset information
- pos_offset_y is a syntax for the y offset information
- offset_zero_bit is a syntax element for flag information indicating whether or not the offset information is transmitted. For example, if the value of offset_zero_bit is 1, the offset information for the current chroma block, including the x offset information and the y offset information, may not be signaled; if the value of offset_zero_bit is 0, the offset information for the current chroma block, including the x offset information and the y offset information, may be signaled. Meanwhile, the signs of the x offset information and the y offset information may be fixed to negative values.
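- A rough sketch of how a decoder might consume this signaling (read_bit and read_uint are hypothetical placeholders for the entropy decoder's bitstream-reading primitives; the binarization in the referenced table is not reproduced):

```python
def parse_chroma_offset_info(read_bit, read_uint):
    """Parse offset information for the current chroma block.

    Returns the magnitudes of the x and y offsets, or (0, 0) when
    offset_zero_bit indicates that no offset is signaled.  The fixed
    negative sign convention is applied later, when the modified
    corresponding position is derived."""
    if read_bit():              # offset_zero_bit == 1: offsets not signaled
        return 0, 0
    pos_offset_x = read_uint()  # magnitude of the x offset
    pos_offset_y = read_uint()  # magnitude of the y offset
    return pos_offset_x, pos_offset_y

# Example with canned values: offset_zero_bit = 0, magnitudes 1 and 2 follow.
bits, vals = iter([0]), iter([1, 2])
print(parse_chroma_offset_info(lambda: next(bits), lambda: next(vals)))  # (1, 2)
```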
- the value indicated by the x offset information and the value indicated by the y offset information may indicate the magnitude of the x offset and the y offset, and the sign of the values may be fixed to a negative value. Accordingly, when the x offset information and the y offset information are signaled, the corrected position can be derived as follows.
- X ' is the x component of the modified position
- X is the x component of the corresponding position
- offset X is the x offset
- Y is the y component of the corresponding position
- offset Y is the y offset. Therefore, when the downsampling filter of Equation (8) is applied, the x component X of the corresponding position in the corresponding luma block corresponding to the target chroma sample at the (x, y) coordinate of the current chroma block may be derived as 2x, and the y component Y of the corresponding position may be derived as 2y. In this case, the x component X' of the modified position may be derived as 2x - offset X * n, and the y component Y' of the modified position may be derived as 2y - offset Y * n.
- n may represent a predetermined moving unit.
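- A minimal sketch of this derivation, with the half-sample movement unit expressed as n = 0.5 (fractional results would then be handled by the interpolation described above):

```python
def modified_corresponding_position(x, y, offset_x, offset_y, n=0.5):
    """Derive the modified corresponding position in the luma block for the
    target chroma sample at (x, y): the corresponding position is (2x, 2y),
    and the signaled offset magnitudes shift it in the negative direction
    in movement units of n (half-sample units here)."""
    return 2 * x - offset_x * n, 2 * y - offset_y * n

# Example: target chroma sample (3, 2) with offset magnitudes (1, 2).
print(modified_corresponding_position(3, 2, 1, 2))  # (5.5, 3.0)
```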
- the modified position in the corresponding luma block may be derived based on the offset information through the method described above, and a downsampled reconstructed luma sample of the corresponding luma block may be derived based on the modified position. Then, a prediction block of the current chroma block may be generated based on the downsampled reconstructed luma sample as shown in Equation (7).
- the same downsampling filter can be used for all blocks of the current image.
- in this case, the downsampling filter may be defined in advance in the encoding apparatus and the decoding apparatus, or filter information indicating the downsampling filter for the current image may be included in header information for controlling the sequence, such as a sequence parameter set (SPS) or a video parameter set (VPS), and transmitted. That is, the filter information indicating the downsampling filter can be signaled through the SPS or the VPS, and the downsampling filter for downsampling the corresponding luma block may be derived based on the filter information.
- SPS sequence parameter set
- VPS video parameter set
- various downsampling filters may be used for the blocks of the current image, and different downsampling filters may be used for each block.
- the type of the downsampling filters may be derived based on the already decoded information, or the downsampling filter of the corresponding block may be selected based on the filter information signaled on a block basis.
- different downsampling filters may be used in slice units.
- the filter information may be transmitted through a slice segment header, and a down-sampling filter for the corresponding slice of the slice segment header including the filter information may be derived based on the filter information.
- different downsampling filters may be used for each picture.
- the filter information may be transmitted via a picture parameter set (PPS), and a downsampling filter for the corresponding picture of the PPS including the filter information may be derived based on the filter information.
- PPS picture parameter set
- Prediction of the chroma component of the current image can be performed based on two or more downsampling filters as described above.
- the two or more downsampling filters may be configured as filter candidates for prediction of the chroma component.
- for example, four downsampling filters may be configured as filter candidates for the downsampling.
- the downsampling filters may be expressed by the following equations.
- here, Rec' L [x, y] is the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block, Rec L [2x, 2y] is the reconstructed luma sample at the (2x, 2y) coordinate in the corresponding luma block, Rec L [2x+1, 2y] is the reconstructed luma sample at the (2x+1, 2y) coordinate in the corresponding luma block, Rec L [2x+1, 2y+1] is the reconstructed luma sample at the (2x+1, 2y+1) coordinate in the corresponding luma block, and Rec L [2x, 2y+1] is the reconstructed luma sample at the (2x, 2y+1) coordinate in the corresponding luma block.
- for example, when the downsampling filter of Equation (11) is selected as the downsampling filter for the current chroma block, the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block can be derived based on the reconstructed luma sample at the (2x+1, 2y) coordinate and the reconstructed luma sample at the (2x+1, 2y+1) coordinate in the corresponding luma block corresponding to the target chroma sample at the (x, y) coordinate of the current chroma block. Specifically, a value derived by adding 1 to the sum of the sample value of the reconstructed luma sample at the (2x+1, 2y) coordinate and the sample value of the reconstructed luma sample at the (2x+1, 2y+1) coordinate and arithmetically shifting the result to the right can be derived as the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block.
- in another example, when the downsampling filter of Equation (12) is selected as the downsampling filter for the current chroma block, the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block can be derived based on the reconstructed luma sample at the (2x, 2y+1) coordinate and the reconstructed luma sample at the (2x+1, 2y+1) coordinate in the corresponding luma block corresponding to the target chroma sample at the (x, y) coordinate of the current chroma block. Specifically, a value derived by adding 1 to the sum of the sample values of these two reconstructed luma samples and arithmetically shifting the result to the right can be derived as the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block.
- in another example, when the downsampling filter of Equation (13) is selected as the downsampling filter for the current chroma block, the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block can be derived as follows: a value derived by adding 2 to the sum of the sample value of the reconstructed luma sample at the (2x, 2y) coordinate, the sample value of the reconstructed luma sample at the (2x, 2y+1) coordinate, the sample value of the reconstructed luma sample at the (2x+1, 2y) coordinate, and the sample value of the reconstructed luma sample at the (2x+1, 2y+1) coordinate, and arithmetically shifting the result to the right, can be derived as the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block.
- one of the downsampling filters represented by Equations (10) to (13) may be used as the downsampling filter for prediction of the current chroma block, i.e., the downsampling filter for downsampling the corresponding luma block (see the sketch below).
- in this case, a reconstructed luma sample at the position corresponding to the target chroma sample at the (x, y) coordinate of the current chroma block according to the downsampling filter may be used for the downsampling.
- alternatively, a reconstructed luma sample at the modified position derived based on the offset information described above from the position corresponding to the target chroma sample according to the downsampling filter may be used for the downsampling.
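- The sketch below applies one of four 2x2-neighborhood filter candidates; the taps of Equations (11) and (13) follow the combinations stated in the text, while the taps assumed here for Equations (10) and (12) are an interpretation, since those passages are only partially legible.

```python
def downsample_with_candidate(rec_luma, x, y, filter_idx):
    """Derive Rec'_L[x, y] from the reconstructed luma block rec_luma using
    one of four 2x2-neighborhood downsampling filter candidates."""
    a = rec_luma[2 * y][2 * x]          # sample at (2x, 2y)
    b = rec_luma[2 * y][2 * x + 1]      # sample at (2x+1, 2y)
    c = rec_luma[2 * y + 1][2 * x]      # sample at (2x, 2y+1)
    d = rec_luma[2 * y + 1][2 * x + 1]  # sample at (2x+1, 2y+1)
    if filter_idx == 0:
        return (a + b + 1) >> 1          # assumed taps for Eq. (10)
    if filter_idx == 1:
        return (b + d + 1) >> 1          # Eq. (11): (2x+1, 2y) and (2x+1, 2y+1)
    if filter_idx == 2:
        return (c + d + 1) >> 1          # assumed taps for Eq. (12)
    return (a + b + c + d + 2) >> 2      # Eq. (13): average of the four samples

# Example: apply each candidate at chroma position (0, 0) of a 2x2 luma patch.
print([downsample_with_candidate([[10, 14], [18, 22]], 0, 0, f) for f in range(4)])  # [12, 18, 20, 16]
```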
- the filter candidates may include downsampling filters as described below.
- the downsampling filters may be expressed by the following equations.
- here, Rec' L [x, y] is the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block, Rec L [2x, 2y] is the reconstructed luma sample at the (2x, 2y) coordinate in the corresponding luma block, Rec L [2x+1, 2y] is the reconstructed luma sample at the (2x+1, 2y) coordinate in the corresponding luma block, Rec L [2x+1, 2y+1] is the reconstructed luma sample at the (2x+1, 2y+1) coordinate in the corresponding luma block, and Rec L [2x, 2y+1] is the reconstructed luma sample at the (2x, 2y+1) coordinate in the corresponding luma block.
- for example, when the downsampling filter of Equation (15) is selected as the downsampling filter for the current chroma block, the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block can be derived based on the reconstructed luma sample at the (2x+1, 2y) coordinate and the reconstructed luma sample at the (2x+1, 2y+1) coordinate in the corresponding luma block corresponding to the target chroma sample at the (x, y) coordinate of the current chroma block. Specifically, a value derived by subtracting 1 from the sum of the sample value of the reconstructed luma sample at the (2x+1, 2y) coordinate and the sample value of the reconstructed luma sample at the (2x+1, 2y+1) coordinate and arithmetically shifting the result to the right can be derived as the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block.
- in another example, when the downsampling filter of Equation (16) is selected as the downsampling filter for the current chroma block, the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block can be derived based on the reconstructed luma sample at the (2x, 2y+1) coordinate and the reconstructed luma sample at the (2x+1, 2y+1) coordinate in the corresponding luma block corresponding to the target chroma sample at the (x, y) coordinate of the current chroma block, and a value derived by arithmetically shifting the combined value to the right can be derived as the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block.
- in another example, when the downsampling filter of Equation (17) is selected as the downsampling filter for the current chroma block, the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block can be derived as follows: a value derived by subtracting 2 from the sum of the sample value of the reconstructed luma sample at the (2x, 2y) coordinate, the sample value of the reconstructed luma sample at the (2x, 2y+1) coordinate, the sample value of the reconstructed luma sample at the (2x+1, 2y) coordinate, and the sample value of the reconstructed luma sample at the (2x+1, 2y+1) coordinate, and arithmetically shifting the result to the right, can be derived as the reconstructed luma sample at the (x, y) coordinate in the downsampled corresponding luma block.
- one of the downsampling filters represented by Equations (14) to (17) may be used as the downsampling filter for prediction of the current chroma block, i.e., the downsampling filter for downsampling the corresponding luma block.
- in this case, a reconstructed luma sample at the position corresponding to the target chroma sample at the (x, y) coordinate of the current chroma block according to the downsampling filter may be used for the downsampling.
- alternatively, a reconstructed luma sample at the modified position derived based on the offset information described above from the position corresponding to the target chroma sample according to the downsampling filter may be used for the downsampling.
- FIG. 7 schematically shows a video encoding method by the encoding apparatus according to the present invention.
- the method disclosed in FIG. 7 can be performed by the encoding apparatus disclosed in FIG. 1. Specifically, for example, S700 to S730 in FIG. 7 may be performed by the prediction unit of the encoding apparatus, and S740 may be performed by the entropy encoding unit of the encoding apparatus.
- the encoding device derives linear model parameters for the current chroma block based on the surrounding samples of the current chroma block and the surrounding samples of the corresponding luma block (S700).
- An encoding device may derive the surrounding samples of the current chroma block and the surrounding samples of the corresponding block.
- the corresponding block may be a corresponding luma block of luma components or a corresponding chroma block of chroma components other than the chroma component of the current chroma block.
- for example, when the chroma component of the current chroma block is a chroma Cb component, the corresponding block may be the corresponding luma block of the luma component or the corresponding chroma block of the chroma Cr component.
- when the chroma component of the current chroma block is a chroma Cr component, the corresponding block may be the corresponding luma block of the luma component or the corresponding chroma block of the chroma Cb component.
- the corresponding luma block or the corresponding chroma block may be a block already encoded at the encoding time of the current chroma block.
- the neighboring samples of the current chroma block may include left neighboring samples, a top-left neighboring sample, and upper neighboring samples of the current chroma block, and the neighboring samples of the corresponding luma block may represent the samples of the corresponding luma block corresponding to those neighboring samples. If the size of the current chroma block is MxN and the x component and the y component of the top-left sample position of the current chroma block are 0, the left neighboring samples may be p[-1][0] to p[-1][N-1], the top-left neighboring sample may be p[-1][-1], and the upper neighboring samples may be p[0][-1] to p[M-1][-1].
- the area including the neighboring samples of the current chroma block for deriving the linear model parameters may be referred to as a template of the current chroma block.
- for example, the template of the current chroma block may include p[-1][0] to p[-1][N-1], p[-1][-1], and p[0][-1] to p[M-1][-1].
- an area including the surrounding samples of the corresponding luma block corresponding to the neighboring samples of the current chroma block may be referred to as a template of the corresponding luma block.
- the encoding device may derive linear model parameters for the current chroma block based on the neighboring samples of the current chroma block and the neighboring samples of the corresponding luma block. That is, the encoding device may derive the linear model parameters for the current chroma block based on the template of the current chroma block and the template of the corresponding luma block. Specifically, a relational expression between the current chroma block and the corresponding luma block may be derived based on the neighboring samples of the current chroma block and the neighboring samples of the corresponding luma block, and the relational expression may be a mathematical expression representing a model between the two blocks.
- the relational expression may be a mathematical expression representing a linear model of the current chroma block and the corresponding luma block, or a mathematical expression representing a model other than the linear model.
- the linear model parameters may include a coefficient and an offset of the linear model. The parameters may be derived based on Equations (5) and (6) described above.
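- since Equations (5) and (6) are only referenced and not reproduced here, the following Python sketch merely assumes the familiar least-squares form of fitting a linear model between the luma template samples and the chroma template samples; the function name, the plain-list inputs, and the floating-point arithmetic are assumptions of this illustration.

```python
def derive_linear_model_params(luma_template, chroma_template):
    """Least-squares fit of chroma ~ coefficient * luma + offset over the
    template samples (a sketch; not a reproduction of Equations (5) and (6))."""
    n = len(luma_template)
    sum_l = sum(luma_template)
    sum_c = sum(chroma_template)
    sum_ll = sum(l * l for l in luma_template)
    sum_lc = sum(l * c for l, c in zip(luma_template, chroma_template))

    denom = n * sum_ll - sum_l * sum_l
    if denom == 0:
        coefficient = 0.0  # degenerate template: fall back to an offset-only model
    else:
        coefficient = (n * sum_lc - sum_l * sum_c) / denom
    offset = (sum_c - coefficient * sum_l) / n
    return coefficient, offset
```

- in an actual codec the division would typically be replaced by fixed-point arithmetic; the floating-point form is used here only to keep the sketch short.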
- the encoding apparatus can generate a flag indicating whether to derive the intra prediction mode of the current chroma block based on the linear model. Whether to derive the intra prediction mode of the current chroma block among the plurality of candidate intra prediction modes based on the linear model may be determined based on the value of the flag. That is, whether to derive the parameters may be determined based on the flag. For example, if the value of the flag is 1, the linear model parameters may be derived based on the neighboring samples of the current chroma block and the neighboring samples of the corresponding luma block, and if the value of the flag is 0, the linear model parameters may not be derived.
- the flag may be transmitted at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or a block level.
- the encoding device derives the corresponding position in the corresponding luma block for the target chroma sample of the current chroma block based on the downsampling filter (S710).
- the encoding device may derive the corresponding location in the corresponding luma block for the target chroma sample according to the downsampling filter.
- the downsampling filter may be derived as Equation (8).
- the corresponding position in the corresponding luma block for the target chroma sample of the (x, y) coordinate may be derived as (2x, 2y) coordinates.
- the downsampling filter may be derived as shown in Equation (10).
- the corresponding position in the corresponding luma block for the target chroma sample of the (x, y) coordinate may be derived as (2x, 2y) and (2x + 1, 2y) coordinates.
- the downsampling filter may be derived as shown in Equation (11).
- the corresponding position in the corresponding luma block for the target chroma sample of the (x, y) coordinates may be derived as (2x + 1, 2y) and (2x + 1, 2y + 1) coordinates.
- the downsampling filter may be derived as Equation (12).
- the corresponding position in the corresponding luma block for the target chroma sample of the (x, y) coordinates may be derived as the (2x, 2y + 1) coordinates and the (2x + 1, 2y + 1) coordinates.
- the downsampling filter may be derived as shown in Equation (13).
- the corresponding position in the corresponding luma block for the target chroma sample of the (x, y) coordinates may be derived as the (2x, 2y), (2x, 2y + 1), (2x + 1, 2y), and (2x + 1, 2y + 1) coordinates.
- the downsampling filter may be derived as shown in Equation (14).
- the corresponding position in the corresponding luma block for the target chroma sample of the (x, y) coordinate may be derived as (2x, 2y) and (2x + 1, 2y) coordinates.
- the downsampling filter may be derived as shown in Equation (15).
- the corresponding position in the corresponding luma block for the target chroma sample of the (x, y) coordinates may be derived as (2x + 1, 2y) and (2x + 1, 2y + 1) coordinates.
- the downsampling filter may be derived as shown in Equation (16).
- the corresponding position in the corresponding luma block for the target chroma sample of the (x, y) coordinates may be derived as the (2x, 2y + 1) coordinates and the (2x + 1, 2y + 1) coordinates.
- the downsampling filter may be derived as Equation (17).
- the corresponding position in the corresponding luma block for the target chroma sample of the (x, y) coordinates may be derived as the (2x, 2y), (2x, 2y + 1), (2x + 1, 2y), and (2x + 1, 2y + 1) coordinates.
- the encoding apparatus may derive a predetermined downsampling filter as the downsampling filter.
- alternatively, the encoding apparatus may constitute a plurality of downsampling filter candidates and derive one downsampling filter candidate of the plurality of downsampling filter candidates as the downsampling filter. That is, the encoding device may derive the downsampled reconstructed luma sample for the target chroma sample further based on one downsampling filter candidate of the plurality of downsampling filter candidates.
- the encoding apparatus may generate filter information indicating the one downsampling filter candidate of the plurality of downsampling filter candidates.
- the filter information may be signaled through a sequence parameter set (SPS), a video parameter set (VPS), a picture parameter set (PPS), block unit syntax information, or a slice segment header.
- the downsampling filter candidates may include at least two of the downsampling filters derived as Equation (8) and Equations (10) to (17) described above.
- for example, the downsampling filter candidates may include a first candidate to a fourth candidate, where the first candidate may be derived as Equation (10), the second candidate as Equation (11), the third candidate as Equation (12), and the fourth candidate as Equation (13) described above.
- alternatively, the downsampling filter candidates may include a first candidate to a fourth candidate, where the first candidate may be derived as Equation (14), the second candidate as Equation (15), the third candidate as Equation (16), and the fourth candidate as Equation (17) described above. A minimal selection sketch is given below.
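- as an illustration of selecting one of several downsampling filter candidates by signaled filter information, the sketch below represents each candidate by the luma positions it averages, following the position lists given above for Equations (10) to (13); the candidate ordering, the index values, the rounding-average form, and the function name are assumptions made only for this example.

```python
# Each candidate is described by the luma tap offsets it averages for a target
# chroma sample at (x, y); the mapping of index to candidate is illustrative.
DOWNSAMPLING_CANDIDATES = {
    0: [(0, 0), (1, 0)],                  # Equation (10): (2x, 2y), (2x+1, 2y)
    1: [(1, 0), (1, 1)],                  # Equation (11): (2x+1, 2y), (2x+1, 2y+1)
    2: [(0, 1), (1, 1)],                  # Equation (12): (2x, 2y+1), (2x+1, 2y+1)
    3: [(0, 0), (0, 1), (1, 0), (1, 1)],  # Equation (13): all four positions
}

def downsample_with_candidate(rec_luma, x, y, filter_index):
    """Average the taps of the candidate indicated by the signaled filter
    information, with a rounding offset (sketch only)."""
    taps = DOWNSAMPLING_CANDIDATES[filter_index]
    total = sum(rec_luma[2 * y + dy][2 * x + dx] for dx, dy in taps)
    shift = 1 if len(taps) == 2 else 2
    return (total + (1 << (shift - 1))) >> shift
```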
- the encoding apparatus derives a modified corresponding position (S720).
- the encoding apparatus may search, within a search range around the corresponding position, for a reconstructed luma sample that is better suited to the prediction sample of the target chroma sample, and may derive the position of that reconstructed luma sample as the modified corresponding position.
- the search range may be preset.
- for example, the x-component range of the search range may be from -N/2 to +N/2 based on the x component of the corresponding position, and the y-component range of the search range may be from -N/2 to +N/2 based on the y component of the corresponding position.
- the encoding device may generate offset information indicating the difference between the corresponding position and the modified corresponding position.
- the offset information may include x offset information indicating an x offset, which is the difference between the x component of the corresponding position and the x component of the modified corresponding position, and y offset information indicating a y offset, which is the difference between the y component of the corresponding position and the y component of the modified corresponding position.
- the value indicated by the x offset information included in the offset information and the value indicated by the y offset information may indicate the magnitude of the x offset and the y offset, and the sign of the values may be fixed to a negative value.
- the movement unit for the x offset and the y offset may be pre-set.
- the moving unit may be set to 1/2 sample.
- the x component of the modified corresponding position may be derived as the x component of the corresponding position minus the value of the x offset multiplied by the moving unit, and the y component of the modified corresponding position may be derived as the y component of the corresponding position minus the value of the y offset multiplied by the moving unit.
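- as a small numeric illustration, the sketch below computes the modified corresponding position from the offset information under the assumptions stated above (magnitude-only offsets with a fixed negative sign and a moving unit of 1/2 sample); the function and variable names are illustrative only.

```python
def modified_corresponding_position(corr_x, corr_y, x_offset, y_offset,
                                    moving_unit=0.5):
    """Shift the corresponding position by the signaled offset magnitudes.

    The sign is fixed to be negative and the moving unit defaults to 1/2
    sample, following the description above."""
    mod_x = corr_x - x_offset * moving_unit
    mod_y = corr_y - y_offset * moving_unit
    return mod_x, mod_y

# Example: a corresponding position (8, 4) with x offset 1 and y offset 2
# yields the modified position (7.5, 3.0), i.e. a half-sample position in x.
```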
- the encoding device derives a downsampled reconstructed luma sample for the target chroma sample based on the downsampling filter and the reconstructed luma sample of the modified corresponding location (S730).
- the encoding device may derive a downsampled reconstructed luma sample for the target chroma sample based on the reconstructed luma sample of the modified corresponding location.
- the encoding device may perform downsampling based on a downsampling filter and a reconstructed luma sample of the modified corresponding location, and derive the downsampled reconstructed luma sample for the targeted chroma sample.
- for example, when the downsampling filter is derived as shown in Equation (8), the downsampling may be performed as in the following equation:
- Rec'L[x, y] = RecL[2x - offset_x, 2y - offset_y]
- where Rec'L[x, y] is the downsampled reconstructed luma sample for the target chroma sample of the (x, y) coordinates, RecL[2x - offset_x, 2y - offset_y] is the reconstructed luma sample of the modified corresponding position in the luma block for the target chroma sample of the (x, y) coordinates, offset_x may represent the x offset, and offset_y may represent the y offset.
- the reconstructed luma sample of the modified corresponding position may be derived through interpolation of reconstructed luma samples of the integer sample position around the modified position.
- the interpolation may be performed using an interpolation filter used in inter prediction, an interpolation filter used in intra prediction, or a new interpolation filter defined for the present invention to derive the reconstructed luma sample at the modified corresponding position.
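- the interpolation filters mentioned above are not specified in this text, so the sketch below uses plain bilinear interpolation as a stand-in to evaluate the reconstructed luma plane at a possibly half-sample modified position; the bilinear weights, the boundary handling, and the function names are assumptions of this example.

```python
def sample_at(rec_luma, fx, fy):
    """Bilinear interpolation of the reconstructed luma plane at a fractional
    position (fx, fy); a stand-in for the interpolation filters noted above.
    Only the right/bottom edges are clamped in this sketch."""
    x0, y0 = int(fx), int(fy)
    ax, ay = fx - x0, fy - y0
    x1 = min(x0 + 1, len(rec_luma[0]) - 1)
    y1 = min(y0 + 1, len(rec_luma) - 1)
    top = (1 - ax) * rec_luma[y0][x0] + ax * rec_luma[y0][x1]
    bot = (1 - ax) * rec_luma[y1][x0] + ax * rec_luma[y1][x1]
    return (1 - ay) * top + ay * bot

def downsampled_luma_with_offset(rec_luma, x, y, x_offset, y_offset,
                                 moving_unit=0.5):
    """Rec'L[x, y] = RecL[2x - x_offset * unit, 2y - y_offset * unit] (sketch)."""
    return sample_at(rec_luma,
                     2 * x - x_offset * moving_unit,
                     2 * y - y_offset * moving_unit)
```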
- the encoding apparatus generates a prediction sample of the target chroma sample based on the linear model parameters and the downsampled restoration luma sample (S740).
- An encoding device may generate the prediction samples of the target chroma samples of the current chroma block based on the linear model parameters and the downsampled reconstructed luma sample.
- the linear model parameters may include a coefficient and an offset.
- the coefficient may be referred to as a scaling factor.
- the encoding apparatus may generate the predicted sample of the target chroma sample, i.e., the prediction block of the current chroma block, by substituting the downsampled restoration luma sample into the relational expression representing the linear model.
- the encoding device may generate the prediction sample of the target chroma sample by adding the offset to the value obtained by multiplying the downsampled restoration luma sample by the coefficient.
- the sample of the prediction block of the current chroma block may be derived by adding the offset to a value obtained by multiplying the downsampled restoration luma sample by the coefficient.
- the prediction sample can be derived based on Equation (7).
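- Equation (7) is only referenced here; the one-function sketch below assumes it has the usual linear form, and the clip to the valid sample range and the bit depth are additions of this illustration.

```python
def predict_chroma_sample(downsampled_luma, coefficient, offset, bit_depth=10):
    """Prediction sample = coefficient * downsampled luma sample + offset,
    clipped to the valid range (assumed form of Equation (7))."""
    value = int(round(coefficient * downsampled_luma + offset))
    return max(0, min(value, (1 << bit_depth) - 1))
```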
- the encoding apparatus generates prediction information for the current chroma block, encodes it, and outputs it (S760).
- the encoding apparatus may encode the prediction information for the current chroma block and output it as a bit stream.
- the prediction information may include a flag indicating whether to derive an intra prediction mode of the current chroma block based on the linear model.
- the flag may be transmitted at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or a block level.
- the prediction information may include the offset information.
- the offset information may include x offset information indicating an x offset and y offset information indicating a y offset, and the offset information may be signaled through block-based syntax information.
- the prediction information may include flag information indicating whether the value of the offset information is 0 or not. For example, when the value of the flag information is 0, the flag information may indicate that the value of the offset information is not 0, and when the value of the flag information is 1, the flag information may indicate that the value of the offset information is 0.
- the flag information may be signaled through block-based syntax information.
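- a minimal sketch of the block-level signaling just described: a flag telling whether the offset information is 0, followed by the offset magnitudes only when the flag indicates they are not 0. The flag polarity follows the description above (flag value 1 meaning the offset information is 0); the BitWriter interface and the use of unsigned Exp-Golomb coding are assumptions of this sketch.

```python
def write_offset_info(bit_writer, x_offset, y_offset):
    """Write the zero-offset flag and, if needed, the offset magnitudes.

    bit_writer is a hypothetical interface with write_flag() / write_uvlc()."""
    is_zero = (x_offset == 0 and y_offset == 0)
    bit_writer.write_flag(1 if is_zero else 0)  # 1 means the offsets are 0
    if not is_zero:
        bit_writer.write_uvlc(x_offset)  # magnitude only; the sign is fixed negative
        bit_writer.write_uvlc(y_offset)
```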
- the encoding apparatus may use the prediction sample as a reconstruction sample of the current chroma block, or may generate a residual sample based on the original sample and the derived prediction sample.
- the encoding apparatus may generate information on the residual based on the residual samples.
- the information on the residual may include transform coefficients relating to the residual sample.
- the encoding apparatus may derive the residual sample (or residual sample array) for the current chroma block based on the residual information.
- the encoding apparatus may derive the reconstructed sample based on the prediction sample and the residual sample. That is, the encoding apparatus may add the prediction sample and the residual sample to derive the reconstructed sample.
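- the reconstruction step is the usual sample-wise addition; the clip to the valid sample range in the sketch below is an assumption of this illustration rather than something stated in this text.

```python
def reconstruct_samples(pred_samples, residual_samples, bit_depth=10):
    """Reconstructed sample = prediction sample + residual sample, clipped
    to the valid range (sketch). Both inputs are 2-D lists of equal size."""
    max_val = (1 << bit_depth) - 1
    return [[max(0, min(p + r, max_val)) for p, r in zip(pred_row, res_row)]
            for pred_row, res_row in zip(pred_samples, residual_samples)]
```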
- the encoding apparatus may derive a reconstruction block or a reconstructed picture based on the reconstructed sample. As described above, the encoding apparatus can apply an in-loop filtering procedure such as deblocking filtering and/or an SAO procedure to the reconstructed picture to improve subjective/objective picture quality as needed. Also, the encoding apparatus can encode the information on the residual and output it in the form of a bitstream.
- the bitstream may be transmitted to a decoding device via a network or (digital) storage medium.
- the network may include a broadcasting network and / or a communication network, and the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- FIG. 8 schematically shows a video decoding method by a decoding apparatus according to the present invention.
- the method disclosed in FIG. 8 can be performed by the decoding apparatus disclosed above. Specifically, for example, S800 to S810 and S830 to S850 in FIG. 8 may be performed by the predicting unit of the decoding apparatus, and S820 may be performed by the entropy decoding unit of the decoding apparatus.
- the decoding apparatus derives linear model parameters for the current chroma block based on neighbor samples of the current chroma block and neighbor samples of the luma block (S800).
- a decoding device may derive the surrounding samples of the current chroma block and the surrounding samples of the luma block.
- the luma block may represent a corresponding luma block of luma components.
- the decoding device may also derive linear model parameters for the current chroma block based on neighbor samples of the current chroma block and surrounding samples of the chroma block that is a different chroma block than the current chroma block.
- for example, when the chroma component of the current chroma block is a chroma Cb component, the chroma block of a chroma component different from that of the current chroma block may be a chroma block of a chroma Cr component.
- when the chroma component of the current chroma block is a chroma Cr component, the chroma block of a chroma component different from that of the current chroma block may be a chroma block of a chroma Cb component.
- the luma block, or the chroma block of a chroma component different from that of the current chroma block, may be a block already decoded at the decoding time of the current chroma block.
- although the following embodiments are described based on the luma block, the same method may be performed for a chroma block of a chroma component different from that of the current chroma block.
- the luma block may be referred to as a corresponding luma block.
- the neighboring samples of the current chroma block may include left neighboring samples, a top-left neighboring sample, and upper neighboring samples of the current chroma block, and the neighboring samples of the luma block may represent the samples of the luma block corresponding to those neighboring samples. If the size of the current chroma block is MxN and the x component and the y component of the top-left sample position of the current chroma block are 0, the left neighboring samples may be p[-1][0] to p[-1][N-1], the top-left neighboring sample may be p[-1][-1], and the upper neighboring samples may be p[0][-1] to p[M-1][-1].
- the area including the neighboring samples of the current chroma block for deriving the linear model parameters may be referred to as a template of the current chroma block.
- for example, the template of the current chroma block may include p[-1][0] to p[-1][N-1], p[-1][-1], and p[0][-1] to p[M-1][-1].
- an area including the surrounding samples of the luma block corresponding to the neighboring samples of the current chroma block may be referred to as a template of the luma block.
- the decoding device may derive linear model parameters for the current chroma block based on the surrounding samples of the current chroma block and the surrounding samples of the luma block.
- the decoding apparatus may derive linear model parameters for the current chroma block based on the template of the current chroma block and the template of the luma block.
- a relational expression between the current chroma block and the luma block may be derived based on the neighboring samples of the current chroma block and the neighboring samples of the luma block, and the relational expression may be a mathematical expression representing a model between the two blocks.
- the relational expression may be a mathematical expression representing a linear model of the current chroma block and the luma block, or a mathematical expression representing a model other than the linear model.
- the linear model parameters may include a coefficient and an offset of the linear model. The parameters may be derived based on Equations (5) and (6) described above.
- a flag indicating whether to derive the intra prediction mode of the current chroma block based on the linear model can be obtained through the bit stream. It may be determined whether to derive the intra prediction mode of the current chroma block among the plurality of candidate intra prediction modes based on the linear model based on the value of the flag.
- the decoding apparatus may generate the temporary prediction block based on the flag and determine whether to derive an intra prediction mode of the current chroma block among the plurality of candidate intra prediction modes based on the temporary prediction block. That is, the decoding apparatus can determine whether to derive the parameters based on the flag.
- for example, if the value of the flag is 1, the parameters of the relational expression may be derived based on the neighboring samples of the current chroma block and the neighboring samples of the luma block, and if the value of the flag is 0, the parameters of the relational expression may not be derived.
- the flag may be received at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or a block level.
- the decoding apparatus derives the corresponding position in the luma block for the target chroma sample of the current chroma block (S810).
- the decoding device may derive the corresponding location in the luma block for the target chroma sample according to a downsampling filter.
- the downsampling filter may be derived as Equation (8).
- the corresponding position in the luma block for the target chroma sample of the (x, y) coordinates may be derived as (2x, 2y) coordinates.
- the downsampling filter may be derived as shown in Equation (10).
- the corresponding positions in the luma block for the target chroma samples of the (x, y) coordinates may be derived as (2x, 2y) coordinates and (2x + 1, 2y) coordinates.
- the downsampling filter may be derived as shown in Equation (11).
- the corresponding position in the luma block for the target chroma sample of the (x, y) coordinates may be derived as (2x + 1, 2y) and (2x + 1, 2y + 1) coordinates.
- the downsampling filter may be derived as Equation (12).
- the corresponding positions in the luma block for the target chroma samples of the (x, y) coordinates may be derived as (2x, 2y + 1) coordinates and (2x + 1, 2y + 1) coordinates.
- the downsampling filter may be derived as shown in Equation (13).
- the corresponding position in the luma block for the target chroma sample of the (x, y) coordinates may be derived as the (2x, 2y), (2x, 2y + 1), (2x + 1, 2y), and (2x + 1, 2y + 1) coordinates.
- the downsampling filter may be derived as shown in Equation (14).
- the corresponding positions in the luma block for the target chroma samples of the (x, y) coordinates may be derived as (2x, 2y) coordinates and (2x + 1, 2y) coordinates.
- the downsampling filter may be derived as shown in Equation (15).
- the corresponding position in the luma block for the target chroma sample of the (x, y) coordinates may be derived as (2x + 1, 2y) and (2x + 1, 2y + 1) coordinates.
- the downsampling filter may be derived as shown in Equation (16).
- the corresponding positions in the luma block for the target chroma samples of the (x, y) coordinates may be derived as (2x, 2y + 1) coordinates and (2x + 1, 2y + 1) coordinates.
- the downsampling filter may be derived as Equation (17).
- the corresponding position in the luma block for the target chroma sample of the (x, y) coordinates may be derived as the (2x, 2y), (2x, 2y + 1), (2x + 1, 2y), and (2x + 1, 2y + 1) coordinates.
- the decoding apparatus may derive a predetermined downsampling filter as the downsampling filter.
- alternatively, the decoding apparatus may obtain filter information indicating one downsampling filter candidate of a plurality of downsampling filter candidates through the bitstream and derive the downsampling filter candidate indicated by the filter information as the downsampling filter. That is, the decoding apparatus can obtain the filter information through the bitstream and derive the downsampled reconstructed luma sample for the target chroma sample based on the downsampling filter candidate indicated by the filter information.
- the filter information may be signaled through a sequence parameter set (SPS), a video parameter set (VPS), a picture parameter set (PPS), block unit syntax information, or a slice segment header.
- the downsampling filter candidates may include at least two of the downsampling filters derived as Equation (8) and Equations (10) to (17) described above.
- for example, the downsampling filter candidates may include a first candidate to a fourth candidate, where the first candidate may be derived as Equation (10), the second candidate as Equation (11), the third candidate as Equation (12), and the fourth candidate as Equation (13) described above.
- alternatively, the downsampling filter candidates may include a first candidate to a fourth candidate, where the first candidate may be derived as Equation (14), the second candidate as Equation (15), the third candidate as Equation (16), and the fourth candidate as Equation (17) described above.
- the decoding apparatus obtains offset information for the current chroma block through the bitstream (S820).
- the decoding apparatus can receive the offset information through the bitstream.
- the offset information may include x offset information indicating an x offset and y offset information indicating a y offset.
- the offset information may be signaled through block-based syntax information.
- the decoding apparatus can obtain flag information indicating whether the value of the offset information is 0 through the bitstream. For example, when the value of the flag information is 0, the flag information may indicate that the value of the offset information is not 0, and when the value of the flag information is 1, the flag information may indicate that the value of the offset information is 0. If the flag information indicates that the value of the offset information is not 0, the offset information can be obtained through the bitstream, and if the flag information indicates that the value of the offset information is 0, the offset information may not be obtained.
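- mirroring the encoder-side signaling, a decoder-side parsing sketch: read the flag and, only when it indicates non-zero offsets, read the two magnitudes. The BitReader interface and the unsigned Exp-Golomb coding are assumptions; the flag polarity follows the description above.

```python
def read_offset_info(bit_reader):
    """Parse the zero-offset flag and, if needed, the offset magnitudes.

    bit_reader is a hypothetical interface with read_flag() / read_uvlc()."""
    if bit_reader.read_flag() == 1:
        return 0, 0                    # offset information is 0 and not coded
    x_offset = bit_reader.read_uvlc()  # magnitude only; the sign is fixed negative
    y_offset = bit_reader.read_uvlc()
    return x_offset, y_offset
```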
- the decoding apparatus derives a modified corresponding position based on the corresponding position and the offset information (S830).
- the decoding apparatus can derive a position shifted by the offset information at the corresponding position to the corrected corresponding position.
- the modified corresponding position may be derived in a search range based on the corresponding position.
- the x-component range of the search range may be from -N/2 to +N/2 based on the x component of the corresponding position, and the y-component range of the search range may be from -N/2 to +N/2 based on the y component of the corresponding position; the search range may be set beforehand.
- the value indicated by the x offset information included in the offset information and the value indicated by the y offset information may indicate the magnitude of the x offset and the y offset, and the sign of the values may be fixed to a negative value.
- a moving unit for the x offset and the y offset may be preset. For example, the moving unit may be set to 1/2 sample.
- the x component of the modified corresponding position may be derived as the x component of the corresponding position minus the value of the x offset multiplied by the moving unit, and the y component of the modified corresponding position may be derived as the y component of the corresponding position minus the value of the y offset multiplied by the moving unit; the modified corresponding position may be derived based on Equation (9) described above.
- the decoding apparatus derives a downsampled restoration luma sample for the target chroma sample based on the restored luma sample of the modified corresponding position (S840).
- the decoding device may derive a downsampled reconstructed luma sample for the target chroma sample based on the reconstructed luma sample of the modified corresponding location.
- the decoding device may perform downsampling based on a downsampling filter and a reconstructed luma sample of the modified corresponding location, and derive the downsampled reconstructed luma sample for the targeted chroma sample.
- for example, when the downsampling filter is derived as shown in Equation (8), the downsampling may be performed as in the following equation:
- Rec'L[x, y] = RecL[2x - offset_x, 2y - offset_y]
- where Rec'L[x, y] is the downsampled reconstructed luma sample for the target chroma sample of the (x, y) coordinates, RecL[2x - offset_x, 2y - offset_y] is the reconstructed luma sample of the modified corresponding position in the luma block for the target chroma sample of the (x, y) coordinates, offset_x may represent the x offset, and offset_y may represent the y offset.
- the reconstructed luma sample of the modified corresponding position may be derived through interpolation of reconstructed luma samples of the integer sample position around the modified position.
- the interpolation may be performed using an interpolation filter used in inter prediction, an interpolation filter used in intra prediction, or a new interpolation filter defined for the present invention to derive the reconstructed luma sample at the modified corresponding position.
- the decoding apparatus generates a prediction sample of the target chroma sample based on the linear model parameters and the downsampled restoration luma sample (S850).
- the decoding apparatus may generate the prediction samples of the target chroma samples of the current chroma block based on the linear model parameters and the downsampled reconstructed luma samples.
- the linear model parameters may include a coefficient and an offset.
- the coefficient may be referred to as a scaling factor.
- the decoding apparatus may generate the predicted sample of the target chroma sample, i.e., the prediction block of the current chroma block, by substituting the downsampled restoration luma sample into the relational expression representing the linear model.
- the decoding apparatus may generate the prediction sample of the target chroma sample by adding the offset to the value obtained by multiplying the downsampled restoration luma sample by the coefficient.
- the sample of the prediction block of the current chroma block may be derived by adding the offset to a value obtained by multiplying the downsampled restoration luma sample by the coefficient.
- the prediction sample can be derived based on Equation (7).
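- putting the decoder-side steps S810 to S850 together for a single target chroma sample, under the same illustrative assumptions as the earlier sketches: an Equation (8)-style corresponding position of (2x, 2y), bilinear interpolation at fractional positions, a moving unit of 1/2 sample, and the assumed linear form of Equation (7).

```python
def predict_target_chroma_sample(rec_luma, x, y, x_offset, y_offset,
                                 coefficient, offset, bit_depth=10,
                                 moving_unit=0.5):
    """End-to-end sketch for one target chroma sample at (x, y)."""
    # S810/S830: corresponding position (2x, 2y) shifted by the signaled offsets
    fx = 2 * x - x_offset * moving_unit
    fy = 2 * y - y_offset * moving_unit
    # S840: downsampled reconstructed luma sample (bilinear stand-in)
    x0, y0 = int(fx), int(fy)
    ax, ay = fx - x0, fy - y0
    x1 = min(x0 + 1, len(rec_luma[0]) - 1)
    y1 = min(y0 + 1, len(rec_luma) - 1)
    top = (1 - ax) * rec_luma[y0][x0] + ax * rec_luma[y0][x1]
    bot = (1 - ax) * rec_luma[y1][x0] + ax * rec_luma[y1][x1]
    rec_l_prime = (1 - ay) * top + ay * bot
    # S850: linear model prediction, clipped to the valid sample range
    value = int(round(coefficient * rec_l_prime + offset))
    return max(0, min(value, (1 << bit_depth) - 1))
```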
- the decoding apparatus may use the prediction sample as a reconstruction sample of the current chroma block according to a prediction mode, or may add a residual sample to the prediction sample to generate a reconstruction sample.
- the decoder may receive information about the residual for the current chroma block if the residual sample for the current chroma block is present, and the information about the residual may be included in the information on the reconstructed sample.
- the information on the residual may include a transform coefficient relating to the residual sample.
- the decoding apparatus may derive the residual sample (or residual sample array) for the current chroma block based on the residual information.
- the decoding apparatus may generate a reconstructed sample based on the prediction sample and the residual sample, and may derive a reconstructed block or a reconstructed picture based on the reconstructed sample.
- the decoding apparatus can apply an in-loop filtering procedure such as deblocking filtering and / or SAO procedure to the restored picture in order to improve subjective / objective picture quality as necessary.
- the decoding apparatus can receive the prediction information for the current chroma block through the bitstream, and can perform entropy decoding.
- the prediction information may include a flag indicating whether to derive an intra prediction mode of the current chroma block based on a linear model.
- the flag may be received at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or a block level.
- the prediction information may include the offset information.
- the offset information may include x offset information indicating an x offset and y offset information indicating a y offset, and the offset information may be signaled through block-based syntax information.
- the prediction information may include flag information indicating whether the value of the offset information is 0 or not. For example, when the value of the flag information is 0, the flag information may indicate that the value of the offset information is not 0, and when the value of the flag information is 1, the flag information may indicate that the value of the offset information is 0.
- the flag information may be signaled through block-based syntax information.
- according to the present invention, when prediction of the current chroma block is performed using the luma component, downsampling can be performed based on a luma component at a position more suitable for the prediction of the current chroma block, so that the prediction accuracy of the current chroma block can be improved and the overall coding efficiency can be improved.
- prediction of a chroma component can be performed based on a plurality of downsampling filters, thereby improving prediction accuracy of the current chroma block, thereby improving overall coding efficiency.
- the above-described method according to the present invention can be implemented in software, and the encoding apparatus and/or decoding apparatus according to the present invention may be included in a device that performs image processing, for example, a TV, a computer, a smart phone, a set-top box, or a display device.
- the above-described method may be implemented by a module (a process, a function, and the like) that performs the above-described functions.
- the module is stored in memory and can be executed by the processor.
- the memory may be internal or external to the processor and may be coupled to the processor by any of a variety of well known means.
- the processor may comprise an application-specific integrated circuit (ASIC), other chipset, logic circuitry and / or a data processing device.
- the memory may include read-only memory (ROM), random access memory (RAM), flash memory, memory cards, storage media, and / or other storage devices.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The present invention relates to an image decoding method performed by a decoding apparatus, comprising the steps of: deriving linear model parameters for a current chroma block on the basis of neighboring samples of the current chroma block and neighboring samples of a luma block; deriving a corresponding position of a reconstructed luma sample in the luma block with respect to a target chroma sample of the current chroma block; obtaining offset information for the current chroma block through a bitstream; deriving a modified corresponding position on the basis of the corresponding position and the offset information; deriving a downsampled reconstructed luma sample for the target chroma sample on the basis of a reconstructed luma sample at the modified corresponding position; and generating a prediction sample for the target chroma sample on the basis of the linear model parameters and the downsampled reconstructed luma sample.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762543317P | 2017-08-09 | 2017-08-09 | |
US62/543,317 | 2017-08-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019031703A1 true WO2019031703A1 (fr) | 2019-02-14 |
Family
ID=65271657
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/007003 WO2019031703A1 (fr) | 2017-08-09 | 2018-06-21 | Appareil et procédé de décodage d'image conformément à un modèle linéaire dans un système de codage d'image |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019031703A1 (fr) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113099221A (zh) * | 2021-02-22 | 2021-07-09 | 浙江大华技术股份有限公司 | 跨分量样点自适应补偿方法、编码方法及相关装置 |
WO2023125771A1 (fr) * | 2021-12-29 | 2023-07-06 | Mediatek Inc. | Prédiction de modèle linéaire à composante transversale |
WO2024061136A1 (fr) * | 2022-09-19 | 2024-03-28 | Douyin Vision Co., Ltd. | Procédé, appareil et support de traitement vidéo |
WO2024186135A1 (fr) * | 2023-03-08 | 2024-09-12 | 엘지전자 주식회사 | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement stockant un flux binaire |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015137786A1 (fr) * | 2014-03-14 | 2015-09-17 | 삼성전자 주식회사 | Procédé et appareil de codage/décodage de vidéo extensible |
US9288500B2 (en) * | 2011-05-12 | 2016-03-15 | Texas Instruments Incorporated | Luma-based chroma intra-prediction for video coding |
- 2018-06-21 WO PCT/KR2018/007003 patent/WO2019031703A1/fr active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9288500B2 (en) * | 2011-05-12 | 2016-03-15 | Texas Instruments Incorporated | Luma-based chroma intra-prediction for video coding |
WO2015137786A1 (fr) * | 2014-03-14 | 2015-09-17 | 삼성전자 주식회사 | Procédé et appareil de codage/décodage de vidéo extensible |
Non-Patent Citations (3)
Title |
---|
CHEN, JIANLE ET AL.: "On Phase Alignment of Up-sampling Process in SHVC", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, JCTVC-00215, 15TH MEETING, 1 November 2013 (2013-11-01), Geneva, CH, XP030115264 * |
CHEN, JIANLE ET AL.: "SHVC Test Model 11 (SHM 11) Introduction and Encoder Description", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/EEC JTC 1/SC 29/WG 11, JCTYC-Y1007, 22ND MEETING, 21 October 2015 (2015-10-21), Geneva, CH, XP030117648 * |
FRANCOIS, EDOUARD ET AL.: "Use of Chroma Phase in LM Mode", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, JCTVC-10188, 9TH MEETING, 7 May 2012 (2012-05-07), Geneva, CH, XP030052772 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113099221A (zh) * | 2021-02-22 | 2021-07-09 | 浙江大华技术股份有限公司 | 跨分量样点自适应补偿方法、编码方法及相关装置 |
CN113099221B (zh) * | 2021-02-22 | 2023-06-02 | 浙江大华技术股份有限公司 | 跨分量样点自适应补偿方法、编码方法及相关装置 |
WO2023125771A1 (fr) * | 2021-12-29 | 2023-07-06 | Mediatek Inc. | Prédiction de modèle linéaire à composante transversale |
WO2024061136A1 (fr) * | 2022-09-19 | 2024-03-28 | Douyin Vision Co., Ltd. | Procédé, appareil et support de traitement vidéo |
WO2024186135A1 (fr) * | 2023-03-08 | 2024-09-12 | 엘지전자 주식회사 | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement stockant un flux binaire |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017052081A1 (fr) | Procédé et appareil de prédiction inter dans un système de codage d'images | |
WO2017034331A1 (fr) | Procédé et dispositif de prédiction intra d'échantillon de chrominance dans un système de codage vidéo | |
WO2017188566A1 (fr) | Procédé et appareil d'inter-prédiction dans un système de codage d'images | |
WO2017022973A1 (fr) | Procédé d'interprédiction, et dispositif, dans un système de codage vidéo | |
WO2017069590A1 (fr) | Procédé et dispositif de décodage d'image à base de modélisation dans un système de codage d'image | |
WO2017043786A1 (fr) | Procédé et dispositif de prédiction intra dans un système de codage vidéo | |
WO2018236051A1 (fr) | Procédé et dispositif de décodage d'image selon une prédiction intra dans un système de codage d'image | |
WO2017014412A1 (fr) | Procédé et dispositif de prédiction interne dans un système de codage vidéo | |
WO2012081879A1 (fr) | Procédé de décodage prédictif inter de films codés | |
WO2017188565A1 (fr) | Procédé et dispositif de décodage d'image dans un système de codage d'image | |
WO2018070632A1 (fr) | Procédé et dispositif de décodage vidéo dans un système de codage vidéo | |
WO2018021585A1 (fr) | Procédé et appareil d'intra-prédiction dans un système de codage d'image | |
WO2018056709A1 (fr) | Procédé et dispositif d'inter-prédiction dans un système de codage d'image | |
WO2018216862A1 (fr) | Procédé et dispositif de décodage d'image selon une prédiction intra dans un système de codage d'image | |
WO2019112071A1 (fr) | Procédé et appareil de décodage d'image basés sur une transformation efficace de composante de chrominance dans un système de codage d'image | |
WO2017048008A1 (fr) | Procédé et appareil de prédiction inter dans un système de codage vidéo | |
WO2017061671A1 (fr) | Procédé et dispositif de codage d'image basé sur une transformation adaptative dans un système de codage d'image | |
WO2019194507A1 (fr) | Procédé de codage d'image basé sur une prédiction de mouvement affine, et dispositif associé | |
WO2017195914A1 (fr) | Procédé et appareil d'inter-prédiction dans un système de codage vidéo | |
WO2018174357A1 (fr) | Procédé et dispositif de décodage d'image dans un système de codage d'image | |
WO2020141932A1 (fr) | Procédé et appareil de prédiction inter utilisant des mmvd de cpr | |
WO2017052272A1 (fr) | Procédé et appareil pour une prédiction intra dans un système de codage vidéo | |
WO2019031703A1 (fr) | Appareil et procédé de décodage d'image conformément à un modèle linéaire dans un système de codage d'image | |
WO2018128223A1 (fr) | Procédé et appareil d'inter-prédiction dans un système de codage d'image | |
WO2019117659A1 (fr) | Procédé de codage d'images basé sur l'élaboration d'un vecteur de mouvement, et dispositif associé |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18844793 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18844793 Country of ref document: EP Kind code of ref document: A1 |