WO2019112072A1 - Image decoding method and device based on a modified motion information candidate list in an image coding system - Google Patents

Image decoding method and device based on a modified motion information candidate list in an image coding system

Info

Publication number
WO2019112072A1
WO2019112072A1 (PCT/KR2017/014073)
Authority
WO
WIPO (PCT)
Prior art keywords
motion information
candidate
current block
candidate list
candidates
Prior art date
Application number
PCT/KR2017/014073
Other languages
English (en)
Korean (ko)
Inventor
서정동
Original Assignee
엘지전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사 filed Critical 엘지전자 주식회사
Priority to PCT/KR2017/014073
Publication of WO2019112072A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation

Definitions

  • the present invention relates to an image coding technique, and more particularly, to an image decoding method and apparatus based on a modified motion information candidate list of a current block in an image coding system.
  • HD: high definition
  • UHD: ultra high definition
  • the present invention provides a method and apparatus for enhancing video coding efficiency.
  • The present invention also provides a method and an apparatus for generating a modified motion information candidate list by updating the candidates of the motion information candidate list of a current block without receiving additional information, and for performing inter prediction based on the modified list.
  • According to an embodiment of the present invention, an image decoding method performed by a decoding apparatus includes the steps of: obtaining information on inter prediction of a current block through a bitstream; generating a motion information candidate list of the current block based on neighboring blocks of the current block; generating a modified motion information candidate list by updating the candidates of the motion information candidate list; deriving motion information of the current block based on the information on inter prediction and the modified motion information candidate list; and performing inter prediction of the current block based on the derived motion information.
  • a decoding apparatus for performing image decoding.
  • The decoding apparatus includes: an entropy decoding unit that obtains information on inter prediction of a current block through a bitstream; and a prediction unit that generates a motion information candidate list of the current block based on neighboring blocks of the current block, generates a modified motion information candidate list by updating the candidates of the motion information candidate list, derives motion information of the current block based on the information on inter prediction and the modified motion information candidate list, and performs inter prediction of the current block.
  • a video encoding method performed by an encoding apparatus.
  • The method includes: generating motion information for a current block; generating a motion information candidate list of the current block based on neighboring blocks of the current block; generating a modified motion information candidate list by updating the candidates of the motion information candidate list; and encoding and outputting information on inter prediction of the current block.
  • The video encoding apparatus includes: a prediction unit that generates motion information for a current block, generates a motion information candidate list of the current block based on neighboring blocks of the current block, and generates a modified motion information candidate list by updating the candidates of the motion information candidate list; and an entropy encoding unit that encodes and outputs information on inter prediction of the current block.
  • According to the present invention, a motion information candidate list including a modified candidate of the current block can be generated to derive more accurate motion information, thereby reducing the bit amount of the information for inter prediction of the current block and improving the overall coding efficiency.
  • According to the present invention, it is possible to generate a modified motion information candidate list by updating the candidates of the motion information candidate list without receiving additional information, thereby obtaining more accurate motion information.
  • the bit amount of the information for the inter prediction of the current block can be reduced and the overall coding efficiency can be improved.
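The mechanism summarized above, reordering the candidates of a motion information candidate list without any additional signaling, can be sketched using template matching, one of the update criteria described later in this document. This is a hypothetical illustration rather than the claimed procedure itself; the function names and the flat-list template representation are assumptions of the sketch.

```python
# Hypothetical sketch: reorder motion information candidates by template cost.
# The "template" is a set of already-reconstructed samples neighboring the
# current block; both encoder and decoder can compute the same costs, so no
# additional information needs to be transmitted.

def sad(a, b):
    """Sum of absolute differences between two equal-length sample lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def reorder_candidates(candidates, cur_template, fetch_ref_template):
    """Sort candidates by ascending template cost; stable on ties so the
    original candidate order is preserved when costs are equal."""
    costs = [(sad(cur_template, fetch_ref_template(c)), i, c)
             for i, c in enumerate(candidates)]
    costs.sort()
    return [c for _, _, c in costs]
```

Candidates whose motion vectors point to reference regions resembling the current block's reconstructed neighborhood move to the front of the list, so the most likely candidate receives the shortest index code.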
  • FIG. 1 is a view for schematically explaining a configuration of a video encoding apparatus to which the present invention can be applied.
  • FIG. 2 is a schematic view illustrating a configuration of a video decoding apparatus to which the present invention can be applied.
  • FIG. 3 shows neighboring blocks for generating a motion information candidate list of the current block.
  • FIG. 4 shows an example of a template of the current block.
  • FIG. 5 shows an example of updating a motion information candidate list based on the template of the current block.
  • FIG. 6 shows an example of updating a motion information candidate list based on the template of the current block.
  • FIG. 7 shows an example of updating a motion information candidate list based on the template of the current block.
  • FIG. 8 shows an example of updating the motion information candidate list based on the priorities derived according to the directions of the motion vectors of the candidates.
  • FIG. 9 shows an example of updating the motion information candidate list based on the template of the current block and the priorities derived according to the directions of the motion vectors of the candidates of the motion information candidate list.
  • FIG. 10 schematically shows a video encoding method by an encoding apparatus according to the present invention.
  • FIG. 11 schematically shows a video decoding method by a decoding apparatus according to the present invention.
  • a picture generally refers to a unit that represents one image in a specific time period
  • a slice is a unit that constitutes a part of a picture in coding.
  • One picture may be composed of a plurality of slices, and the terms picture and slice may be used interchangeably if necessary.
  • a pixel or a pel may mean a minimum unit of a picture (or image). Also, a 'sample' may be used as a term corresponding to a pixel.
  • a sample may generally represent a pixel or pixel value and may only represent a pixel / pixel value of a luma component or only a pixel / pixel value of a chroma component.
  • a unit represents a basic unit of image processing.
  • a unit may include at least one of a specific area of a picture and information related to the area.
  • the unit may be used in combination with terms such as a block or an area in some cases.
  • an MxN block may represent a set of samples or transform coefficients consisting of M columns and N rows.
  • FIG. 1 is a view for schematically explaining a configuration of a video encoding apparatus to which the present invention can be applied.
  • the video encoding apparatus 100 includes a picture dividing unit 105, a predicting unit 110, a residual processing unit 120, an adding unit 140, a filter unit 150, and a memory 160 .
  • the residual processing unit 120 may include a subtracting unit 121, a transforming unit 122, a quantizing unit 123, a reordering unit 124, an inverse quantizing unit 125 and an inverse transforming unit 126.
  • the picture dividing unit 105 may divide the inputted picture into at least one processing unit.
  • the processing unit may be referred to as a coding unit (CU).
  • the coding unit may be recursively partitioned according to a quad-tree binary-tree (QTBT) structure from the largest coding unit (LCU).
  • QTBT: quad-tree binary-tree
  • LCU: largest coding unit
  • one coding unit may be divided into a plurality of coding units of deeper depth based on a quadtree structure and / or a binary tree structure.
  • the quadtree structure is applied first and the binary tree structure can be applied later.
  • a binary tree structure may be applied first.
  • the coding procedure according to the present invention can be performed based on the final coding unit which is not further divided.
  • the maximum coding unit may be used directly as the final coding unit based on coding efficiency or the like depending on the image characteristics, or the coding unit may be recursively divided into coding units of deeper depth as necessary, and a coding unit of an optimal size may be used as the final coding unit.
  • the coding procedure may include a procedure such as prediction, conversion, and restoration, which will be described later.
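The recursive splitting described above can be illustrated with a minimal quad-tree-only sketch (the binary-tree stage of QTBT is omitted for brevity). The `is_final` callback stands in for the encoder's split decision and is an assumption of this sketch.

```python
# Minimal sketch of quad-tree partitioning: an LCU is recursively split into
# four equal sub-blocks until a per-block decision says the block is a final CU.

def quadtree_split(x, y, size, is_final):
    """Return the final coding units as (x, y, size) tuples."""
    if is_final(x, y, size):
        return [(x, y, size)]
    half = size // 2
    cus = []
    for dy in (0, half):          # visit the four quadrants
        for dx in (0, half):
            cus += quadtree_split(x + dx, y + dy, half, is_final)
    return cus
```

For example, splitting a 64x64 LCU where only the top-left 32x32 quadrant is split further yields seven final CUs.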
  • the processing unit may include a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • the coding unit may be split from the largest coding unit (LCU) into coding units of deeper depth along the quad tree structure.
  • LCU: largest coding unit
  • the maximum coding unit may be used directly as the final coding unit based on coding efficiency or the like depending on the image characteristics, or the coding unit may be recursively divided into coding units of deeper depth as necessary, and a coding unit of an optimal size may be used as the final coding unit.
  • SCU: smallest coding unit
  • the coding unit cannot be divided into coding units smaller than the minimum coding unit.
  • the term "final coding unit" means a coding unit from which the prediction unit or the transform unit is partitioned or divided.
  • a prediction unit is a unit that is partitioned from a coding unit, and may be a unit of sample prediction. At this time, the prediction unit may be divided into sub-blocks.
  • the transform unit may be split from the coding unit along the quad-tree structure, and may be a unit for deriving transform coefficients and/or a unit for deriving a residual signal from the transform coefficients.
  • the coding unit may be referred to as a coding block (CB)
  • the prediction unit may be referred to as a prediction block (PB)
  • the transform unit may be referred to as a transform block (TB).
  • the prediction block or prediction unit may refer to a specific area in the form of a block in a picture and may include an array of prediction samples.
  • a transform block or transform unit may refer to a specific region in the form of a block within a picture, and may include an array of transform coefficients or residual samples.
  • the prediction unit 110 may perform a prediction on a current block to be processed (hereinafter, referred to as a current block), and may generate a predicted block including prediction samples for the current block.
  • the unit of prediction performed in the prediction unit 110 may be a coding block, a transform block, or a prediction block.
  • the prediction unit 110 may determine whether intra prediction or inter prediction is applied to the current block. For example, the prediction unit 110 may determine whether intra prediction or inter prediction is applied in units of CU.
  • In the case of intra prediction, the prediction unit 110 may derive a prediction sample for a current block based on reference samples outside the current block in the picture to which the current block belongs (hereinafter referred to as the current picture). At this time, the prediction unit 110 may (i) derive the prediction sample based on an average or interpolation of neighboring reference samples of the current block, or (ii) derive the prediction sample based on a reference sample existing in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block. The case (i) may be referred to as a non-directional mode or a non-angular mode, and the case (ii) may be referred to as a directional mode or an angular mode.
  • the prediction mode may have, for example, 33 directional prediction modes and at least two non-directional modes.
  • the non-directional mode may include a DC prediction mode and a planar mode.
  • the prediction unit 110 may determine a prediction mode applied to a current block using a prediction mode applied to a neighboring block.
  • the prediction unit 110 may derive a prediction sample for a current block based on a sample specified by a motion vector on a reference picture.
  • the prediction unit 110 may derive a prediction sample for a current block by applying one of a skip mode, a merge mode, and a motion vector prediction (MVP) mode.
  • the prediction unit 110 can use motion information of a neighboring block as motion information of a current block.
  • In the case of the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
  • In the case of the MVP mode, the motion vector of a neighboring block is used as a motion vector predictor of the current block, and the motion vector of the current block can be derived from it.
  • a neighboring block may include a spatial neighboring block existing in a current picture and a temporal neighboring block existing in a reference picture.
  • the reference picture including the temporal neighboring block may be referred to as a collocated picture (colPic).
  • the motion information may include a motion vector and a reference picture index.
  • Information such as prediction mode information and motion information may be (entropy) encoded and output in the form of a bit stream.
  • In the skip mode and the merge mode, when motion information of a temporal neighboring block is used, the highest picture on the reference picture list may be used as the reference picture.
  • the reference pictures included in the reference picture list can be sorted on the basis of the picture order count (POC) difference between the current picture and the corresponding reference picture.
  • POC: picture order count
  • the POC corresponds to the display order of the pictures and can be distinguished from the coding order.
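The ordering described above might be sketched as follows (an illustration only; candidates tied in POC distance simply keep their original relative order here):

```python
# Sketch: order reference pictures so that those closest to the current
# picture in output order (smallest absolute POC difference) come first.

def sort_refs_by_poc_distance(cur_poc, ref_pocs):
    return sorted(ref_pocs, key=lambda poc: abs(cur_poc - poc))
```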
  • the subtraction unit 121 generates residual samples that are the difference between the original sample and the predicted sample. When the skip mode is applied, a residual sample may not be generated as described above.
  • the transforming unit 122 transforms the residual samples on a transform block basis to generate a transform coefficient.
  • the transforming unit 122 may perform the transform according to the size of the transform block and the prediction mode applied to the coding block or prediction block spatially overlapping the transform block. For example, if intra prediction is applied to the coding block or prediction block overlapping the transform block and the transform block is a 4×4 residual array, the residual samples are transformed using a discrete sine transform (DST) kernel; in other cases, the residual samples may be transformed using a discrete cosine transform (DCT) kernel.
  • DST: discrete sine transform
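The kernel selection rule described above reduces to a small decision, sketched here (the function name and string return values are illustrative):

```python
# Sketch of the transform kernel choice described above: DST for an
# intra-predicted 4x4 residual block, DCT otherwise.

def pick_transform_kernel(pred_mode, tb_width, tb_height):
    if pred_mode == "intra" and tb_width == 4 and tb_height == 4:
        return "DST"
    return "DCT"
```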
  • the quantization unit 123 may quantize the transform coefficients to generate quantized transform coefficients.
  • the reordering unit 124 rearranges the quantized transform coefficients.
  • the reordering unit 124 may rearrange the block-shaped quantized transform coefficients into a one-dimensional vector form through a scanning method of coefficients.
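One possible scan of this kind is a diagonal scan, sketched below; the actual scan pattern depends on the codec configuration, so this is only an illustration.

```python
# Sketch: read an N x N block of quantized coefficients into a 1-D vector
# by walking its anti-diagonals from the top-left corner.

def diagonal_scan(block):
    n = len(block)
    out = []
    for s in range(2 * n - 1):        # s = row index + column index of a diagonal
        for r in range(n):
            c = s - r
            if 0 <= c < n:
                out.append(block[r][c])
    return out
```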
  • Although the reordering unit 124 is described as a separate component, the reordering unit 124 may be a part of the quantization unit 123.
  • the entropy encoding unit 130 may perform entropy encoding on the quantized transform coefficients.
  • Entropy encoding may include, for example, encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
  • CAVLC: context-adaptive variable length coding
  • CABAC: context-adaptive binary arithmetic coding
  • the entropy encoding unit 130 may encode the information necessary for video restoration (such as the value of a syntax element) together with the quantized transform coefficient or separately.
  • the entropy encoded information may be transmitted or stored in units of NAL (network abstraction layer) units in the form of a bit stream.
  • NAL: network abstraction layer
  • the inverse quantization unit 125 inversely quantizes the values quantized by the quantization unit 123 (quantized transform coefficients), and the inverse transformation unit 126 inversely transforms the values inversely quantized by the inverse quantization unit 125 to generate residual samples.
  • the adder 140 combines the residual sample and the predicted sample to reconstruct the picture.
  • the residual samples and the prediction samples are added in units of blocks so that a reconstruction block can be generated.
  • Although the adding unit 140 is described as a separate component, the adding unit 140 may be a part of the prediction unit 110. Meanwhile, the adding unit 140 may be referred to as a restoration unit or a restoration block generation unit.
  • the filter unit 150 may apply a deblocking filter and / or a sample adaptive offset. Through deblocking filtering and / or sample adaptive offsets, artifacts in the block boundary in the reconstructed picture or distortion in the quantization process can be corrected.
  • the sample adaptive offset can be applied on a sample-by-sample basis and can be applied after the process of deblocking filtering is complete.
  • the filter unit 150 may apply an ALF (Adaptive Loop Filter) to the restored picture.
  • the ALF may be applied to the reconstructed picture after the deblocking filter and / or sample adaptive offset is applied.
  • the memory 160 may store restored pictures (decoded pictures) or information necessary for encoding / decoding.
  • the reconstructed picture may be a reconstructed picture whose filtering procedure has been completed by the filter unit 150.
  • the stored restored picture may be used as a reference picture for (inter) prediction of another picture.
  • the memory 160 may store (reference) pictures used for inter prediction. At this time, the pictures used for inter prediction can be designated by a reference picture set or a reference picture list.
  • FIG. 2 is a schematic view illustrating a configuration of a video decoding apparatus to which the present invention can be applied.
  • the video decoding apparatus 200 includes an entropy decoding unit 210, a residual processing unit 220, a predicting unit 230, an adding unit 240, a filter unit 250, and a memory 260 .
  • the residual processing unit 220 may include a rearrangement unit 221, an inverse quantization unit 222, and an inverse transformation unit 223.
  • the video decoding apparatus 200 can restore video in response to a process in which video information is processed in the video encoding apparatus.
  • the video decoding apparatus 200 can perform video decoding using a processing unit applied in the video encoding apparatus.
  • the processing unit block of video decoding may be, for example, a coding unit or, in another example, a coding unit, a prediction unit, or a transform unit.
  • the coding unit may be partitioned along the quad tree structure and / or the binary tree structure from the maximum coding unit.
  • a prediction unit and a transform unit may be further used in some cases; in that case, the prediction block is a block derived or partitioned from the coding unit and may be a unit of sample prediction. At this time, the prediction unit may be divided into sub-blocks.
  • the transform unit may be split from the coding unit along the quad-tree structure, and may be a unit for deriving transform coefficients or a unit for deriving a residual signal from the transform coefficients.
  • the entropy decoding unit 210 may parse the bitstream and output information necessary for video restoration or picture restoration. For example, the entropy decoding unit 210 decodes information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and outputs the value of a syntax element necessary for video restoration and quantized values of transform coefficients for the residual.
  • More specifically, the CABAC entropy decoding method includes: receiving a bin corresponding to each syntax element in the bitstream; determining a context model using the decoding target syntax element information, decoding information of neighboring blocks and the decoding target block, or information of a symbol/bin decoded in a previous step; predicting the occurrence probability of a bin according to the determined context model; and performing arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
  • the CABAC entropy decoding method can update the context model using the information of the decoded symbol/bin for the context model of the next symbol/bin after determining the context model.
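The adaptation idea in the two points above can be shown with a deliberately simplified probability model. This is not the standard's actual CABAC state machine; the class name and the exponential update rule are assumptions of the sketch.

```python
# Simplified illustration of context-model adaptation: each context keeps an
# estimate of the probability that the next bin is 1, and the estimate moves
# toward every decoded bin value, so frequent bin values become cheap to code.

class Context:
    def __init__(self, p_one=0.5, rate=1 / 16):
        self.p_one = p_one    # estimated probability that the next bin is 1
        self.rate = rate      # adaptation speed

    def update(self, bin_value):
        target = 1.0 if bin_value else 0.0
        self.p_one += self.rate * (target - self.p_one)
```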
  • the residual value, i.e., the quantized transform coefficient, entropy-decoded by the entropy decoding unit 210 may be input to the reordering unit 221.
  • the reordering unit 221 may rearrange the quantized transform coefficients into a two-dimensional block form.
  • the reordering unit 221 may perform reordering in response to the coefficient scanning performed in the encoding apparatus.
  • Although the rearrangement unit 221 is described as a separate component, the rearrangement unit 221 may be a part of the inverse quantization unit 222.
  • the inverse quantization unit 222 may dequantize the quantized transform coefficients based on the (inverse) quantization parameters, and output the transform coefficients. At this time, the information for deriving the quantization parameter may be signaled from the encoding device.
  • the inverse transform unit 223 may inverse-transform the transform coefficients to derive the residual samples.
  • the prediction unit 230 may predict a current block and may generate a predicted block including prediction samples of the current block.
  • the unit of prediction performed in the prediction unit 230 may be a coding block, a transform block, or a prediction block.
  • the prediction unit 230 may determine whether intra prediction or inter prediction is to be applied based on the prediction information.
  • a unit for determining whether to apply intra prediction or inter prediction may differ from a unit for generating a prediction sample.
  • units for generating prediction samples in inter prediction and intra prediction may also be different.
  • whether inter prediction or intra prediction is to be applied can be determined in units of CU.
  • the prediction mode may be determined in units of PU to generate prediction samples.
  • a prediction mode may be determined in units of PU, and prediction samples may be generated in units of TU.
  • the prediction unit 230 may derive a prediction sample for the current block based on the surrounding reference samples in the current picture.
  • the prediction unit 230 may apply a directional mode or a non-directional mode based on the neighbor reference samples of the current block to derive a prediction sample for the current block.
  • a prediction mode to be applied to the current block may be determined using the intra prediction mode of the neighboring block.
  • the prediction unit 230 may derive a prediction sample for a current block based on a sample specified on a reference picture by a motion vector on a reference picture.
  • the prediction unit 230 may derive a prediction sample for a current block by applying a skip mode, a merge mode, or an MVP mode.
  • In the case of inter prediction, motion information necessary for the inter prediction of the current block provided by the video encoding apparatus, for example, information on a motion vector, a reference picture index, and the like, may be acquired or derived based on the prediction information.
  • motion information of a neighboring block can be used as motion information of the current block.
  • the neighboring block may include a spatial neighboring block and a temporal neighboring block.
  • the prediction unit 230 may construct a merge candidate list using the motion information of the available neighboring blocks and use the information indicated by the merge index on the merge candidate list as the motion vector of the current block.
  • the merge index may be signaled from the encoding device.
  • the motion information may include a motion vector and a reference picture index. When motion information of a temporal neighboring block is used in the skip mode and the merge mode, the highest picture on the reference picture list can be used as the reference picture.
  • In the case of the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
  • a motion vector of a current block can be derived using a motion vector of a neighboring block as a motion vector predictor.
  • the neighboring block may include a spatial neighboring block and a temporal neighboring block.
  • a merge candidate list may be generated using a motion vector of the reconstructed spatial neighboring block and / or a motion vector corresponding to a Col block that is a temporally neighboring block.
  • the motion vector of the candidate block selected in the merge candidate list is used as the motion vector of the current block.
  • the prediction information may include a merge index indicating a candidate block having an optimal motion vector selected from the candidate blocks included in the merge candidate list.
  • the predicting unit 230 can derive the motion vector of the current block using the merge index.
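The merge-mode behavior described in the points above can be sketched as follows; the availability handling and duplicate pruning are simplified, and the names are illustrative.

```python
# Sketch: build a merge candidate list from available, non-duplicate neighbor
# motion information, then pick the entry indicated by the signaled merge index.

def build_merge_list(spatial_cands, temporal_cand, max_cands=5):
    cands = []
    for c in spatial_cands + [temporal_cand]:
        if c is not None and c not in cands:   # skip unavailable / duplicate
            cands.append(c)
        if len(cands) == max_cands:
            break
    return cands

def decode_merge(spatial_cands, temporal_cand, merge_index):
    return build_merge_list(spatial_cands, temporal_cand)[merge_index]
```

Because both encoder and decoder construct the same list, only the merge index needs to be signaled.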
  • In the case of the MVP mode, a motion vector predictor candidate list is generated by using the motion vector of a reconstructed spatial neighboring block and/or the motion vector corresponding to a Col block, which is a temporal neighboring block. That is, the motion vector of the reconstructed spatial neighboring block and/or the motion vector corresponding to the Col block may be used as motion vector candidates.
  • the information on the prediction may include a predicted motion vector index indicating an optimal motion vector selected from the motion vector candidates included in the list.
  • the predicting unit 230 can use the motion vector index to select a predictive motion vector of the current block from the motion vector candidates included in the motion vector candidate list.
  • the predicting unit of the encoding apparatus can obtain the motion vector difference (MVD) between the motion vector of the current block and the motion vector predictor, and can output it as a bit stream. That is, MVD can be obtained by subtracting the motion vector predictor from the motion vector of the current block.
  • the predicting unit 230 may obtain the motion vector difference included in the information on the prediction, and derive the motion vector of the current block through addition of the motion vector difference and the motion vector predictor.
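The relationship in the two points above is just a vector difference and its inverse:

```python
# Encoder side: MVD = MV - MVP.  Decoder side: MV = MVP + MVD.

def encode_mvd(mv, mvp):
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def decode_mv(mvp, mvd):
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```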
  • the prediction unit may also acquire or derive a reference picture index or the like indicating the reference picture from the information on the prediction.
  • the adder 240 may add a residual sample and a prediction sample to reconstruct a current block or a current picture.
  • the adder 240 may add the residual samples and the prediction samples on a block-by-block basis to reconstruct the current picture.
  • the adder 240 has been described as an alternative configuration, but the adder 240 may be a part of the predictor 230.
  • the addition unit 240 may be referred to as a restoration unit or a restoration block generation unit.
  • the filter unit 250 may apply deblocking filtering, a sample adaptive offset, and/or an ALF to the reconstructed picture.
  • the sample adaptive offset may be applied on a sample-by-sample basis and may be applied after deblocking filtering.
  • the ALF may be applied after deblocking filtering and / or sample adaptive offsets.
  • the memory 260 may store restored pictures (decoded pictures) or information necessary for decoding.
  • the reconstructed picture may be a reconstructed picture whose filtering procedure has been completed by the filter unit 250.
  • the memory 260 may store pictures used for inter prediction.
  • the pictures used for inter prediction may be designated by a reference picture set or a reference picture list.
  • the reconstructed picture can be used as a reference picture for another picture.
  • the memory 260 may output the restored picture according to the output order.
  • the inter prediction may be performed through motion compensation using motion information.
  • the motion information for the current block may be generated by applying a skip mode, a merge mode, or an adaptive motion vector prediction (AMVP) mode, and may be encoded and output.
  • the motion information may include L0 motion information for the L0 direction and / or L1 motion information for the L1 direction.
  • the L0 motion information may include an L0 reference picture index indicating an L0 reference picture included in a reference picture list L0 (List 0, L0) for the current block, and a motion vector L0 (Motion Vector L0, MVL0).
  • the L1 motion information may include an L1 reference picture index indicating an L1 reference picture included in a reference picture list L1 (List 1, L1) for the current block, and an MVL1.
  • the L0 direction may be referred to as a past direction or a forward direction.
  • the L1 direction may be referred to as a future direction or a reverse direction.
  • the reference picture list L0 may include pictures preceding the current picture in output order.
  • the reference picture list L1 may include pictures after the current picture in the output order.
  • the MVL0 may be referred to as an L0 motion vector
  • the MVL1 may be referred to as an L1 motion vector.
  • when inter prediction is performed on the current block based on the L0 motion information, it may be referred to as L0 prediction.
  • when inter prediction is performed based on the L1 motion information, it may be referred to as L1 prediction.
  • when inter prediction is performed based on both the L0 motion information and the L1 motion information, it may be referred to as bi-prediction.
  • methods of transmitting the motion information may include a method of directly encoding and transmitting the motion information (for example, the AMVP mode) and a method of generating a candidate list based on motion information of neighboring blocks of the current block and signaling an index into that list (for example, the merge mode).
  • a method of directly encoding and transmitting the motion information (for example, the AMVP mode)
  • a method of generating a candidate list based on motion information of neighboring blocks of the current block and signaling an index into that list (for example, the merge mode)
  • the process of encoding the motion vector of the current block in the image coding system includes a motion vector prediction process and a process of encoding a motion vector difference (MVD) between the motion vector and the motion vector predictor.
  • the motion vector prediction process is proposed.
  • the motion vector prediction step may include constructing a motion information candidate list by deriving motion information of neighboring blocks of the current block, and selecting one of the motion information candidates included in the motion information candidate list and using it as a motion vector predictor (MVP).
  • the motion information candidate list may be referred to as an MVP candidate list.
  • the syntax for encoding the motion vector in inter prediction may be as shown in the following table.
  • inter_pred_idc may indicate a syntax element of an index indicating the direction of inter prediction of the current block.
  • inter_pred_idc may indicate a syntax element indicating whether the inter prediction performed on the current block is L0 prediction, L1 prediction, or bi-prediction.
  • ref_idx_l0 is a syntax element indicating the reference picture of the current block among the reference pictures included in the L0.
  • ref_idx_l1 is a syntax element indicating the reference picture of the current block among the reference pictures included in the L1.
  • mvp_l0_flag is a syntax element of a flag indicating the motion vector predictor L0 (MVPL0) of the current block among the MVPL0 candidates included in the MVPL0 candidate list.
  • mvp_l1_flag is a syntax element of a flag indicating the motion vector predictor L1 (MVPL1) of the current block among the MVPL1 candidates included in the MVPL1 candidate list.
  • the syntax element mvd_coding carries the MVD information of the current block and may include syntax elements representing, for example, a motion vector difference L0 (MVDL0) and/or a motion vector difference L1 (MVDL1).
  • the MVPL0 candidate list may represent a motion information candidate list including MVPL0 candidates for a reference picture included in the L0, and MVPL0 may represent an MVP associated with a reference picture included in the L0.
  • the MVPL1 candidate list may represent a motion information candidate list including MVPL1 candidates for the reference pictures included in the L1, and MVPL1 may represent MVPs associated with the reference pictures included in the L1.
  • the ref_idx_l0 may be transmitted (signaled) when the L0 prediction or the bi-prediction is performed on the current block, and the ref_idx_l1 may be transmitted when the L1 prediction or the bi-prediction is performed on the current block.
  • the mvp_l0_flag may be transmitted when the L0 prediction or the bi-prediction is performed on the current block, and the mvp_l1_flag may be transmitted when the L1 prediction or the bi-prediction is performed on the current block.
  • the mvd_coding may include the MVDL0 and/or the MVDL1; each MVD may be composed of an x component and a y component, and the sign and magnitude of each component may be transmitted separately.
  • mvp_l0_flag or mvp_l1_flag may indicate a syntax element of an index indicating a candidate included in the MVPL0 candidate list or the MVPL1 candidate list; since the MVPL0 candidate list or the MVPL1 candidate list may include two MVP candidates and one of the two MVP candidates is selected as the MVP of the current block, the index can be expressed by a flag.
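  • The flag-based MVP selection and MVD refinement described above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the patent's decoding process; the names reconstruct_mv, mvp_candidates, and mvd are hypothetical.

```python
# Hedged sketch: a decoder picks the MVP indicated by mvp_l0_flag /
# mvp_l1_flag from a two-candidate list and adds the decoded MVD
# (x and y components) to reconstruct the motion vector.

def reconstruct_mv(mvp_candidates, mvp_flag, mvd):
    """Select the MVP by flag (list has two entries) and add the MVD."""
    mvp = mvp_candidates[mvp_flag]  # a flag suffices: only two candidates
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Example: the flag selects the second candidate, the MVD refines it.
candidates = [(4, -2), (1, 3)]
mv = reconstruct_mv(candidates, 1, (2, -1))  # → (3, 2)
```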
  • the motion information candidate list of the current block may be configured as described below.
  • FIG. 3 shows neighboring blocks for generating a motion information candidate list of the current block.
  • the motion information candidate list of the current block may be derived based on the neighboring blocks of the current block. That is, the merge candidate list or the MVP candidate list may be constructed based on neighboring blocks at predetermined positions around the target block. For example, as shown in FIG. 3, the merge candidate list or the MVP candidate list may be constructed based on two blocks A0 310 and A1 320 located on the left side of the target block and three blocks B0 330, B1 340, and B2 350 located above the target block.
  • A0 310 may be referred to as a lower left neighboring block
  • A1 320 may be referred to as a left neighboring block.
  • B0 330 may be referred to as an upper right neighboring block.
  • B1 340 may be referred to as an upper neighboring block.
  • B2 350 may be referred to as an upper left neighboring block.
  • one of the motion vector of the A0 310 and the motion vector of the A1 320 may be derived as a motion information candidate for the reference picture list of the current block.
  • the reference picture list may indicate the L0 or the L1
  • the motion information candidate may indicate the MVP candidate.
  • when the reference picture of the A0 310 and the reference picture of the A1 320 are different from the reference picture of the current block included in the reference picture list of the current block, the motion vector of the A0 310 or the motion vector of the A1 320 may not be derived directly as the motion information candidate of the current block.
  • the motion vector of the A0 310 or the motion vector of the A1 320 may be scaled, and the scaled motion vector may be derived as a motion information candidate of the current block.
  • one of the motion vector of the B0 330, the motion vector of the B1 340, and the motion vector of the B2 350 may be derived as a motion information candidate for the reference picture list.
  • the reference picture list may represent L0 or L1.
  • when the reference picture of the B0 330, the reference picture of the B1 340, and the reference picture of the B2 350 are different from the reference picture of the current block included in the reference picture list of the current block, the motion vector of the B0 330, the motion vector of the B1 340, and the motion vector of the B2 350 may not be derived directly as motion information candidates of the current block.
  • in this case, the motion vector of the B0 330, the motion vector of the B1 340, or the motion vector of the B2 350 may be scaled, and the scaled motion vector may be derived as a motion information candidate of the current block.
  • when a motion information candidate derived based on A0 and A1 is identical to a motion information candidate derived based on B0, B1, and B2, one of the identical motion information candidates may be deleted.
  • a motion vector of a collocated block may be scaled,
  • the scaled motion vector may be inserted as a motion information candidate in the motion information candidate list of the current block.
  • the co-located block may represent a corresponding block of the current block in a collocated picture, and the motion information candidate list may represent an MVP candidate list.
  • the co-located block may be referred to as a temporal neighboring block.
  • the zero vector may be inserted as the motion information candidate in the motion information candidate list even after the above process is performed.
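  • The list construction steps above (a spatial candidate from the left group and from the above group, scaling when reference pictures differ, duplicate pruning, a scaled temporal candidate, and zero-vector fill) can be sketched roughly as follows. This is a simplified illustration under assumptions: the function names, the already-derived candidate inputs, and the linear picture-distance scaling are not taken from the source.

```python
# Hedged sketch of AMVP-style MVP candidate list assembly.

def scale_mv(mv, tb, td):
    """Scale a motion vector by the picture-distance ratio tb/td
    (simplified linear scaling; real codecs use fixed-point math)."""
    return (round(mv[0] * tb / td), round(mv[1] * tb / td))

def build_mvp_list(left_cand, above_cand, temporal_cand, max_cands=2):
    """Each argument is an already-derived (possibly scaled) MV or None.
    Duplicates are pruned; zero vectors fill any remaining slots."""
    cands = []
    for mv in (left_cand, above_cand):      # spatial candidates
        if mv is not None and mv not in cands:
            cands.append(mv)
    if temporal_cand is not None and len(cands) < max_cands:
        cands.append(temporal_cand)          # scaled temporal candidate
    while len(cands) < max_cands:
        cands.append((0, 0))                 # zero-vector fallback
    return cands[:max_cands]
```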
  • the motion information candidate list may be derived as described above, but the motion information candidate list may be updated by further considering the receiving end motion information derivation method.
  • the motion vector prediction accuracy of the current block can be improved by updating the motion information candidate list. That is, MVP, which is more similar to the motion vector of the current block, can be derived through updating of the motion information candidate list. If the motion vector prediction accuracy of the current block is improved, the bit amount for MVD of the current block can be reduced and the overall video encoding / decoding efficiency can be improved.
  • the receiving end side motion information derivation method may represent a method of deriving motion information based on a template of the current block and a template of a reference block in a reference picture of the current block. That is, a reference block for a template having a minimum difference from the template of the current block can be derived, and motion information (or a motion vector) representing the reference block can be derived.
  • the decoding apparatus may set an arbitrary peripheral region of the current block as the template of the current block and perform a motion information search for the current block using a template of the same shape as the template of the current block on the reference picture.
  • FIG. 4 shows an example of a template of the current block.
  • the template of the current block may be a specific region including neighboring samples of the current block. Since the left neighboring samples, the upper left neighboring sample, and the upper neighboring samples of the current block may already have been decoded at the decoding time of the current block, and thus can be used for the motion information search process in the decoding apparatus, the left neighboring samples, the upper left neighboring sample, and the upper neighboring samples may be included in the template of the current block.
  • the template may be a specific region including left neighboring samples, upper left neighboring samples, and upper neighboring samples of the current block.
  • the template may be a specific region including the left neighboring samples and the upper neighboring samples of the current block.
  • the template may be a specific region including the upper neighbor samples of the current block.
  • the template may be a specific region including the left neighboring samples of the current block.
  • a template having a minimum difference from the template of the current block among the templates of the blocks in the reference picture may be derived, and a motion vector indicating the reference block of the derived template may be derived.
  • the difference may be called a cost.
  • the cost may be derived as the sum of the absolute values of the differences between the templates of the current block and the corresponding samples of the template of the reference block.
  • a cost function for deriving the motion vector of the current block can be expressed by the following equation. That is, the cost can be derived based on the following equation:
  • Cost_distortion = Σ_(i,j) |Temp_ref(i, j) − Temp_cur(i, j)|
  • here, (i, j) represents the position of a sample in the template, Cost_distortion represents the cost, Temp_ref represents the reconstructed sample at coordinate (i, j) in the template of the reference block in the reference picture, and Temp_cur represents the reconstructed sample at coordinate (i, j) in the template of the current block.
  • the differences between corresponding samples of the template of the reference block and the template of the current block can be accumulated, and a motion vector indicating the reference block having the smallest accumulated difference among the blocks in the reference picture can be derived.
  • the accumulation of the difference may represent the cost.
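  • The template-matching cost above can be sketched as follows: the sum of absolute differences (SAD) between corresponding samples of the current block's template and a candidate reference block's template. This is an illustrative sketch; templates are given as flat lists of reconstructed sample values, and the function names are assumptions.

```python
# Hedged sketch of the SAD cost between two templates and of picking
# the minimum-cost motion vector among candidate reference blocks.

def template_cost(temp_cur, temp_ref):
    """Cost_distortion = sum over (i, j) of |Temp_ref - Temp_cur|."""
    return sum(abs(r - c) for r, c in zip(temp_ref, temp_cur))

def best_mv_by_template(temp_cur, candidates):
    """candidates: list of (mv, temp_ref); return the minimum-cost MV."""
    return min(candidates, key=lambda c: template_cost(temp_cur, c[1]))[0]
```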
  • the motion information candidate list of the current block may be constructed as described above, and a motion information candidate list may also be constructed by the receiving end side motion information derivation method.
  • that is, two motion information candidate lists can be obtained.
  • the motion information candidate list of the current block may be configured based on the neighboring blocks of the current block
  • the motion information candidate list of the current block may be configured based on the template of the current block.
  • the motion information candidate list based on the neighboring blocks of the current block may be referred to as a first motion information candidate list, and the motion information candidate list based on the template of the current block may be referred to as a second motion information candidate list.
  • a template of the current block can be set, a template having a small cost with respect to the template of the current block can be derived among the templates of blocks in the reference picture of the current block, and a motion vector indicating the reference block of the derived template may be derived as a candidate of the second motion information candidate list.
  • the motion vector derived through the receiving end side motion information derivation method, that is, the candidate of the second motion information candidate list, may be inserted as a candidate of the first motion information candidate list.
  • the method of updating the motion information candidate list based on the template of the current block may be described as follows.
  • FIG. 5 shows an example of updating a motion information candidate list based on the template of the current block.
  • the decoding apparatus may construct the motion information candidate list based on neighboring blocks of the current block (S500).
  • the decoding apparatus may derive modified motion information based on the template of the current block (S510). That is, the decoding apparatus can derive the modified motion information using the receiving end side motion information derivation method.
  • the decoding apparatus can set the template of the current block as described above and derive a template having a small cost from the template of the current block among the templates of the blocks in the reference picture of the current block.
  • the decoding apparatus may derive the modified motion information indicating a reference block of the derived template.
  • the decoding apparatus may determine whether any candidate of the motion information candidate list is identical to the modified motion information (S520). If no candidate of the motion information candidate list is identical to the modified motion information, the decoding apparatus may insert the modified motion information as a candidate of the motion information candidate list (S530).
  • the candidate representing the modified motion information may be referred to as a modified candidate. In this case, the decoding apparatus can perform inter prediction of the current block based on the motion information candidate list into which the modified candidate has been inserted. On the other hand, when a candidate of the motion information candidate list is identical to the modified motion information, the decoding apparatus can perform inter prediction of the current block based on the motion information candidate list without the modified candidate inserted.
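  • The update of FIG. 5 can be sketched as follows: the modified candidate derived by the receiving end side (template-matching) method is appended to the list only when it duplicates no existing candidate. The function name is an illustrative assumption.

```python
# Hedged sketch of the FIG. 5 update: append the template-derived
# modified candidate unless an identical candidate already exists.

def update_with_modified(cand_list, modified):
    if modified not in cand_list:
        return cand_list + [modified]   # insert the modified candidate
    return cand_list                    # unchanged: duplicate detected
```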
  • alternatively, a method of determining the order of the candidates by applying the receiving end side motion information derivation method to the candidates of the constructed motion information candidate list may be used.
  • the cost function can be calculated by applying the receiving end side motion information derivation method to the template of the current block and the templates of the reference blocks indicated by the candidates, and the motion information candidate list can be rearranged in ascending order of the cost function value.
  • that is, the reference blocks indicated by the candidates of the motion information candidate list can be derived, the costs between the templates of the reference blocks and the template of the current block can be calculated, and the candidates of the motion information candidate list can be rearranged based on the costs. Since the number of candidates in the motion information candidate list remains two, there is no change in the amount of information to be encoded.
  • the number of bits of the index indicating the candidates of the motion information candidate list may be variable, and the first candidate among the candidates of the motion information candidate list may be coded with the smallest number of bits.
  • the method of updating the motion information candidate list based on the template of the current block may be described as follows.
  • FIG. 6 shows an example of updating a motion information candidate list based on the template of the current block.
  • the decoding apparatus may construct the motion information candidate list based on neighboring blocks of the current block (S600).
  • the decoding apparatus may calculate a cost function value for the candidates of the motion information candidate list based on the template of the current block (S610). That is, the decoding apparatus can calculate the cost function value of the candidates of the motion information candidate list based on the receiving end side motion information derivation method.
  • the decoding apparatus can set the template of the current block as described above and the templates of the reference blocks indicated by the candidates of the motion information candidate list can be derived.
  • the templates of the reference blocks may have the same shape as the template of the current block.
  • the cost of the template of the current block and the template of each reference block can be calculated.
  • the cost may represent the cost function value.
  • the decoding apparatus may determine whether the order of the candidates in the motion information candidate list is the same as the ascending order of the cost function values of the candidates (S620). If the order of the candidates of the motion information candidate list is not the same as the ascending order of the cost function values of the candidates, the decoding apparatus may rearrange the candidates of the motion information candidate list in ascending order of the cost function value (S630). In this case, the decoding apparatus can perform inter prediction of the current block based on the reordered motion information candidate list. Meanwhile, when the order of the candidates of the motion information candidate list is the same as the ascending order of the cost function values, the decoding apparatus can perform the inter prediction of the current block based on the motion information candidate list that has not been rearranged.
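  • The reordering of FIG. 6 can be sketched as follows: each candidate is costed by template matching and the list is rearranged in ascending cost order, so the candidate most similar to the current block's template receives the cheapest index. The names and the per-candidate cost callback are illustrative assumptions.

```python
# Hedged sketch of FIG. 6: reorder candidates by ascending template cost,
# leaving the list untouched when the order already matches.

def reorder_by_cost(cand_list, cost_of):
    """cost_of maps a candidate MV to its template-matching cost."""
    costs = [cost_of(c) for c in cand_list]
    if costs == sorted(costs):
        return cand_list                       # order already ascending
    return [c for _, c in sorted(zip(costs, cand_list))]
```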
  • alternatively, a motion information candidate list including more than two candidates may be constructed, and a method of determining the order of the candidates by applying the receiving end side motion information derivation method to the motion information candidate list may be used.
  • a motion information candidate list including N (N > 2) candidates can be derived based on neighboring blocks of the current block, the cost function can be calculated by applying the receiving end side motion information derivation method to the templates of the reference blocks indicated by the candidates and the template of the current block, and the motion information candidate list can be rearranged in ascending order of cost.
  • that is, the reference blocks indicated by the candidates of the motion information candidate list can be derived, the costs between the templates of the reference blocks and the template of the current block can be calculated, and the candidates of the motion information candidate list can be rearranged based on the costs.
  • next, two of the candidates of the reordered motion information candidate list may be selected to form a modified motion information candidate list.
  • the first candidate and the second candidate among the candidates of the reordered motion information candidate list may be selected, and the modified motion information candidate list may be configured based on the first candidate and the second candidate.
  • the prediction accuracy can be improved as compared with the case of performing the inter prediction using the existing motion information candidate list.
  • the method of updating the motion information candidate list based on the template of the current block may be described as follows.
  • FIG. 7 shows an example of updating a motion information candidate list based on the template of the current block.
  • the decoding apparatus may construct the motion information candidate list including N candidates based on neighboring blocks of the current block (S700). Where N may be a number greater than two (N > 2).
  • the decoding apparatus may calculate a cost function value of the candidates of the motion information candidate list based on the template of the current block (S710).
  • the decoding apparatus can set the template of the current block as described above and the templates of the reference blocks indicated by the candidates of the motion information candidate list can be derived.
  • the templates of the reference blocks may have the same shape as the template of the current block.
  • the cost of the template of the current block and the template of each reference block can be calculated.
  • the cost may represent the cost function value.
  • the decoding apparatus may rearrange the candidates of the motion information candidate list in ascending order of the cost function values and construct a modified motion information candidate list based on the candidates in the preceding order (S720). If the order of the candidates of the motion information candidate list is not the same as the ascending order of the cost function values of the candidates, the decoding apparatus can rearrange the candidates of the motion information candidate list in ascending order of the cost function value. Next, the decoding apparatus can construct a modified motion information candidate list based on the candidates in the preceding order of the reordered motion information candidate list. For example, the modified motion information candidate list may be configured based on the first candidate and the second candidate among the candidates of the reordered motion information candidate list. In this case, the decoding apparatus can perform inter prediction of the current block based on the modified motion information candidate list.
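  • The FIG. 7 variant can be sketched as follows: build N (> 2) candidates, sort them by template cost, and keep only the first two as the modified motion information candidate list. The function name and cost callback are illustrative assumptions.

```python
# Hedged sketch of FIG. 7: rank N candidates by ascending template cost
# and truncate to the first `keep` entries (two, per the description).

def modified_list_from_n(cand_list, cost_of, keep=2):
    ranked = sorted(cand_list, key=cost_of)  # cheapest template cost first
    return ranked[:keep]
```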
  • a method of rearranging the candidates of the motion information candidate list considering the direction of the motion vector indicated by each candidate of the motion information candidate list may also be proposed. In predicting the motion vector of the current block, minimizing the difference between the motion vector derived through motion estimation and the predicted motion vector, i.e., a candidate included in the motion information candidate list, improves the coding efficiency. Therefore, the motion vector difference (MVD) of the current block can be further reduced if the candidates of the motion information candidate list indicate different directions.
  • the direction of the motion vector of each candidate included in the motion information candidate list constructed to derive the motion vector of the current block may be considered separately for the x component and the y component of the motion vector, and the cases in which the signs differ can be defined as different directions.
  • when the sign of the x component of the motion vector represented by the first candidate included in the motion information candidate list is different from the sign of the x component of the motion vector represented by the second candidate, the x component directions of the first candidate and the second candidate can be said to be different. When the sign of the y component of the motion vector represented by the first candidate is different from the sign of the y component of the motion vector represented by the second candidate, the y component directions of the first candidate and the second candidate can be said to be different.
  • the first candidate may represent a first candidate of the motion information candidate list
  • the second candidate may represent a candidate other than the first candidate among the candidates of the motion information candidate list.
  • when both the x component direction and the y component direction of the first candidate and the second candidate are different, the highest priority can be set; when only the x component direction or only the y component direction of the first candidate and the second candidate is different, the medium priority can be set; and when the x component direction and the y component direction of the first candidate and the second candidate are all the same, the lowest priority can be set.
  • the pseudo code for setting the priority order considering the directions of the motion vector represented by the first candidate and the motion vector represented by the second candidate is as follows.
  • mv0.x may represent the x component of the motion vector represented by the first candidate
  • mv0.y may represent the y component of the motion vector represented by the first candidate
  • mv1.x may represent the x component of the motion vector represented by the second candidate
  • mv1.y may represent the y component of the motion vector represented by the second candidate.
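  • A hedged Python sketch of such a priority rule, reusing the mv0/mv1 naming above, follows. Two signs differ exactly when the product of the components is negative. The numeric encoding (2 = both directions differ, highest; 1 = exactly one differs, medium; 0 = neither differs, lowest) is an assumption, not the patent's exact pseudo code.

```python
# Hedged sketch: priority from the directions of two candidate MVs.
# mv0 / mv1 are (x, y) tuples as in the description above.

def direction_priority(mv0, mv1):
    x_differs = mv0[0] * mv1[0] < 0   # opposite x signs
    y_differs = mv0[1] * mv1[1] < 0   # opposite y signs
    return int(x_differs) + int(y_differs)  # 2 high, 1 medium, 0 low
```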
  • the motion information candidate list may be rearranged based on the priority.
  • a method of updating the motion information candidate list based on the priority of the first candidate may be described as follows.
  • FIG. 8 shows an example of updating the motion information candidate list based on the priorities derived according to the directions of the motion vectors of the candidates.
  • the decoding apparatus may construct the motion information candidate list including N candidates based on neighboring blocks of the current block (S800). Where N may be a number greater than two (N > 2).
  • the decoding apparatus may set the second candidate as the n-th candidate in order to derive priorities based on the first candidate of the motion information candidate list (S810).
  • the decoding apparatus may calculate the priority of the n-th candidate based on the motion vector of the first candidate and the direction of the motion vector of the n-th candidate (S820).
  • the decoding apparatus may calculate the priority based on the x component direction and the y component direction of the first candidate and the n-th candidate. For example, if both the x component direction and the y component direction are different, the highest priority can be set; if only the x component direction or only the y component direction is different, the medium priority can be set; and if the x component direction and the y component direction are all the same, the lowest priority can be set.
  • for example, when the product of the x component of the motion vector of the first candidate and the x component of the motion vector of the second candidate is smaller than 0, the x component directions of the first candidate and the second candidate may be different; when the product of the y component of the motion vector of the first candidate and the y component of the motion vector of the second candidate is smaller than 0, the y component directions of the first candidate and the second candidate may be different.
  • the decoding apparatus may determine whether the n-th candidate is the N-th candidate (S830). If the n-th candidate is not the N-th candidate, the decoding apparatus may calculate the priority of the (n + 1)-th candidate based on the direction of the motion vector of the first candidate and the motion vector of the (n + 1)-th candidate. If the n-th candidate is the N-th candidate, the decoding apparatus can sort the order of the candidates of the motion information candidate list based on the priorities of the candidates (S850). The decoding apparatus can sort the candidates of the motion information candidate list in descending order of priority.
  • the decoding apparatus may construct a modified motion information candidate list based on the candidates in the preceding order among the aligned motion information candidate lists.
  • the modified motion information candidate list may be configured based on the first candidate and the second candidate among the candidates of the aligned motion information candidate list.
  • the decoding apparatus can perform inter-prediction of the current block based on the modified motion information candidate list.
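  • The FIG. 8 flow can be sketched as follows: the first candidate is kept as the anchor, the remaining candidates are ranked by how different their direction is from it (higher priority first), and the modified list is the anchor plus the best-ranked remaining candidates. Function names and the numeric priority encoding are illustrative assumptions.

```python
# Hedged sketch of FIG. 8: rank candidates 2..N by direction priority
# relative to the first candidate, then keep the top `keep` entries.

def direction_priority(mv0, mv1):
    return int(mv0[0] * mv1[0] < 0) + int(mv0[1] * mv1[1] < 0)

def reorder_by_direction(cand_list, keep=2):
    anchor, rest = cand_list[0], cand_list[1:]
    ranked = sorted(rest, key=lambda mv: direction_priority(anchor, mv),
                    reverse=True)           # descending priority; stable
    return ([anchor] + ranked)[:keep]
```

Because Python's sort is stable, candidates with equal priority keep their original list order.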
  • alternatively, both the above-described receiving end side motion information derivation method and the method of considering priorities derived based on the first candidate of the motion information candidate list can be applied together.
  • FIG. 9 shows an example of updating the motion information candidate list based on the template of the current block and the priorities derived according to the directions of the motion vectors of the candidates of the motion information candidate list.
  • the decoding apparatus may construct the motion information candidate list including N candidates based on neighboring blocks of the current block (S900). Where N may be a number greater than two (N > 2).
  • the decoding apparatus may derive modified motion information based on the template of the current block (S910).
  • the decoding apparatus can set the template of the current block as described above and derive a template having a small cost from the template of the current block among the templates of the blocks in the reference picture of the current block.
  • the decoding apparatus may derive the modified motion information indicating a reference block of the derived template.
  • the decoding apparatus may determine whether any candidate of the motion information candidate list is identical to the modified motion information (S920). If no candidate of the motion information candidate list is identical to the modified motion information, the decoding apparatus may insert the modified motion information as a candidate of the motion information candidate list (S930). On the other hand, when a candidate of the motion information candidate list is identical to the modified motion information, the modified motion information may not be inserted as a candidate of the motion information candidate list.
  • the decoding apparatus may set the second candidate as the n-th candidate in order to derive priorities based on the first candidate of the motion information candidate list (S940).
  • the decoding apparatus may calculate the priority of the n-th candidate based on the motion vector of the first candidate and the direction of the motion vector of the n-th candidate (S950).
  • the decoding apparatus may calculate the priority based on the x-component direction and the y-component direction of the first candidate and the second candidate.
  • when both the x-component direction and the y-component direction of the first candidate and the second candidate are different, the highest priority can be set; when only the x-component direction or only the y-component direction of the first candidate and the second candidate is different, the intermediate priority can be set; and when the x-component direction and the y-component direction of the first candidate and the second candidate are all the same, the lowest priority can be set.
  • the direction of the x component of the first candidate and the second candidate may be different.
  • the direction of the y component of the first candidate and the second candidate may be different.
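The three-level priority described above can be sketched by counting how many motion vector components point in opposite directions; representing a motion vector as an (x, y) tuple is an illustrative assumption, and the treatment of a zero component is not specified in the description.

```python
def direction_priority(first_mv, cand_mv):
    # Count the components whose directions differ from the first candidate:
    # 2 -> highest priority, 1 -> intermediate, 0 -> lowest.
    # A negative product means opposite signs, i.e. different directions;
    # a zero product is not covered above and is treated as "same" here.
    differing = 0
    if first_mv[0] * cand_mv[0] < 0:
        differing += 1
    if first_mv[1] * cand_mv[1] < 0:
        differing += 1
    return differing
```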
  • the decoding apparatus may determine whether the n-th candidate is the N-th candidate (S960). If the n-th candidate is not the N-th candidate, the decoding apparatus may calculate the priority of the (n+1)-th candidate based on the directions of the motion vector of the first candidate and the motion vector of the (n+1)-th candidate. If the n-th candidate is the N-th candidate, the decoding apparatus can sort the candidates of the motion information candidate list based on the priorities of the candidates and construct the modified motion information candidate list (S980). The decoding apparatus can sort the candidates of the motion information candidate list in descending order of priority.
  • the decoding apparatus may construct the modified motion information candidate list based on the candidates that come first in the sorted motion information candidate list.
  • the modified motion information candidate list may be configured based on the first candidate and the second candidate among the candidates of the sorted motion information candidate list.
  • the decoding apparatus can perform inter-prediction of the current block based on the modified motion information candidate list.
  • when the modified motion information candidate list is derived as described above, the modified candidate most similar to the motion vector of the current block among the candidates of the motion information candidate list can be set as the first candidate of the modified motion information candidate list, and a candidate whose motion vector points in the direction opposite to that of the first candidate can be set as the second candidate of the modified motion information candidate list. Thus, the diversity of the motion vectors represented by the candidates of the modified motion information candidate list can be ensured, and the coding efficiency of inter prediction can be improved.
  • the decoding apparatus can calculate the cost function values of the candidates of the motion information candidate list based on the template of the current block, and rearrange the candidates of the motion information candidate list in ascending order of the cost function value.
  • the decoding apparatus can then sort the motion information candidate list based on the priorities of the candidates derived from the first candidate of the motion information candidate list, and construct the modified motion information candidate list based on the candidates that come first in the sorted order.
  • in this case, the candidate with the smallest cost function value derived based on the template of the current block may be set as the first candidate of the motion information candidate list, and since the motion information candidate list can be reconstructed based on the priorities, the diversity of the motion vectors represented by the candidates of the modified motion information candidate list can be ensured, and the coding efficiency of inter prediction can be improved.
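The combined procedure above (cost-based ordering followed by direction-based reordering, keeping the leading candidates) might be sketched as follows. The cost function is passed in as a parameter because Equation (1) is not reproduced in this excerpt, and all names are illustrative assumptions.

```python
def refine_candidate_list(candidates, cost_of, keep=2):
    # Stage 1: sort candidates by ascending template cost so the best
    # template match becomes the first candidate.
    by_cost = sorted(candidates, key=cost_of)
    first, rest = by_cost[0], by_cost[1:]

    # Stage 2: reorder the remaining candidates by descending direction
    # priority relative to the first candidate (a negative component
    # product means that component points the opposite way).
    def priority(mv):
        return sum(1 for a, b in zip(first, mv) if a * b < 0)

    rest.sort(key=priority, reverse=True)
    return ([first] + rest)[:keep]
```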
  • FIG. 10 schematically shows a video encoding method by an encoding apparatus according to the present invention.
  • the method disclosed in Fig. 10 can be performed by the encoding apparatus disclosed in Fig. Specifically, for example, S1000 to S1020 in FIG. 10 may be performed by the predicting unit of the encoding apparatus, and S1030 may be performed by the entropy encoding unit of the encoding apparatus.
  • the encoding apparatus generates motion information of the current block (S1000).
  • the encoding apparatus may apply inter prediction to the current block.
  • when inter prediction is applied to the current block, the encoding apparatus can generate motion information on the current block by applying any one of a skip mode, a merge mode, and an adaptive motion vector prediction (AMVP) mode.
  • in the skip mode and the merge mode, the encoding apparatus can generate motion information of the current block based on motion information of a neighboring block of the current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may be bi-prediction motion information, or may be uni-prediction motion information.
  • the bi-prediction motion information may include an L0 reference picture index, an L0 motion vector, an L1 reference picture index, and an L1 motion vector.
  • the uni-prediction motion information may include an L0 reference picture index and an L0 motion vector, or an L1 reference picture index and an L1 motion vector.
  • L0 denotes a reference picture list L0 (List 0), and L1 denotes a reference picture list L1 (List 1).
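For illustration only, the motion information described above could be held in a structure like the following; the class and field names are assumptions for this sketch, not terms from the description.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class MotionInfo:
    # Bi-prediction motion information fills both the L0 and L1 fields;
    # uni-prediction motion information fills only one of the two pairs.
    l0_ref_idx: Optional[int] = None
    l0_mv: Optional[Tuple[int, int]] = None
    l1_ref_idx: Optional[int] = None
    l1_mv: Optional[Tuple[int, int]] = None

    @property
    def is_bi_prediction(self) -> bool:
        return self.l0_mv is not None and self.l1_mv is not None
```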
  • the encoding apparatus may derive a motion vector of the current block using a motion vector of a neighboring block of the current block as a motion vector predictor (MVP) of the current block. Accordingly, the encoding apparatus can generate the motion information including the MVP flag indicating one of the candidates included in the motion information candidate list and the reference picture index for the motion vector.
  • the encoding apparatus generates a motion information candidate list based on neighboring blocks of the current block (S1010).
  • the encoding apparatus may generate a motion information candidate list based on neighboring blocks of the current block.
  • the motion information candidate list may indicate a merge candidate list.
  • the motion information candidate list may indicate a motion vector predictor (MVP) candidate list.
  • the encoding apparatus can generate a motion information candidate list including candidates derived from neighboring blocks of the current block; when the AMVP mode is applied to the current block, the motion information candidate list may be generated based on motion vectors of neighboring blocks of the current block.
  • the motion vectors may be derived as motion vector predictor (MVP) candidates included in the motion information candidate list.
  • the encoding apparatus can generate the motion information candidate list based on motion vectors of neighboring blocks of the current block, and the motion information candidate list may include two candidates.
  • the motion information candidate list may include N candidates.
  • the motion information candidate list may include modified candidates of the current block.
  • the encoding apparatus may derive the modified candidate based on a template of the current block.
  • the template of the current block may be represented as a specific region including surrounding samples of the current block.
  • the template may be a specific region including left neighboring samples, upper left neighboring samples, and upper neighboring samples of the current block.
  • the template may be a specific region including the left neighboring samples of the current block and the upper neighboring samples.
  • the template may be a specific region including the upper neighbor samples of the current block.
  • the template may be a specific region including the left neighboring samples of the current block.
  • the template of the current block can be derived based on the neighboring samples of the current block, and a specific reference block whose template has the minimum cost with respect to the template of the current block can be derived from among the reference blocks in the reference picture of the current block.
  • the modified candidate may be derived based on the motion information indicating the specific reference block. That is, the modified candidate may represent a motion vector (or motion information) indicating the specific reference block.
  • the cost may be determined based on Equation (1).
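Equation (1) itself is not reproduced in this excerpt; a sum of absolute differences (SAD) between co-located template samples, a common choice for template matching cost, is assumed in this illustrative sketch.

```python
def template_cost(cur_template, ref_template):
    # Illustrative template-matching cost: sum of absolute differences
    # between co-located samples of the current block's template and a
    # reference template (Equation (1) is assumed to be SAD here; the
    # actual equation is not shown in this excerpt).
    return sum(abs(c - r) for c, r in zip(cur_template, ref_template))
```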
  • the encoding apparatus updates the candidates of the motion information candidate list to generate a modified motion information candidate list (S1020).
  • the encoding apparatus can update the motion information candidate list to generate the modified motion information candidate list.
  • the encoding apparatus may generate the modified motion information candidate list based on the template of the current block.
  • the template of the current block can be derived based on the surrounding samples of the current block, and the templates of the reference blocks indicated by the candidates of the motion information candidate list can be derived.
  • the template of the current block may be represented as a specific region including surrounding samples of the current block as described above.
  • the templates of the reference blocks may represent specific regions corresponding to the template of the current block. That is, each of the templates of the reference blocks may be derived based on the neighboring samples of the corresponding reference block, in correspondence with the neighboring samples of the current block.
  • the templates of the reference blocks may have the same shape as the template of the current block.
  • the costs of the candidates may be derived based on the templates of the reference blocks and the template of the current block.
  • the cost may be determined based on Equation (1).
  • the order of the candidates of the motion information candidate list may be updated based on the costs of the candidates. That is, the order of the candidates of the motion information candidate list is updated in ascending order of cost, and the modified motion information candidate list can be generated.
  • the modified motion information candidate list can be generated by arranging the candidates from the candidate having the smallest cost to the candidate having the largest cost.
  • the candidates may be sorted from the candidate having the smallest cost to the candidate having the largest cost, and the modified motion information candidate list can be generated based on the first candidate and the second candidate among the sorted candidates. That is, the modified motion information candidate list may be generated based on the first candidate and the second candidate in the updated order.
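The cost-based update above amounts to sorting the candidates by ascending template cost and keeping the first two entries; the sketch below is illustrative, with assumed names, not the apparatus's actual implementation.

```python
def sort_by_template_cost(candidates, costs):
    # Arrange candidates from the smallest to the largest template cost
    # and keep the first two as the modified candidate list.
    order = sorted(range(len(candidates)), key=lambda i: costs[i])
    return [candidates[i] for i in order[:2]]
```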
  • the encoding apparatus may generate the modified motion information candidate list based on the directions of the motion vectors indicated by the candidates of the motion information candidate list. Specifically, the priority of each of the remaining candidates may be derived based on the motion vector represented by the first candidate of the motion information candidate list and the motion vectors represented by the remaining candidates, and the motion information candidate list is updated based on the priorities of the remaining candidates, whereby the modified motion information candidate list is generated.
  • the remaining candidates may represent candidates other than the first candidate among the candidates of the motion information candidate list.
  • the priority of each of the remaining candidates can be derived as follows. If the x-component direction and the y-component direction of the motion vector represented by the first candidate and the motion vector represented by each candidate are both different, the priority of that candidate can be derived as the highest priority. If only one of the x-component direction and the y-component direction differs between the motion vector represented by the first candidate and the motion vector represented by each candidate, the priority of that candidate can be derived as the intermediate priority. If the x-component direction and the y-component direction of the motion vector represented by the first candidate and the motion vector represented by each candidate are both the same, the priority of that candidate can be derived as the lowest priority.
  • when the value obtained by multiplying the x component of the motion vector represented by the first candidate by the x component of the motion vector represented by each candidate is less than 0, the x-component directions of the two motion vectors may be regarded as different; when that value is greater than 0, the x-component directions may be regarded as the same.
  • when the value obtained by multiplying the y component of the motion vector represented by the first candidate by the y component of the motion vector represented by each candidate is less than 0, the y-component directions of the two motion vectors may be regarded as different; when that value is greater than 0, the y-component directions may be regarded as the same.
  • when the value obtained by multiplying the x component of the motion vector represented by the first candidate by the x component of the motion vector represented by each candidate is smaller than 0 and the value obtained by multiplying the corresponding y components is also smaller than 0, the priority of that candidate can be derived as the highest priority.
  • a value obtained by multiplying the x component of the motion vector represented by the first candidate by the x component of the motion vector represented by the candidate is smaller than 0 and the y component of the motion vector represented by the first candidate and the motion vector represented by each candidate If the value obtained by multiplying the y component is greater than 0, the priority of each candidate can be derived as a medium priority.
  • the value of the x component of the motion vector represented by the first candidate multiplied by the x component of the motion vector represented by the candidate is greater than 0 and the y component of the motion vector represented by the first candidate and the motion vector represented by each candidate If the value obtained by multiplying the y component is smaller than 0, the priority of each candidate can be derived as a medium priority.
  • the value of the x component of the motion vector represented by the first candidate multiplied by the x component of the motion vector represented by the candidate is greater than 0 and the y component of the motion vector represented by the first candidate and the motion vector represented by each candidate If the value obtained by multiplying the y component is greater than 0, the priority of each candidate can be derived with the lowest priority.
  • the order of the candidates of the motion information candidate list may be updated based on the priority of each candidate of the remaining candidates. That is, the order of the remaining candidates may be updated in the order of the highest-priority candidate to the lowest-priority candidate, and the modified motion information candidate list may be generated. That is, the order of the remaining candidates may be updated in descending order of priority, and the modified motion information candidate list may be generated. In other words, the modified motion information candidate list may be generated in the order of the highest priority candidate to the lowest priority candidate next to the first candidate.
  • the candidates may be sorted from the highest-priority candidate to the lowest-priority candidate, and the modified motion information candidate list can be generated based on the first candidate and the second candidate among the sorted candidates. That is, the modified motion information candidate list may be generated based on the first candidate and the second candidate in the updated order.
  • the encoding apparatus may generate the modified motion information candidate list based on the template of the current block and the directions of the motion vectors indicated by the candidates of the motion information candidate list.
  • the template of the current block can be derived based on the neighboring samples of the current block
  • the templates of the reference blocks indicated by the candidates of the motion information candidate list can be derived
  • the costs of the candidates can be derived based on the template of the current block.
  • the cost may be determined based on Equation (1).
  • the order of the candidates of the motion information candidate list is updated based on the costs of the candidates so that the first motion information candidate list can be generated. That is, the order of the candidates of the motion information candidate list is updated in ascending order of cost so that the first motion information candidate list can be generated.
  • the first motion information candidate list may be generated by arranging the candidates from the candidate having the smallest cost to the candidate having the largest cost.
  • the priority of each of the remaining candidates can be derived based on the directions of the motion vectors indicated by the first candidate and the remaining candidates of the first motion information candidate list, and the first motion information candidate list is updated based on the priorities of the remaining candidates so that the modified motion information candidate list can be generated.
  • the remaining candidates may represent candidates other than the first candidate among the candidates of the first motion information candidate list.
  • the priority of each of the remaining candidates can be derived as follows. If the x-component direction and the y-component direction of the motion vector represented by the first candidate and the motion vector represented by each candidate are both different, the priority of that candidate can be derived as the highest priority. If only one of the x-component direction and the y-component direction differs between the motion vector represented by the first candidate and the motion vector represented by each candidate, the priority of that candidate can be derived as the intermediate priority. If the x-component direction and the y-component direction of the motion vector represented by the first candidate and the motion vector represented by each candidate are both the same, the priority of that candidate can be derived as the lowest priority.
  • when the value obtained by multiplying the x component of the motion vector represented by the first candidate by the x component of the motion vector represented by each candidate is less than 0, the x-component directions of the two motion vectors may be regarded as different; when that value is greater than 0, the x-component directions may be regarded as the same.
  • when the value obtained by multiplying the y component of the motion vector represented by the first candidate by the y component of the motion vector represented by each candidate is less than 0, the y-component directions of the two motion vectors may be regarded as different; when that value is greater than 0, the y-component directions may be regarded as the same.
  • when the value obtained by multiplying the x component of the motion vector represented by the first candidate by the x component of the motion vector represented by each candidate is smaller than 0 and the value obtained by multiplying the corresponding y components is also smaller than 0, the priority of that candidate can be derived as the highest priority.
  • a value obtained by multiplying the x component of the motion vector represented by the first candidate by the x component of the motion vector represented by the candidate is smaller than 0 and the y component of the motion vector represented by the first candidate and the motion vector represented by each candidate If the value obtained by multiplying the y component is greater than 0, the priority of each candidate can be derived as a medium priority.
  • the value of the x component of the motion vector represented by the first candidate multiplied by the x component of the motion vector represented by the candidate is greater than 0 and the y component of the motion vector represented by the first candidate and the motion vector represented by each candidate If the value obtained by multiplying the y component is smaller than 0, the priority of each candidate can be derived as a medium priority.
  • the value of the x component of the motion vector represented by the first candidate multiplied by the x component of the motion vector represented by the candidate is greater than 0 and the y component of the motion vector represented by the first candidate and the motion vector represented by each candidate If the value obtained by multiplying the y component is greater than 0, the priority of each candidate can be derived with the lowest priority.
  • the order of the candidates of the first motion information candidate list may be updated based on the priority of each candidate of the remaining candidates. That is, the order of the remaining candidates may be updated in the order of the highest-priority candidate to the lowest-priority candidate, and the modified motion information candidate list may be generated. That is, the order of the remaining candidates may be updated in descending order of priority, and the modified motion information candidate list may be generated. In other words, the modified motion information candidate list may be generated in the order of the highest priority candidate to the lowest priority candidate next to the first candidate.
  • the candidates of the first motion information candidate list may be sorted so that, after the first candidate, the remaining candidates follow in order from the highest priority to the lowest priority, and the modified motion information candidate list can be generated based on the first candidate and the second candidate among the sorted candidates. That is, the modified motion information candidate list may be generated based on the first candidate and the second candidate in the updated order.
  • the encoding apparatus encodes information on the inter prediction of the current block and outputs the encoded information (S1030).
  • the encoding apparatus can generate a merge index indicating a merge candidate selected to derive the motion information of the current block.
  • the encoding apparatus can encode and output the merge index.
  • the merge index may be included in the information on the inter prediction.
  • the information on the inter prediction may include information on motion information of the current block. More specifically, the information on the inter prediction may include the L0 reference picture index and the L1 reference picture index of the current block, as well as a motion vector predictor L0 (MVPL0) and a motion vector predictor L1 (MVPL1). In addition, the information on the inter prediction may include a motion vector difference L0 (MVDL0) and a motion vector difference L1 (MVDL1).
  • the information on the inter prediction may include information on motion information of the current block. More specifically, the information on the inter prediction may include an MVP flag indicating one of the candidates included in the motion information candidate list (or the modified motion information candidate list), and a motion vector difference (MVD) between the motion vector of the current block and the motion vector of the candidate (MVP candidate) pointed to by the MVP flag.
  • when the motion information is generated, the current block may be predicted based on the motion information, and the motion information may be stored.
  • the motion information may be used as motion information of a neighboring block, or of a block included in another picture, to be decoded after the current block.
  • when the merge mode is applied to the neighboring blocks, the motion information of the current block may be included in the merge candidate list of the neighboring blocks as a merge candidate.
  • when the AMVP mode is applied to the neighboring blocks, the motion information may be included in the MVP candidate list of the neighboring blocks as an MVP candidate.
  • the encoding apparatus can generate a prediction sample based on the motion information.
  • the encoding apparatus may generate a residual sample based on the original sample and the generated prediction sample.
  • the encoding apparatus may generate information on the residual based on the residual samples.
  • the information on the residual may include transform coefficients relating to the residual sample.
  • the encoding apparatus may derive the reconstructed sample based on the prediction sample and the residual sample. That is, the encoding apparatus may add the prediction sample and the residual sample to derive the reconstructed sample.
  • the encoding apparatus can encode the information on the residual and output it in the form of a bit stream.
  • the bitstream may be transmitted to a decoding device via a network or a storage medium.
  • FIG. 11 schematically shows a video decoding method by a decoding apparatus according to the present invention.
  • the method disclosed in Fig. 11 can be performed by the decoding apparatus disclosed in Fig. Specifically, for example, S1100 of FIG. 11 may be performed by the entropy decoding unit of the decoding apparatus, and S1110 to S1140 may be performed by the predicting unit of the decoding apparatus.
  • the decoding apparatus obtains information on inter prediction of a current block through a bit stream (S1100).
  • the current block may be inter-predicted or intra-predicted.
  • the decoding apparatus can acquire information on the inter prediction of the current block through the bit stream.
  • the decoding apparatus can generate the merge candidate list based on the neighboring blocks of the current block, and obtain the merge index through the bit stream.
  • the merge index may indicate a merge candidate included in the merge candidate list, and the information on the inter prediction may include the merge index.
  • the information on the inter prediction may include information on motion information of the current block. More specifically, the information on the inter prediction may include an MVP flag indicating one of the candidates included in the motion information candidate list (or the modified motion information candidate list), and a motion vector difference (MVD) between the motion vector of the current block and the motion vector of the candidate (MVP candidate) pointed to by the MVP flag.
  • the decoding apparatus may derive the motion information of the current block based on the MVP candidate indicated by the candidate indicated by the MVP flag and the MVD.
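In the AMVP case described above, the decoder's motion vector derivation reduces to adding the signalled difference to the selected predictor; the function below is a minimal sketch with an assumed tuple representation of motion vectors.

```python
def reconstruct_motion_vector(mvp, mvd):
    # The decoder rebuilds the motion vector of the current block by adding
    # the motion vector difference (MVD) to the motion vector of the MVP
    # candidate indicated by the MVP flag.
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```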
  • the information on the inter prediction may include the L0 reference picture index and/or the L1 reference picture index of the current block, and may include a motion vector difference L0 (MVDL0) and/or a motion vector difference L1 (MVDL1).
  • the decoding apparatus generates a motion information candidate list based on the neighboring blocks of the current block (S1110).
  • the decoding apparatus can generate a motion information candidate list based on the neighboring blocks of the current block.
  • the motion information candidate list may indicate a merge candidate list.
  • the motion information candidate list may indicate a motion vector predictor (MVP) candidate list.
  • the decoding apparatus can generate a motion information candidate list including candidates derived from neighboring blocks of the current block; when the AMVP mode is applied to the current block, the motion information candidate list may be generated based on motion vectors of neighboring blocks of the current block.
  • the motion vectors may be derived as motion vector predictor (MVP) candidates included in the motion information candidate list.
  • the decoding apparatus can generate the motion information candidate list based on motion vectors of neighboring blocks of the current block, and the motion information candidate list may include two candidates.
  • the motion information candidate list may include N candidates.
  • the motion information candidate list may include modified candidates of the current block.
  • the decoding apparatus may derive the modified candidate based on a template of the current block.
  • the template of the current block may be represented as a specific region including surrounding samples of the current block.
  • the template may be a specific region including left neighboring samples, upper left neighboring samples, and upper neighboring samples of the current block.
  • the template may be a specific region including the left neighboring samples of the current block and the upper neighboring samples.
  • the template may be a specific region including the upper neighbor samples of the current block.
  • the template may be a specific region including the left neighboring samples of the current block.
  • the template of the current block can be derived based on the neighboring samples of the current block, and a specific reference block whose template has the minimum cost with respect to the template of the current block can be derived from among the reference blocks in the reference picture of the current block.
  • the modified candidate may be derived based on the motion information indicating the specific reference block. That is, the modified candidate may represent a motion vector (or motion information) indicating the specific reference block.
  • the cost may be determined based on Equation (1).
  • the decoding apparatus updates the candidates of the motion information candidate list to generate a modified motion information candidate list (S1120).
  • the decoding apparatus may update the motion information candidate list to generate the modified motion information candidate list.
  • the decoding apparatus may generate the modified motion information candidate list based on the template of the current block.
  • the template of the current block can be derived based on the surrounding samples of the current block, and the templates of the reference blocks indicated by the candidates of the motion information candidate list can be derived.
  • the template of the current block may be represented as a specific region including surrounding samples of the current block as described above.
  • the templates of the reference blocks may represent specific regions corresponding to the template of the current block. That is, each of the templates of the reference blocks may be derived based on the neighboring samples of the corresponding reference block, in correspondence with the neighboring samples of the current block.
  • the templates of the reference blocks may have the same shape as the template of the current block.
  • the costs of the candidates may be derived based on the templates of the reference blocks and the template of the current block.
  • the cost may be determined based on Equation (1).
  • the order of the candidates of the motion information candidate list may be updated based on the costs of the candidates. That is, the candidates may be reordered in ascending order of cost, and the modified motion information candidate list may thereby be generated.
  • the modified motion information candidate list can be generated by sorting the candidates from the candidate having the smallest cost to the candidate having the largest cost.
  • the candidates may be sorted from the candidate having the smallest cost to the candidate having the largest cost, and the modified motion information candidate list may be generated based on the first candidate and the second candidate of the sorted candidates. That is, the modified motion information candidate list may be generated based on the first candidate and the second candidate in the updated order.
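The template-based reordering described above can be sketched as a simple sort by ascending cost. The function name and the parallel-list representation of candidates and costs are illustrative assumptions, not taken from the patent.

```python
def reorder_by_template_cost(candidates, costs):
    """Return a modified candidate list sorted in ascending template cost.

    `candidates` and `costs` are parallel lists; the candidate whose
    reference-block template best matches the current block's template
    (smallest cost) moves to the front of the modified list.
    """
    order = sorted(range(len(candidates)), key=lambda i: costs[i])
    return [candidates[i] for i in order]

# The modified list may then be truncated, e.g. keeping the first two
# candidates as described in the text:
modified = reorder_by_template_cost(["A", "B", "C"], [30, 10, 20])[:2]
```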
  • the decoding apparatus may generate the modified motion information candidate list based on the directions of the motion vectors indicated by the candidates of the motion information candidate list. Specifically, a priority may be derived for each of the remaining candidates based on the motion vector represented by the first candidate of the motion information candidate list and the motion vector represented by that candidate, and
  • the motion information candidate list may be updated based on these priorities to generate the modified motion information candidate list.
  • here, the remaining candidates represent the candidates of the motion information candidate list other than the first candidate.
  • the priority of each of the remaining candidates can be derived as follows. If the x-component direction and the y-component direction of the motion vector represented by the candidate both differ from those of the motion vector represented by the first candidate, the candidate is given the highest priority. If exactly one of the x-component direction and the y-component direction differs, the candidate is given a medium priority. If the x-component direction and the y-component direction are both the same, the candidate is given the lowest priority.
  • here, the x-component directions of the motion vector represented by the first candidate and the motion vector represented by the candidate are considered different when the product of their x components is less than 0, and the same when the product is greater than 0.
  • likewise, the y-component directions of the two motion vectors are considered different when the product of their y components is less than 0, and the same when the product is greater than 0.
  • when the product of the x components and the product of the y components are both less than 0, the priority of the candidate can be derived as the highest priority.
  • when the product of the x component of the motion vector represented by the first candidate and the x component of the motion vector represented by the candidate is less than 0, and the product of the y components is greater than 0, the priority of the candidate can be derived as a medium priority.
  • likewise, when the product of the x components is greater than 0 and the product of the y components is less than 0, the priority of the candidate can be derived as a medium priority.
  • when the product of the x components is greater than 0 and the product of the y components is also greater than 0, the priority of the candidate can be derived as the lowest priority.
  • the order of the candidates of the motion information candidate list may be updated based on the priority of each of the remaining candidates. That is, following the first candidate, the remaining candidates may be reordered in descending order of priority, from the highest-priority candidate to the lowest-priority candidate, and the modified motion information candidate list may thereby be generated.
  • the candidates may be sorted from the highest-priority candidate to the lowest-priority candidate, and the modified motion information candidate list may be generated based on the first candidate and the second candidate of the sorted candidates. That is, the modified motion information candidate list may be generated based on the first candidate and the second candidate in the updated order.
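The sign-based priority rule above can be sketched as follows. The encoding of priorities as 0/1/2 and the function name are illustrative assumptions; the rule for a zero component is not specified in this excerpt, and the sketch treats a zero product as "same direction".

```python
def direction_priority(first_mv, cand_mv):
    """Priority of a candidate relative to the first candidate's motion vector.

    0 (highest): x and y component directions both differ (both products < 0)
    1 (medium) : exactly one component direction differs
    2 (lowest) : both component directions are the same
    """
    x_diff = first_mv[0] * cand_mv[0] < 0
    y_diff = first_mv[1] * cand_mv[1] < 0
    if x_diff and y_diff:
        return 0
    if x_diff or y_diff:
        return 1
    return 2
```

Sorting the remaining candidates by this key (ascending) yields the descending-priority order described in the text, favoring candidates that point in a different direction from the first candidate.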
  • the decoding apparatus may generate the modified motion information candidate list based on the template of the current block and the directions of the motion vectors indicated by the candidates of the motion information candidate list.
  • the template of the current block can be derived based on the neighboring samples of the current block, and the templates of the reference blocks indicated by the candidates of the motion information candidate list can be derived.
  • the costs of the candidates can be derived based on the template of the current block and the templates of the reference blocks.
  • the cost may be determined based on Equation (1).
  • the order of the candidates of the motion information candidate list is updated based on the costs of the candidates so that a first motion information candidate list can be generated. That is, the candidates are reordered in ascending order of cost so that the first motion information candidate list can be generated.
  • the first motion information candidate list may be generated by sorting the candidates from the candidate having the smallest cost to the candidate having the largest cost.
  • the priority of each of the remaining candidates can be derived based on the directions of the motion vectors indicated by the first candidate and the remaining candidates of the first motion information candidate list, and
  • the first motion information candidate list is updated based on the priorities of the remaining candidates so that the modified motion information candidate list can be generated.
  • here, the remaining candidates represent the candidates of the first motion information candidate list other than the first candidate.
  • the priority of each of the remaining candidates can be derived as follows. If the x-component direction and the y-component direction of the motion vector represented by the candidate both differ from those of the motion vector represented by the first candidate, the candidate is given the highest priority. If exactly one of the x-component direction and the y-component direction differs, the candidate is given a medium priority. If the x-component direction and the y-component direction are both the same, the candidate is given the lowest priority.
  • here, the x-component directions of the motion vector represented by the first candidate and the motion vector represented by the candidate are considered different when the product of their x components is less than 0, and the same when the product is greater than 0.
  • likewise, the y-component directions of the two motion vectors are considered different when the product of their y components is less than 0, and the same when the product is greater than 0.
  • when the product of the x components and the product of the y components are both less than 0, the priority of the candidate can be derived as the highest priority.
  • when the product of the x component of the motion vector represented by the first candidate and the x component of the motion vector represented by the candidate is less than 0, and the product of the y components is greater than 0, the priority of the candidate can be derived as a medium priority.
  • likewise, when the product of the x components is greater than 0 and the product of the y components is less than 0, the priority of the candidate can be derived as a medium priority.
  • when the product of the x components is greater than 0 and the product of the y components is also greater than 0, the priority of the candidate can be derived as the lowest priority.
  • the order of the candidates of the first motion information candidate list may be updated based on the priority of each of the remaining candidates. That is, following the first candidate of the first motion information candidate list, the remaining candidates may be reordered in descending order of priority, from the highest-priority candidate to the lowest-priority candidate, and the modified motion information candidate list may thereby be generated.
  • the candidates of the first motion information candidate list may be sorted, after the first candidate, from the highest-priority candidate to the lowest-priority candidate, and the modified motion information candidate list may be generated based on the first candidate and the second candidate of the sorted candidates. That is, the modified motion information candidate list may be generated based on the first candidate and the second candidate in the updated order.
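The combined two-stage procedure above (template cost first, then direction-based reordering of the remaining candidates) can be sketched as follows. All names are illustrative; `candidates`, `costs`, and `mvs` are assumed to be parallel lists.

```python
def combined_reorder(candidates, costs, mvs):
    """Two-stage reordering: template cost, then motion-vector direction.

    Stage 1 sorts all candidates by ascending template cost, producing the
    'first motion information candidate list'. Stage 2 keeps the first
    candidate fixed and re-sorts the remaining candidates by direction
    priority relative to the first candidate's motion vector.
    """
    order = sorted(range(len(candidates)), key=lambda i: costs[i])
    first, rest = order[0], order[1:]

    def prio(i):
        # 0 = both component directions differ, 1 = one differs, 2 = same
        x_diff = mvs[first][0] * mvs[i][0] < 0
        y_diff = mvs[first][1] * mvs[i][1] < 0
        return 2 - (x_diff + y_diff)

    rest.sort(key=prio)  # Python's sort is stable, so ties keep cost order
    return [candidates[j] for j in [first] + rest]
```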
  • the decoding apparatus derives motion information of the current block based on the inter prediction information and the modified motion information candidate list (S1130).
  • the information on the inter prediction may indicate whether a skip mode, a merge mode, or an adaptive motion vector prediction (AMVP) mode is applied to the current block.
  • when the merge mode or the skip mode is applied to the current block, the decoding apparatus can obtain a merge index indicating one of the candidates included in the motion information candidate list.
  • the merge index may be included in the information on the inter prediction.
  • the decoding apparatus may derive the motion information of the candidate indicated by the merge index, among the candidates of the modified motion information candidate list, as the motion information of the current block.
  • when the AMVP mode is applied to the current block, the decoding apparatus can obtain an MVP flag indicating one of the candidates included in the modified motion information candidate list, a motion vector predictor (MVP) candidate indicated by the MVP flag, and a motion vector difference (MVD).
  • the MVP flag and the MVD may be included in the information on the inter prediction.
  • the decoding apparatus may derive the motion information of the current block based on the MVP candidate indicated by the MVP flag and the MVD.
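In AMVP mode the reconstructed motion vector is the selected predictor plus the signalled difference. A minimal sketch, with illustrative names and motion vectors represented as (x, y) tuples:

```python
def derive_amvp_mv(candidate_list, mvp_flag, mvd):
    """Reconstruct the motion vector in AMVP mode: MV = MVP + MVD.

    The MVP flag selects a predictor from the (modified) candidate list;
    the MVD is added component-wise.
    """
    mvp = candidate_list[mvp_flag]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```

Because the modified list places the more probable predictor first, the signalled MVD tends to be smaller, which is the source of the bit savings the text describes.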
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may be bi-prediction motion information, or may be uni-prediction motion information.
  • the bi-predictive motion information may include an L0 reference picture index, an L0 motion vector, an L1 reference picture index, and an L1 motion vector
  • the uni-prediction motion information may include an L0 reference picture index and an L0 motion vector, or an L1 reference picture index and an L1 motion vector.
  • L0 denotes a reference picture list L0 (List 0)
  • L1 denotes a reference picture list L1 (List 1).
  • the decoding apparatus performs inter-prediction of the current block based on the motion information (S1140).
  • a prediction block of the current block can be derived based on the motion information, and a reconstruction block can be derived based on the prediction block.
  • the decoding apparatus may derive a reference block in a reference picture based on the motion information.
  • the motion information may include a motion vector and a reference picture index.
  • the decoding apparatus can derive the reference picture indicated by the reference picture index, among the reference pictures in the reference picture list, as the reference picture of the current block, and the block indicated by the motion vector in the reference picture as the reference block of the current block.
  • the decoding apparatus may generate a prediction sample based on the reference block and, depending on the prediction mode, may use the prediction sample directly as a reconstruction sample or may add a residual sample to the prediction sample to generate a reconstruction sample.
  • if there is a residual sample for the current block, the decoding apparatus may receive information on the residual for the current block from the bitstream.
  • the information on the residual may include a transform coefficient relating to the residual sample.
  • the decoding apparatus may derive the residual sample (or residual sample array) for the current block based on the residual information.
  • the decoding apparatus may generate a reconstructed sample based on the prediction sample and the residual sample, and may derive a reconstructed block or a reconstructed picture based on the reconstructed sample.
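The reconstruction step above can be sketched as prediction plus optional residual, clipped to the sample range. The function name, list representation, and the 8-bit default are illustrative assumptions.

```python
def reconstruct(pred, resid=None, bit_depth=8):
    """Reconstruction samples: prediction plus optional residual, clipped.

    If no residual is signalled (e.g. skip mode) the prediction samples
    are used directly as reconstruction samples; otherwise the residual
    is added sample-wise and the result is clipped to [0, 2^bit_depth - 1].
    """
    max_val = (1 << bit_depth) - 1
    if resid is None:
        return list(pred)
    return [min(max_val, max(0, p + r)) for p, r in zip(pred, resid)]
```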
  • an in-loop filtering procedure such as deblocking filtering and/or an SAO procedure may be applied to the reconstructed picture in order to improve subjective/objective picture quality as necessary.
  • according to the present invention, more precise motion information can be derived by generating a motion information candidate list including the modified candidate of the current block, thereby reducing the bit amount of the information for inter prediction of the current block and improving the overall coding efficiency.
  • according to the present invention, it is possible to generate a modified motion information candidate list by updating the candidates of the motion information candidate list without receiving additional side information, thereby obtaining more accurate motion information; accordingly, the bit amount of the information for the inter prediction of the current block can be reduced and the overall coding efficiency can be improved.
  • the above-described method according to the present invention can be implemented in software, and the encoding apparatus and/or decoding apparatus according to the present invention can be included in a device that performs image processing, for example, a TV, a computer, a smart phone, a set-top box, or a display device.
  • the above-described method may be implemented by a module (a process, a function, and the like) that performs the above-described functions.
  • the module is stored in memory and can be executed by the processor.
  • the memory may be internal or external to the processor and may be coupled to the processor by any of a variety of well known means.
  • the processor may comprise an application-specific integrated circuit (ASIC), other chipset, logic circuitry and / or a data processing device.
  • the memory may include read-only memory (ROM), random access memory (RAM), flash memory, memory cards, storage media, and / or other storage devices.

Abstract

The present invention relates to an image decoding method performed by a decoding device, comprising: obtaining information on inter prediction of a current block through a bitstream; generating a motion information candidate list of the current block based on neighboring blocks of the current block; generating a modified motion information candidate list by updating candidates of the motion information candidate list; deriving motion information of the current block based on the information on the inter prediction and the modified motion information candidate list; and performing inter prediction of the current block based on the motion information.
PCT/KR2017/014073 2017-12-04 2017-12-04 Procédé et dispositif de décodage d'image sur la base d'une liste de candidats d'informations de mouvement modifiées dans un système de codage d'image WO2019112072A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2017/014073 WO2019112072A1 (fr) 2017-12-04 2017-12-04 Procédé et dispositif de décodage d'image sur la base d'une liste de candidats d'informations de mouvement modifiées dans un système de codage d'image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2017/014073 WO2019112072A1 (fr) 2017-12-04 2017-12-04 Procédé et dispositif de décodage d'image sur la base d'une liste de candidats d'informations de mouvement modifiées dans un système de codage d'image

Publications (1)

Publication Number Publication Date
WO2019112072A1 true WO2019112072A1 (fr) 2019-06-13

Family

ID=66751055

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/014073 WO2019112072A1 (fr) 2017-12-04 2017-12-04 Procédé et dispositif de décodage d'image sur la base d'une liste de candidats d'informations de mouvement modifiées dans un système de codage d'image

Country Status (1)

Country Link
WO (1) WO2019112072A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140092874A (ko) * 2011-11-04 2014-07-24 노키아 코포레이션 비디오 인코딩 방법 및 장치
KR20150042164A (ko) * 2010-01-19 2015-04-20 삼성전자주식회사 축소된 예측 움직임 벡터의 후보들에 기초해 움직임 벡터를 부호화, 복호화하는 방법 및 장치
KR101700367B1 (ko) * 2011-06-27 2017-01-26 삼성전자주식회사 영상 복호화 방법 및 장치
WO2017048008A1 (fr) * 2015-09-17 2017-03-23 엘지전자 주식회사 Procédé et appareil de prédiction inter dans un système de codage vidéo
US20170105000A1 (en) * 2011-06-30 2017-04-13 JVC Kenwood Corporation Picture coding device, picture coding method, picture coding program, picture decoding device, picture decoding method, and picture decoding program

Similar Documents

Publication Publication Date Title
WO2017188566A1 (fr) Procédé et appareil d'inter-prédiction dans un système de codage d'images
WO2018070632A1 (fr) Procédé et dispositif de décodage vidéo dans un système de codage vidéo
WO2017052081A1 (fr) Procédé et appareil de prédiction inter dans un système de codage d'images
WO2017022973A1 (fr) Procédé d'interprédiction, et dispositif, dans un système de codage vidéo
WO2018117546A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2017043786A1 (fr) Procédé et dispositif de prédiction intra dans un système de codage vidéo
WO2017034331A1 (fr) Procédé et dispositif de prédiction intra d'échantillon de chrominance dans un système de codage vidéo
WO2017069590A1 (fr) Procédé et dispositif de décodage d'image à base de modélisation dans un système de codage d'image
WO2018062702A1 (fr) Procédé et appareil de prédiction intra dans un système de codage d'images
WO2017188565A1 (fr) Procédé et dispositif de décodage d'image dans un système de codage d'image
WO2017057877A1 (fr) Procédé et appareil de filtrage d'image dans un système de codage d'image
WO2018056709A1 (fr) Procédé et dispositif d'inter-prédiction dans un système de codage d'image
WO2017048008A1 (fr) Procédé et appareil de prédiction inter dans un système de codage vidéo
WO2018021585A1 (fr) Procédé et appareil d'intra-prédiction dans un système de codage d'image
WO2018066791A1 (fr) Procédé et appareil de décodage d'image dans un système de codage d'images
WO2019194507A1 (fr) Procédé de codage d'image basé sur une prédiction de mouvement affine, et dispositif associé
WO2018128223A1 (fr) Procédé et appareil d'inter-prédiction dans un système de codage d'image
WO2018128222A1 (fr) Procédé et appareil de décodage d'image dans un système de codage d'image
WO2017052272A1 (fr) Procédé et appareil pour une prédiction intra dans un système de codage vidéo
WO2017195914A1 (fr) Procédé et appareil d'inter-prédiction dans un système de codage vidéo
WO2020141932A1 (fr) Procédé et appareil de prédiction inter utilisant des mmvd de cpr
WO2019031703A1 (fr) Appareil et procédé de décodage d'image conformément à un modèle linéaire dans un système de codage d'image
WO2019066175A1 (fr) Procédé et dispositif de décodage d'image conformes à une structure divisée de blocs dans un système de codage d'image
WO2019194436A1 (fr) Procédé de codage d'images basé sur un vecteur de mouvement, et appareil associé
WO2019225932A1 (fr) Procédé et appareil de décodage d'image à l'aide de dmvr dans un système de codage d'images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17933931

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17933931

Country of ref document: EP

Kind code of ref document: A1