WO2018097692A2 - Image encoding/decoding method and apparatus, and recording medium storing a bitstream - Google Patents

Image encoding/decoding method and apparatus, and recording medium storing a bitstream

Info

Publication number
WO2018097692A2
WO2018097692A2 (application PCT/KR2017/013672)
Authority
WO
WIPO (PCT)
Prior art keywords
block
prediction
current
motion information
prediction block
Prior art date
Application number
PCT/KR2017/013672
Other languages
English (en)
French (fr)
Korean (ko)
Other versions
WO2018097692A3 (ko)
Inventor
조승현
임성창
강정원
고현석
이진호
이하현
전동산
김휘용
최진수
Original Assignee
한국전자통신연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority to CN202311024704.8A (CN116866593A)
Priority to CN202311020975.6A (CN116886928A)
Priority to CN201780073517.5A (CN110024394B)
Priority to CN202311021525.9A (CN116886929A)
Priority to CN202311023493.6A (CN116886930A)
Priority to CN202311025877.1A (CN116866594A)
Publication of WO2018097692A2
Publication of WO2018097692A3

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103 Selection of coding mode or of prediction mode
    • H04N 19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/109 Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N 19/117 Filters, e.g. for pre-processing or post-processing
    • H04N 19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/124 Quantisation
    • H04N 19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136 Incoming video signal characteristics or properties
    • H04N 19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N 19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/513 Processing of motion vectors
    • H04N 19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H04N 19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N 19/82 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • H04N 19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N 19/86 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • H04N 19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N 19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N 19/96 Tree coding, e.g. quad-tree coding

Definitions

  • the present invention relates to a video encoding/decoding method, apparatus, and a recording medium storing a bitstream. Specifically, the present invention relates to a method and apparatus for image encoding/decoding using overlapped block motion compensation.
  • As demand for high-resolution, high-quality video such as HD (high definition) and UHD (ultra high definition) video increases, image compression technologies such as the following are used:
  • an inter-picture prediction technique that predicts pixel values in the current picture from a picture before or after the current picture;
  • an intra-picture prediction technology for predicting pixel values included in the current picture using pixel information in the current picture
  • transformation and quantization techniques for compressing the energy of the residual signal
  • an entropy coding technique that assigns short codes to frequently occurring values and long codes to rarely occurring values (a toy illustration follows this list).
  • With these technologies, image data can be effectively compressed and then transmitted or stored.
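  • As a toy illustration of the entropy-coding idea in the list above (not the entropy coder of any particular codec), the sketch below assigns shorter codewords to values assumed to occur more frequently; the symbols and code table are made up for illustration.

```python
# Toy prefix code: shorter codewords for more frequently occurring values (illustration only).
code_table = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}  # 'a' is assumed to be the most frequent value

def encode(symbols):
    """Concatenate the codeword of each symbol into a single bit string."""
    return ''.join(code_table[s] for s in symbols)

print(encode('aaabac'))  # '000100110': the frequent 'a' costs 1 bit, rarer symbols cost 2 or 3 bits
```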
  • the present invention can provide a method and apparatus for performing overlapped block motion compensation that reduce the computational complexity of the weighted-sum calculation and of deriving neighboring block motion information.
  • the image decoding method may include: generating a first prediction block of the current block using motion information of the current block; determining, among the motion information of at least one neighboring lower block of a current lower block, motion information available for generating a second prediction block; generating a second prediction block of the at least one current lower block using the determined at least one piece of motion information; and generating a final prediction block based on a weighted sum of the first prediction block of the current block and the second prediction block of the at least one current lower block.
  • in the determining of the motion information available for generating the second prediction block, the available motion information may be determined based on at least one of the magnitude and the direction of the motion vector of the neighboring lower block.
  • in the determining of the motion information available for generating the second prediction block, the available motion information may be determined based on the reference picture POC (Picture Order Count) of the neighboring lower block and the reference picture POC of the current block.
  • only when the reference picture POC of the neighboring lower block is the same as the reference picture POC of the current block may the motion information of the neighboring lower block be determined to be motion information available for generating the second prediction block.
  • the shape of the current lower block may be at least one of a square shape and a rectangular shape.
  • in the generating of the second prediction block, the at least one second prediction block may be generated using motion information of the at least one neighboring lower block only when the current block is not in a motion vector derivation mode or an affine motion compensation mode.
  • in the generating of the final prediction block, when the current lower block is included in a boundary region of the current block, the final prediction block may be generated by weighted-summing only the samples located in some rows or some columns adjacent to the boundary of the first prediction block and the second prediction block.
  • the samples located in the rows or columns adjacent to the boundary of the first prediction block and the second prediction block may be determined based on at least one of the block size of the current lower block, the magnitude and direction of the motion vector of the current lower block, the inter prediction indicator of the current block, and the reference picture POC of the current block.
  • in the generating of the final prediction block, the weighted sum may be performed by applying different weights to the samples of the first prediction block and the second prediction block according to at least one of the magnitude and the direction of the motion vector of the current lower block.
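  • As a rough illustration of the weighted-sum step described above (a sketch, not the patent's normative procedure), the following Python code blends a first prediction block with second prediction blocks obtained from neighboring lower-block motion information; the 4x4 lower-block size and the per-row/column weights are assumptions chosen only for illustration.

```python
import numpy as np

def obmc_weighted_sum(first_pred, second_preds, weights=(3/4, 7/8, 15/16, 31/32)):
    """Illustrative overlapped block motion compensation weighted sum for a 4x4 lower block.

    first_pred   : 4x4 array predicted with the current block's motion information.
    second_preds : dict mapping 'above'/'left' to 4x4 arrays predicted with the
                   corresponding neighboring lower block's motion information.
    weights      : weights applied to the first prediction block in the rows/columns
                   nearest the shared boundary; (1 - w) is applied to the second block.
    """
    final_pred = first_pred.astype(np.float64)
    for direction, second in second_preds.items():
        for k, w in enumerate(weights):
            if direction == 'above':   # blend rows adjacent to the upper boundary
                final_pred[k, :] = w * final_pred[k, :] + (1 - w) * second[k, :]
            elif direction == 'left':  # blend columns adjacent to the left boundary
                final_pred[:, k] = w * final_pred[:, k] + (1 - w) * second[:, k]
    return np.round(final_pred).astype(first_pred.dtype)

# Usage with toy data
first = np.full((4, 4), 100, dtype=np.int32)
seconds = {'above': np.full((4, 4), 80, dtype=np.int32),
           'left':  np.full((4, 4), 120, dtype=np.int32)}
print(obmc_weighted_sum(first, seconds))
```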
  • the image encoding method may include: generating a first prediction block of the current block using motion information of the current block; determining, among the motion information of at least one neighboring lower block of a current lower block, motion information available for generating a second prediction block; generating a second prediction block of the at least one current lower block using the determined at least one piece of motion information; and generating a final prediction block based on a weighted sum of the first prediction block of the current block and the second prediction block of the at least one current lower block.
  • in the determining of the motion information available for generating the second prediction block, the available motion information may be determined based on at least one of the magnitude and the direction of the motion vector of the neighboring lower block.
  • in the determining of the motion information available for generating the second prediction block, the available motion information may be determined based on the reference picture POC of the neighboring lower block and the reference picture POC of the current block.
  • only when the reference picture POC of the neighboring lower block is the same as the reference picture POC of the current block may the motion information of the neighboring lower block be determined to be motion information available for generating the second prediction block.
  • the shape of the current lower block may be at least one of a square shape and a rectangular shape.
  • in the generating of the second prediction block, the at least one second prediction block may be generated using motion information of the at least one neighboring lower block only when the current block is not in a motion vector derivation mode or an affine motion compensation mode.
  • in the generating of the final prediction block, when the current lower block is included in a boundary region of the current block, the final prediction block may be generated by weighted-summing only the samples located in some rows or some columns adjacent to the boundary of the first prediction block and the second prediction block.
  • the samples located in the rows or columns adjacent to the boundary of the first prediction block and the second prediction block may be determined based on at least one of the block size of the current lower block, the magnitude and direction of the motion vector of the current lower block, the inter prediction indicator of the current block, and the reference picture POC of the current block.
  • in the generating of the final prediction block, the weighted sum may be performed by applying different weights to the samples of the first prediction block and the second prediction block according to at least one of the magnitude and the direction of the motion vector of the current lower block.
  • the recording medium of the present invention may store a bitstream generated by an image encoding method that includes: generating a first prediction block of the current block using motion information of the current block; determining, among the motion information of at least one neighboring lower block of a current lower block, motion information available for generating a second prediction block; generating a second prediction block of the at least one current lower block using the determined at least one piece of motion information; and generating a final prediction block based on a weighted sum of the first prediction block of the current block and the second prediction block of the at least one current lower block.
  • an image encoding / decoding method and apparatus with improved compression efficiency can be provided.
  • the computational complexity of the encoder and the decoder of an image can be reduced.
  • FIG. 1 is a block diagram illustrating a configuration of an encoding apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a configuration of a decoding apparatus according to an embodiment of the present invention.
  • FIG. 3 is a diagram schematically illustrating a division structure of an image when encoding and decoding an image.
  • FIG. 4 is a diagram for describing an embodiment of an inter prediction process.
  • FIG. 5 is a flowchart illustrating an image encoding method according to an embodiment of the present invention.
  • FIG. 6 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
  • FIG. 7 is a flowchart illustrating a video encoding method according to another embodiment of the present invention.
  • FIG. 8 is a flowchart illustrating an image decoding method according to another embodiment of the present invention.
  • FIG. 9 is a diagram for describing an example of deriving a spatial motion vector candidate of a current block.
  • FIG. 10 is a diagram for describing an example of deriving a temporal motion vector candidate of a current block.
  • FIG. 11 illustrates an example in which a spatial merge candidate is added to a merge candidate list.
  • FIG. 12 illustrates an example in which a temporal merge candidate is added to a merge candidate list.
  • FIG. 13 is a diagram for explaining an example of performing overlapped block motion compensation on a lower-block basis.
  • FIG. 14 is a diagram for explaining an example of performing overlapped block motion compensation by using motion information of a lower block of a corresponding position block.
  • FIG. 15 is a diagram for explaining an example in which overlapped block motion compensation is performed using motion information of a block adjacent to a boundary area of a reference block.
  • FIG. 16 is a diagram for explaining an example of performing overlapped block motion compensation on a lower-block-group basis.
  • FIG. 17 is a diagram for explaining an example of the number of motion information used for overlapped block motion compensation.
  • FIGS. 18 and 19 are diagrams for describing a derivation order of motion information used to generate a second prediction block.
  • FIG. 20 is a diagram for explaining an example of determining motion information available for generating a second prediction block by comparing the POC of a reference picture of the current lower block with the POC of a reference picture of a neighboring lower block.
  • FIG. 21 is a diagram for describing an embodiment of applying a weight when calculating a weighted sum of a first prediction block and a second prediction block.
  • FIG. 22 is a diagram for describing an embodiment in which different weights are applied according to sample positions in a block when calculating a weighted sum of a first prediction block and a second prediction block.
  • FIG. 23 is a diagram for explaining an embodiment in which weighted sums of a first prediction block and a second prediction block are cumulatively calculated in a predetermined order when overlapping block motion compensation is performed.
  • FIG. 24 is a diagram for explaining an embodiment in which a weighted sum of a first prediction block and a second prediction block is calculated when overlapping block motion compensation is performed.
  • FIG. 25 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
  • first and second may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • When any component of the invention is said to be "connected" or "coupled" to another component, it may be directly connected or coupled to that other component, but it should be understood that other components may be present in between. On the other hand, when a component is referred to as being "directly connected" or "directly coupled" to another component, it should be understood that no other component exists in between.
  • the components shown in the embodiments of the present invention are shown independently to represent different characteristic functions, which does not mean that each component consists of separate hardware or a single software unit.
  • that is, each component is listed separately for convenience of description; at least two of the components may be combined into one component, or one component may be divided into a plurality of components each performing a function.
  • Integrated and separate embodiments of the components are also included within the scope of the present invention without departing from the spirit of the invention.
  • Some components of the present invention are not essential components for performing essential functions in the present invention but may be optional components for improving performance.
  • the present invention can be implemented including only the components essential for realizing the essence of the invention, excluding components used merely for improving performance, and a structure including only the essential components, without the optional performance-improving components, is also included in the scope of the present invention.
  • an image may mean one picture constituting a video, and may represent a video itself.
  • "encoding and / or decoding of an image” may mean “encoding and / or decoding of a video” and may mean “encoding and / or decoding of one of images constituting the video.” It may be.
  • the picture may have the same meaning as the image.
  • Encoder: a device that performs encoding.
  • Decoder: a device that performs decoding.
  • Block: an MxN array of samples.
  • M and N mean positive integer values, and a block may often mean a two-dimensional sample array.
  • a block may mean a unit.
  • the current block may mean an encoding target block to be encoded at the time of encoding, and a decoding target block to be decoded at the time of decoding.
  • the current block may be at least one of a coding block, a prediction block, a residual block, and a transform block.
  • Sample: the basic unit constituting a block. It can be expressed as a value from 0 to 2^Bd - 1 according to the bit depth (Bd). In the present invention, a sample may be used with the same meaning as a pixel or a pel.
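  • For example, the range of sample values implied by the bit depth can be computed directly; the snippet below simply restates the 0 to 2^Bd - 1 range given above.

```python
def sample_range(bit_depth):
    """Return the (min, max) sample values for a given bit depth Bd."""
    return 0, (1 << bit_depth) - 1

print(sample_range(8))   # (0, 255)
print(sample_range(10))  # (0, 1023)
```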
  • Unit A unit of image encoding and decoding.
  • the unit may be a region obtained by dividing one image.
  • a unit may mean a divided unit when a single image is divided into subdivided units to be encoded or decoded.
  • a predetermined process may be performed for each unit.
  • One unit may be further divided into subunits having a smaller size than the unit.
  • the unit may mean a block, a macroblock, a coding tree unit, a coding tree block, a coding unit, a coding block, a prediction unit, a prediction block, a residual unit, a residual block, a transform unit, a transform block, or the like.
  • in order to distinguish it from a block, the term unit may refer to a luma component block, the chroma component blocks corresponding thereto, and the syntax elements for each block, taken together.
  • the unit may have various sizes and shapes, and in particular, the shape of the unit may include a geometric figure that can be represented in two dimensions such as a square, a trapezoid, a triangle, a pentagon, as well as a rectangle.
  • the unit information may include at least one of a type of a unit indicating a coding unit, a prediction unit, a residual unit, a transform unit, and the like, a size of a unit, a depth of a unit, an encoding and decoding order of the unit, and the like.
  • Coding Tree Unit: composed of one luma component (Y) coding tree block and the two associated chroma component (Cb, Cr) coding tree blocks. It may also mean these blocks together with the syntax elements for each block.
  • Each coding tree unit may be split using one or more partitioning methods, such as a quadtree or a binary tree, to form sub-units such as coding units, prediction units, and transform units. The coding tree unit may be used as a term for the pixel block that becomes a processing unit in the image encoding/decoding process, such as when splitting an input image.
  • Coding Tree Block: a term used to refer to any one of a Y coding tree block, a Cb coding tree block, and a Cr coding tree block.
  • Neighbor block A block adjacent to the current block.
  • the block adjacent to the current block may mean a block in which the boundary of the current block is in contact or a block located within a predetermined distance from the current block.
  • the neighboring block may mean a block adjacent to a vertex of the current block.
  • the block adjacent to the vertex of the current block may be a block vertically adjacent to a neighboring block horizontally adjacent to the current block or a block horizontally adjacent to a neighboring block vertically adjacent to the current block.
  • the neighboring block may mean a restored neighboring block.
  • Reconstructed Neighbor Block A neighboring block that is already encoded or decoded spatially / temporally around the current block.
  • the restored neighboring block may mean a restored neighboring unit.
  • the reconstructed spatial neighboring block may be a block in the current picture and a block already reconstructed through encoding and / or decoding.
  • the reconstructed temporal neighboring block may be a block at the same position as the current block of the current picture within a reference picture, or a neighboring block thereof, that has already been reconstructed.
  • Unit Depth The degree to which the unit is divided. In the tree structure, the root node has the shallowest depth, and the leaf node has the deepest depth. In addition, when a unit is expressed in a tree structure, a level in which the unit exists may mean a unit depth.
  • Bitstream means a string of bits including encoded image information.
  • Parameter Set Corresponds to header information among structures in the bitstream. At least one of a video parameter set, a sequence parameter set, a picture parameter set, and an adaptation parameter set may be included in the parameter set. In addition, the parameter set may include slice header and tile header information.
  • Parsing This may mean determining a value of a syntax element by entropy decoding the bitstream or may mean entropy decoding itself.
  • Symbol: may mean at least one of a syntax element, a coding parameter, a value of a transform coefficient, and the like of an encoding/decoding target unit.
  • the symbol may mean an object of entropy encoding or a result of entropy decoding.
  • Prediction unit A basic unit when performing prediction, such as inter prediction, intra prediction, inter compensation, intra compensation, motion compensation.
  • One prediction unit may be divided into a plurality of partitions or lower prediction units having a small size.
  • Prediction Unit Partition A prediction unit partitioned form.
  • Reference Picture List refers to a list including one or more reference pictures used for inter prediction or motion compensation.
  • the types of reference picture lists may include LC (List Combined), L0 (List 0), L1 (List 1), L2 (List 2), L3 (List 3), and the like, and one or more reference picture lists may be used for inter prediction.
  • Inter Prediction Indicator This may mean an inter prediction direction (unidirectional prediction, bidirectional prediction, etc.) of the current block. Alternatively, this may mean the number of reference pictures used when generating the prediction block of the current block. Alternatively, this may mean the number of prediction blocks used when performing inter prediction or motion compensation on the current block.
  • Reference Picture Index refers to an index indicating a specific reference picture in the reference picture list.
  • Reference Picture refers to an image referenced by a specific block for inter prediction or motion compensation.
  • Motion Vector A two-dimensional vector used for inter prediction or motion compensation, and may mean an offset between an encoding / decoding target image and a reference image.
  • (mvX, mvY) may represent a motion vector
  • mvX may represent a horizontal component
  • mvY may represent a vertical component.
  • Motion Vector Candidate A block that is a prediction candidate when predicting a motion vector, or a motion vector of the block.
  • the motion vector candidate may be included in the motion vector candidate list.
  • a motion vector candidate list may mean a list constructed using motion vector candidates.
  • Motion Vector Candidate Index An indicator indicating a motion vector candidate in a motion vector candidate list. It may also be referred to as an index of a motion vector predictor.
  • Motion Information At least among motion vector, reference picture index, inter prediction indicator, as well as reference picture list information, reference picture, motion vector candidate, motion vector candidate index, merge candidate, merge index, etc. It may mean information including one.
  • Merge Candidate List A list constructed using merge candidates.
  • Merge Candidate Means a spatial merge candidate, a temporal merge candidate, a combined merge candidate, a combined both prediction merge candidate, a zero merge candidate, and the like.
  • the merge candidate may include motion information such as an inter prediction indicator, a reference image index for each list, and a motion vector.
  • Merge Index Means information indicating a merge candidate in the merge candidate list.
  • the merge index may indicate the block from which a merge candidate is derived, among the reconstructed blocks spatially/temporally adjacent to the current block.
  • the merge index may indicate at least one of motion information included in the merge candidate.
  • Transform Unit A basic unit when performing residual signal encoding / decoding such as transform, inverse transform, quantization, inverse quantization, and transform coefficient encoding / decoding.
  • One transform unit may be divided into a plurality of transform units having a small size.
  • Scaling The process of multiplying the transform coefficient level by the factor.
  • the transform coefficients can be generated as a result of scaling on the transform coefficient level. Scaling can also be called dequantization.
  • Quantization Parameter: a value used when generating a transform coefficient level for a transform coefficient in quantization. Alternatively, it may mean a value used when generating a transform coefficient by scaling the transform coefficient level in inverse quantization.
  • the quantization parameter may be a value mapped to a quantization step size.
  • Residual quantization parameter (Delta Quantization Parameter): A difference value between the predicted quantization parameter and the quantization parameter of the encoding / decoding target unit.
  • Scan A method of sorting the order of coefficients in a block or matrix. For example, sorting a two-dimensional array into a one-dimensional array is called a scan. Alternatively, arranging the one-dimensional array in the form of a two-dimensional array may also be called a scan or an inverse scan.
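  • As a simple illustration of the scan just described, the sketch below flattens a 2D coefficient block into a 1D array and restores it with the inverse scan; raster order and the 4x4 size are assumptions for illustration (real codecs also use diagonal or zig-zag scan orders).

```python
import numpy as np

def forward_scan(block):
    """Flatten a 2D coefficient block into a 1D array (raster-order scan)."""
    return block.reshape(-1).copy()

def inverse_scan(coeffs, rows, cols):
    """Rearrange a 1D coefficient array back into a 2D block (inverse scan)."""
    return coeffs.reshape(rows, cols).copy()

block = np.arange(16).reshape(4, 4)
flat = forward_scan(block)
assert np.array_equal(inverse_scan(flat, 4, 4), block)
```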
  • Transform Coefficient A coefficient value generated after the transform is performed in the encoder. Alternatively, this may mean a coefficient value generated after performing at least one of entropy decoding and inverse quantization in the decoder.
  • a quantized level or a quantized transform coefficient level, obtained by applying quantization to a transform coefficient or a residual signal, may also be included in the meaning of a transform coefficient in the present invention.
  • Quantized Level A value generated by performing quantization on a transform coefficient or a residual signal in an encoder. Or, it may mean a value that is the object of inverse quantization before performing inverse quantization in the decoder. Similarly, the quantized transform coefficient level resulting from the transform and quantization may also be included in the meaning of the quantized level.
  • Non-zero Transform Coefficient A non-zero transform coefficient, or a non-zero transform coefficient level.
  • Quantization Matrix A matrix used in a quantization or inverse quantization process to improve the subjective or objective image quality of an image.
  • the quantization matrix may also be called a scaling list.
  • Quantization Matrix Coefficient means each element in the quantization matrix. Quantization matrix coefficients may also be referred to as matrix coefficients.
  • Default Matrix A predetermined quantization matrix defined in advance in the encoder and the decoder.
  • Non-default Matrix A quantization matrix that is not predefined in the encoder and the decoder and is signaled by the user.
  • FIG. 1 is a block diagram illustrating a configuration of an encoding apparatus according to an embodiment of the present invention.
  • the encoding apparatus 100 may be an encoder, a video encoding apparatus, or an image encoding apparatus.
  • the video may include one or more images.
  • the encoding apparatus 100 may sequentially encode one or more images.
  • the encoding apparatus 100 may include a motion predictor 111, a motion compensator 112, an intra predictor 120, a switch 115, a subtractor 125, a transformer 130, a quantization unit 140, an entropy encoder 150, an inverse quantizer 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
  • the encoding apparatus 100 may encode the input image in an intra mode and / or an inter mode.
  • the encoding apparatus 100 may generate a bitstream through encoding of an input image, and may output the generated bitstream.
  • the generated bitstream can be stored in a computer readable recording medium or streamed via wired / wireless transmission medium.
  • when the intra mode is used as the prediction mode, the switch 115 may be switched to intra, and when the inter mode is used as the prediction mode, the switch 115 may be switched to inter.
  • the intra mode may mean an intra prediction mode
  • the inter mode may mean an inter prediction mode.
  • the encoding apparatus 100 may generate a prediction block for the input block of the input image.
  • the encoding apparatus 100 may encode a residual between the input block and the prediction block.
  • the input image may be referred to as a current image that is a target of current encoding.
  • the input block may be referred to as a current block or an encoding target block that is a target of the current encoding.
  • the intra prediction unit 120 may use a pixel value of a block that is already encoded / decoded around the current block as a reference pixel.
  • the intra predictor 120 may perform spatial prediction using the reference pixel, and generate prediction samples for the input block through spatial prediction.
  • Intra prediction may refer to intra prediction.
  • the motion predictor 111 may search an area that best matches the input block from the reference image in the motion prediction process, and derive a motion vector using the searched area.
  • the reference picture may be stored in the reference picture buffer 190.
  • the motion compensator 112 may generate a prediction block by performing motion compensation using a motion vector.
  • inter prediction may mean inter prediction or motion compensation.
  • the motion predictor 111 and the motion compensator 112 may generate a prediction block by applying an interpolation filter to a part of a reference image when the motion vector does not have an integer value.
  • based on the coding unit, it may be determined whether the motion prediction and motion compensation method of a prediction unit included in the coding unit is a skip mode, a merge mode, an advanced motion vector prediction (AMVP) mode, or a current picture reference mode, and inter prediction or motion compensation may be performed according to each mode.
  • the subtractor 125 may generate a residual block using the difference between the input block and the prediction block.
  • the residual block may be referred to as the residual signal.
  • the residual signal may mean a difference between the original signal and the prediction signal.
  • the residual signal may be a signal generated by transforming, quantizing, or transforming and quantizing the difference between the original signal and the prediction signal.
  • the residual block may be a residual signal in block units.
  • the transform unit 130 may generate a transform coefficient by performing transform on the residual block, and output a transform coefficient.
  • the transform coefficient may be a coefficient value generated by performing transform on the residual block.
  • the transform unit 130 may omit the transform on the residual block.
  • Quantized levels can be generated by applying quantization to transform coefficients or residual signals.
  • the quantized level may also be referred to as a transform coefficient.
  • the quantization unit 140 may generate a quantized level by quantizing the transform coefficient or the residual signal according to the quantization parameter, and output the quantized level. In this case, the quantization unit 140 may quantize the transform coefficients using the quantization matrix.
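  • A minimal sketch of the quantization and scaling (inverse quantization) relationship described above, assuming a plain scalar quantizer with an HEVC-style step size of roughly 2^((QP - 4) / 6); the rounding offsets and quantization matrices of a real codec are omitted.

```python
def quant_step(qp):
    """Approximate quantization step size for a quantization parameter QP (assumed HEVC-style mapping)."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff, qp):
    """Map a transform coefficient to a quantized level."""
    return int(round(coeff / quant_step(qp)))

def dequantize(level, qp):
    """Scale a quantized level back to a reconstructed transform coefficient."""
    return level * quant_step(qp)

qp = 27
coeff = 150.0
level = quantize(coeff, qp)
print(level, dequantize(level, qp))  # quantized level and its reconstruction
```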
  • the entropy encoder 150 may generate a bitstream by performing entropy encoding, according to a probability distribution, on the values calculated by the quantizer 140 or on the coding parameter values calculated in the encoding process, and may output the bitstream.
  • the entropy encoder 150 may perform entropy encoding on information about pixels of an image and information for decoding an image.
  • the information for decoding the image may include a syntax element.
  • the entropy encoder 150 may use an encoding method such as exponential Golomb, context-adaptive variable length coding (CAVLC), or context-adaptive binary arithmetic coding (CABAC) for entropy encoding.
  • the entropy encoder 150 may perform entropy coding using a variable length coding (VLC) table.
  • the entropy coding unit 150 may derive a binarization method of a target symbol and a probability model of a target symbol/bin, and may then perform arithmetic coding using the derived binarization method, probability model, and context model.
  • the entropy encoder 150 may change a two-dimensional block shape coefficient into a one-dimensional vector form through a transform coefficient scanning method to encode a transform coefficient level.
  • a coding parameter may include not only information (flags, indices, etc.) encoded by the encoder and signaled to the decoder, such as syntax elements, but also information derived in the encoding or decoding process, and may mean information necessary for encoding or decoding an image.
  • signaling a flag or index may mean that the encoder entropy-encodes the flag or index and includes it in the bitstream, and that the decoder entropy-decodes the flag or index from the bitstream.
  • the encoded current image may be used as a reference image for another image to be processed later. Accordingly, the encoding apparatus 100 may reconstruct or decode the encoded current image and store the reconstructed or decoded image as a reference image.
  • the quantized level may be inverse quantized in the inverse quantizer 160 and inverse transformed in the inverse transform unit 170.
  • the inverse quantized and / or inverse transformed coefficients may be summed with the prediction block via the adder 175.
  • a reconstructed block may be generated by adding the inverse quantized and / or inverse transformed coefficients and the prediction block.
  • the inverse quantized and / or inverse transformed coefficient may mean a coefficient in which at least one or more of inverse quantization and inverse transformation have been performed, and may mean a reconstructed residual block.
  • the reconstructed block may pass through the filter unit 180.
  • the filter unit 180 may apply at least one of a deblocking filter, a sample adaptive offset (SAO), an adaptive loop filter (ALF), and the like to the reconstructed block or the reconstructed image.
  • the filter unit 180 may be referred to as an in-loop filter.
  • the deblocking filter may remove block distortion generated at boundaries between blocks.
  • whether to apply the deblocking filter to the current block may be determined based on the pixels included in several columns or rows of the block.
  • different filters may be applied according to the required deblocking filtering strength.
  • a sample adaptive offset may be used to add an appropriate offset to the pixel values to compensate for encoding errors.
  • the sample adaptive offset may correct the offset with the original image on a pixel basis for the deblocked image. After dividing the pixels included in the image into a predetermined number of areas, an area to be offset may be determined, an offset may be applied to the corresponding area, or an offset may be applied in consideration of edge information of each pixel.
  • the adaptive loop filter may perform filtering based on a comparison value between the reconstructed image and the original image. After dividing a pixel included in an image into a predetermined group, a filter to be applied to the corresponding group may be determined and filtering may be performed for each group. Information related to whether to apply the adaptive loop filter may be signaled for each coding unit (CU), and the shape and filter coefficient of the adaptive loop filter to be applied according to each block may vary.
  • the reconstructed block or the reconstructed image that has passed through the filter unit 180 may be stored in the reference picture buffer 190.
  • FIG. 2 is a block diagram illustrating a configuration of a decoding apparatus according to an embodiment of the present invention.
  • the decoding apparatus 200 may be a decoder, a video decoding apparatus, or an image decoding apparatus.
  • the decoding apparatus 200 may include an entropy decoder 210, an inverse quantizer 220, an inverse transform unit 230, an intra predictor 240, a motion compensator 250, an adder 255, a filter unit 260, and a reference picture buffer 270.
  • the decoding apparatus 200 may receive a bitstream output from the encoding apparatus 100.
  • the decoding apparatus 200 may receive a bitstream stored in a computer readable recording medium or may receive a bitstream streamed through a wired / wireless transmission medium.
  • the decoding apparatus 200 may decode the bitstream in an intra mode or an inter mode.
  • the decoding apparatus 200 may generate a reconstructed image or a decoded image through decoding, and output the reconstructed image or the decoded image.
  • when the prediction mode used for decoding is an intra mode, the switch may be switched to intra; when the prediction mode used for decoding is an inter mode, the switch may be switched to inter.
  • the decoding apparatus 200 may obtain a reconstructed residual block by decoding the input bitstream, and generate a prediction block. When the reconstructed residual block and the prediction block are obtained, the decoding apparatus 200 may generate a reconstruction block to be decoded by adding the reconstructed residual block and the prediction block.
  • the decoding target block may be referred to as a current block.
  • the entropy decoder 210 may generate symbols by performing entropy decoding according to a probability distribution of the bitstream.
  • the generated symbols may include symbols in the form of quantized levels.
  • the entropy decoding method may be an inverse process of the above-described entropy encoding method.
  • the entropy decoder 210 may change the one-dimensional vector form coefficient into a two-dimensional block form through a transform coefficient scanning method to decode the transform coefficient level.
  • the quantized level may be inverse quantized by the inverse quantizer 220 and inversely transformed by the inverse transformer 230.
  • the quantized level may be generated as a reconstructed residual block as a result of inverse quantization and / or inverse transformation.
  • the inverse quantization unit 220 may apply a quantization matrix to the quantized level.
  • the intra predictor 240 may generate a prediction block by performing spatial prediction using pixel values of blocks that are already decoded around the decoding target block.
  • the motion compensator 250 may generate a prediction block by performing motion compensation using the motion vector and a reference image stored in the reference picture buffer 270.
  • the motion compensator 250 may generate a prediction block by applying an interpolation filter to a portion of the reference image.
  • based on the coding unit, it may be determined whether the motion compensation method of a prediction unit included in the coding unit is a skip mode, a merge mode, an AMVP mode, or a current picture reference mode, and motion compensation may be performed according to each mode.
  • the adder 255 may generate a reconstructed block by adding the reconstructed residual block and the predictive block.
  • the filter unit 260 may apply at least one of a deblocking filter, a sample adaptive offset, and an adaptive loop filter to the reconstructed block or the reconstructed image.
  • the filter unit 260 may output the reconstructed image.
  • the reconstructed block or reconstructed picture may be stored in the reference picture buffer 270 to be used for inter prediction.
  • FIG. 3 is a diagram schematically illustrating a division structure of an image when encoding and decoding an image. FIG. 3 schematically shows an embodiment in which one unit is divided into a plurality of sub-units.
  • a coding unit may be used in encoding and decoding.
  • a coding unit may be used as a basic unit of image encoding / decoding.
  • the coding unit may be used as a unit for distinguishing between the intra prediction mode and the inter prediction mode during image encoding/decoding.
  • the coding unit may be a basic unit used for a process of prediction, transform, quantization, inverse transform, inverse quantization, or encoding / decoding of transform coefficients.
  • the image 300 is sequentially divided into units of a largest coding unit (LCU), and a split structure is determined by units of an LCU.
  • the LCU may be used as the same meaning as a coding tree unit (CTU).
  • the division of the unit may mean division of a block corresponding to the unit.
  • the block division information may include information about a depth of a unit.
  • the depth information may indicate the number and / or degree of division of the unit.
  • One unit may be hierarchically divided with depth information based on a tree structure. Each divided subunit may have depth information.
  • the depth information may be information indicating the size of a CU and may be stored for each CU.
  • the partition structure may mean the distribution of coding units (CUs) within the LCU 310. This distribution may be determined according to whether to divide one CU into a plurality of CUs (a positive integer of two or more, such as 2, 4, 8, 16, etc.).
  • the horizontal and vertical sizes of the CUs created by splitting are either half of the horizontal and vertical sizes of the CU before splitting, or smaller than the horizontal and vertical sizes of the CU before splitting, depending on the number of splits.
  • the depth of the LCU may be 0, and the depth of the smallest coding unit (SCU) may be a predefined maximum depth.
  • the LCU may be a coding unit having a maximum coding unit size as described above, and the SCU may be a coding unit having a minimum coding unit size.
  • the division starts from the LCU 310, and the depth of the CU increases by one each time the division reduces the horizontal size and / or vertical size of the CU.
  • information on whether the CU is split may be expressed through split information of the CU.
  • the split information may be 1 bit of information. All CUs except the SCU may include partition information. For example, if the value of the partition information is the first value, the CU may not be split, and if the value of the partition information is the second value, the CU may be split.
  • an LCU having a depth of 0 may be a 64 ⁇ 64 block. 0 may be the minimum depth.
  • An SCU of depth 3 may be an 8x8 block. 3 may be the maximum depth.
  • CUs of 32x32 blocks and 16x16 blocks may be represented by depth 1 and depth 2, respectively.
  • for example, when one coding unit is split into four coding units, the horizontal and vertical sizes of the four split coding units may each be half the horizontal and vertical sizes of the coding unit before splitting.
  • for example, when a 32x32 coding unit is split into four coding units, each of the four split coding units may have a size of 16x16.
  • in this case, the coding unit is said to be split in a quad-tree form.
  • when one coding unit is split into two coding units, the horizontal or vertical size of the two split coding units may be half the horizontal or vertical size of the coding unit before splitting.
  • for example, when a 32x32 coding unit is split into two coding units, each of the two split coding units may have a size of 16x32.
  • in this case, the coding unit is said to be split in a binary-tree form.
  • the LCU 320 of FIG. 3 is an example of an LCU to which both quadtree type partitioning and binary tree type partitioning are applied.
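  • The size relationships described above can be summarized in a small helper; the sketch below computes child sizes for a quadtree split (four children, each halved in both directions) and a binary split (two children, halved in one direction), purely as an illustration of the text.

```python
def quadtree_children(width, height):
    """A quadtree split produces four children, each half the width and half the height."""
    return [(width // 2, height // 2)] * 4

def binary_children(width, height, vertical=True):
    """A binary split produces two children halved vertically (width) or horizontally (height)."""
    return [(width // 2, height)] * 2 if vertical else [(width, height // 2)] * 2

print(quadtree_children(32, 32))               # [(16, 16), (16, 16), (16, 16), (16, 16)]
print(binary_children(32, 32, vertical=True))  # [(16, 32), (16, 32)]
```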
  • FIG. 4 is a diagram for describing an embodiment of an inter prediction process.
  • the quadrangle shown in FIG. 4 may represent an image. Also, in FIG. 4, an arrow may indicate a prediction direction. Each picture may be classified into an I picture (Intra Picture), a P picture (Predictive Picture), a B picture (Bi-predictive Picture), and the like.
  • the I picture may be encoded through intra prediction without inter prediction.
  • the P picture may be encoded through inter prediction using only reference pictures existing in one direction (eg, forward or reverse direction).
  • the B picture may be encoded through inter-picture prediction using reference pictures that exist in both directions (eg, forward and reverse).
  • the encoder may perform inter prediction or motion compensation, and the decoder may perform motion compensation corresponding thereto.
  • Inter prediction or motion compensation may be performed using a reference picture and motion information.
  • the motion information on the current block may be derived during inter prediction by each of the encoding apparatus 100 and the decoding apparatus 200.
  • the motion information may be derived using motion information of the reconstructed neighboring block, motion information of a collocated block (col block), and/or a block adjacent to the col block.
  • the col block may be a block corresponding to the spatial position of the current block within a collocated picture (col picture).
  • the col picture may be one picture among at least one reference picture included in the reference picture list.
  • the method of deriving the motion information may vary depending on the prediction mode of the current block.
  • a prediction mode applied for inter prediction may include an AMVP mode, a merge mode, a skip mode, a current picture reference mode, and the like.
  • the merge mode may be referred to as a motion merge mode.
  • for example, when the AMVP mode is applied, a motion vector candidate list may be generated.
  • a motion vector candidate may be derived using the generated motion vector candidate list.
  • the motion information of the current block may be determined based on the derived motion vector candidate.
  • the motion vector of the collocated block, or the motion vector of a block adjacent to the collocated block, may be referred to as a temporal motion vector candidate, and the motion vector of a reconstructed neighboring block may be referred to as a spatial motion vector candidate.
  • the encoding apparatus 100 may calculate a motion vector difference (MVD) between the motion vector and the motion vector candidate of the current block, and may entropy-encode the MVD.
  • the encoding apparatus 100 may generate a bitstream by entropy encoding a motion vector candidate index.
  • the motion vector candidate index may indicate an optimal motion vector candidate selected from the motion vector candidates included in the motion vector candidate list.
  • the decoding apparatus 200 may entropy decode the motion vector candidate index from the bitstream, and select the motion vector candidate of the decoding target block from the motion vector candidates included in the motion vector candidate list using the entropy decoded motion vector candidate index.
  • the decoding apparatus 200 may derive the motion vector of the decoding object block through the sum of the entropy decoded MVD and the motion vector candidate.
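  • As an illustrative sketch of the AMVP-style signaling described above, the following shows how a decoder could reconstruct a motion vector from an entropy-decoded candidate index and MVD; the function names, the tuple representation of motion vectors, and the example candidate list are assumptions made for illustration and are not syntax of the present invention.

```python
# Minimal sketch of AMVP-style motion vector reconstruction (assumed names).
# Encoder side: MVD = MV - MVP for the selected candidate.
# Decoder side: MV = MVP + MVD, with MVP chosen by the decoded index.

def compute_mvd(mv, mvp):
    """Encoder: motion vector difference between MV and the chosen predictor."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def reconstruct_mv(mvp_list, mvp_idx, mvd):
    """Decoder: add the decoded MVD to the candidate selected by mvp_idx."""
    mvp = mvp_list[mvp_idx]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

candidates = [(4, -2), (3, 0)]                    # assumed candidate list
mvd = compute_mvd((4, 2), candidates[1])          # encoder: (1, 2)
print(reconstruct_mv(candidates, 1, mvd))         # decoder: (4, 2)
```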
  • the bitstream may include a reference picture index and the like indicating a reference picture.
  • the reference image index may be entropy encoded and signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream.
  • the decoding apparatus 200 may generate a prediction block for the decoding target block based on the derived motion vector and the reference image index information.
  • the merge mode may mean merging of motions for a plurality of blocks.
  • the merge mode may refer to a mode of deriving motion information of the current block from motion information of neighboring blocks.
  • a merge candidate list may be generated using motion information of the reconstructed neighboring block and/or motion information of the col block.
  • the motion information may include at least one of 1) a motion vector, 2) a reference picture index, and 3) an inter prediction indicator.
  • the prediction indicator may be unidirectional (L0 prediction, L1 prediction) or bidirectional.
  • the merge candidate list may represent a list in which motion information is stored.
  • the motion information stored in the merge candidate list may include motion information of a neighboring block adjacent to the current block (spatial merge candidate), motion information of the block collocated with the current block in the reference picture (temporal merge candidate), new motion information generated by combining motion information already present in the merge candidate list (combined merge candidate), and a zero merge candidate.
  • the encoding apparatus 100 may generate a bitstream by entropy encoding at least one of a merge flag and a merge index, and may signal the decoding apparatus 200.
  • the merge flag may be information indicating whether to perform the merge mode for each block.
  • the merge index may be information indicating which of the neighboring blocks adjacent to the current block the current block is merged with.
  • the neighboring blocks of the current block may include at least one of a left neighboring block, a top neighboring block, and a temporal neighboring block of the current block.
  • the skip mode may be a mode in which motion information of a neighboring block is applied to the current block as it is.
  • the encoding apparatus 100 may entropy-encode information indicating which block's motion information is to be used as the motion information of the current block and signal it to the decoding apparatus 200 through the bitstream. In this case, the encoding apparatus 100 may not signal the syntax element regarding at least one of the motion vector difference information, the coding block flag, and the transform coefficient level to the decoding apparatus 200.
  • the current picture reference mode may mean a prediction mode using a pre-restored region in the current picture to which the current block belongs. In this case, a vector may be defined to specify the pre-restored region.
  • Whether the current block is encoded in the current picture reference mode may be encoded using a reference picture index of the current block.
  • a flag or index indicating whether the current block is a block encoded in the current picture reference mode may be signaled or may be inferred through the reference picture index of the current block.
  • when the current block is encoded in the current picture reference mode, the current picture may be added at a fixed position or an arbitrary position in the reference picture list for the current block.
  • the fixed position may be, for example, a position at which the reference picture index is 0 or the last position.
  • a separate reference image index indicating the arbitrary position may be signaled.
  • FIG. 5 is a flowchart illustrating an image encoding method according to an embodiment of the present invention
  • FIG. 6 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
  • the encoding apparatus may derive a motion vector candidate (S501), and generate a motion vector candidate list based on the derived motion vector candidate (S502).
  • a motion vector may be determined using the generated motion vector candidate list (S503), and motion compensation may be performed using the motion vector (S504).
  • the encoding apparatus may entropy-encode the information on the motion compensation (S505).
  • the decoding apparatus may entropy decode information about motion compensation received from the encoding apparatus (S601) and derive a motion vector candidate (S602).
  • the decoding apparatus may generate a motion vector candidate list based on the derived motion vector candidate (S603), and determine the motion vector using the generated motion vector candidate list (S604). Thereafter, the decoding apparatus may perform motion compensation by using the motion vector (S605).
  • FIG. 7 is a flowchart illustrating a video encoding method according to another embodiment of the present invention
  • FIG. 8 is a flowchart illustrating a video decoding method according to another embodiment of the present invention.
  • the encoding apparatus may derive a merge candidate (S701) and generate a merge candidate list based on the derived merge candidate.
  • motion information may be determined using the generated merge candidate list (S702), and motion compensation of the current block may be performed using the determined motion information (S703).
  • the encoding apparatus may entropy-encode the information on the motion compensation (S704).
  • the decoding apparatus may entropy decode information about motion compensation received from the encoding apparatus (S801), derive a merge candidate (S802), and generate a merge candidate list based on the derived merge candidate.
  • the motion information of the current block may be determined using the generated merge candidate list (S803). Thereafter, the decoding apparatus may perform motion compensation using the motion information (S804).
  • FIGS. 7 and 8 may be examples of applying the merge mode described with reference to FIG. 4.
  • the motion vector candidate for the current block may include at least one of a spatial motion vector candidate or a temporal motion vector candidate.
  • the spatial motion vector of the current block can be derived from the reconstructed block around the current block.
  • a motion vector of a reconstructed block around the current block may be determined as a spatial motion vector candidate for the current block.
  • FIG. 9 is a diagram for describing an example of deriving a spatial motion vector candidate of a current block.
  • the spatial motion vector candidate of the current block may be derived from neighboring blocks adjacent to the current block (X).
  • the neighboring block adjacent to the current block may include at least one of a block B1 adjacent to the top of the current block, a block A1 adjacent to the left of the current block, a block B0 adjacent to the upper right corner of the current block, a block B2 adjacent to the upper left corner of the current block, and a block A0 adjacent to the lower left corner of the current block.
  • a neighboring block adjacent to the current block may have a square shape or a non-square shape.
  • the motion vector of the neighboring block may be determined as a spatial motion vector candidate of the current block. Whether the motion vector of the neighboring block exists, or whether the motion vector of the neighboring block is available as a spatial motion vector candidate of the current block, may be determined based on whether the neighboring block exists and whether the neighboring block is encoded through inter prediction. In this case, the existence or availability of the motion vector of the neighboring block may be determined according to a predetermined priority. For example, in the example shown in FIG. 9, the availability of the motion vector may be determined in the order of the blocks at positions A0, A1, B0, B1, and B2.
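  • A minimal sketch of the availability check in the order A0, A1, B0, B1, B2 mentioned above is given below; the dictionary representation of neighboring blocks and the two-candidate limit are assumptions made only for illustration.

```python
# Sketch: scan neighboring blocks in a fixed priority order and collect
# spatial motion vector candidates from blocks that exist and are inter-coded.

def spatial_mv_candidates(neighbours, order=("A0", "A1", "B0", "B1", "B2"),
                          max_candidates=2):
    candidates = []
    for pos in order:
        blk = neighbours.get(pos)
        if blk is None or not blk.get("inter", False):
            continue                       # missing or intra-coded neighbour
        candidates.append(blk["mv"])       # motion vector usable as candidate
        if len(candidates) == max_candidates:
            break
    return candidates

neighbours = {"A1": {"inter": True, "mv": (2, -1)},
              "B1": {"inter": False},
              "B0": {"inter": True, "mv": (0, 3)}}
print(spatial_mv_candidates(neighbours))   # [(2, -1), (0, 3)]
```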
  • alternatively, a scaled version of the motion vector of the neighboring block may be determined as the spatial motion vector candidate of the current block.
  • the scaling may be performed based on at least one of the distance between the current image and the reference image referenced by the current block and the distance between the current image and the reference image referenced by the neighboring block.
  • the spatial motion vector candidate of the current block may be derived by scaling the motion vector of the neighboring block according to the ratio of the distance between the current picture and the reference picture referenced by the current block to the distance between the current picture and the reference picture referenced by the neighboring block.
  • alternatively, the motion vector of the neighboring block may be scaled and used as the spatial motion vector candidate of the current block. Even in this case, scaling may be performed based on at least one of the distance between the current picture and the reference picture referenced by the current block and the distance between the current picture and the reference picture referenced by the neighboring block.
  • a motion vector of a neighboring block may be scaled based on a reference picture indicated by a reference picture index having a predefined value and determined as a spatial motion vector candidate.
  • the predefined value may be a positive integer including 0.
  • in this case, the scaling may be performed based on the distance between the current picture and the reference picture of the current block indicated by the reference picture index having the predefined value and the distance between the current picture and the reference picture of the neighboring block.
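  • The distance-based scaling described in the preceding items can be pictured with the short sketch below; the POC-based distance computation and the rounding are assumptions made for illustration rather than a quotation of this disclosure.

```python
# Sketch: scale a neighboring block's motion vector by the ratio of picture
# distances (current picture to the current block's reference picture versus
# current picture to the neighboring block's reference picture).

def scale_mv(mv, poc_cur, poc_cur_ref, poc_nb_ref):
    tb = poc_cur - poc_cur_ref          # distance for the current block
    td = poc_cur - poc_nb_ref           # distance for the neighboring block
    if td == 0 or tb == td:
        return mv                       # identical distances: no scaling
    scale = tb / td
    return (round(mv[0] * scale), round(mv[1] * scale))

# Neighbour references a picture 4 away, current block references one 2 away.
print(scale_mv((8, -4), poc_cur=16, poc_cur_ref=14, poc_nb_ref=12))  # (4, -2)
```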
  • a spatial motion vector candidate of the current block may be derived based on at least one or more of encoding parameters of the current block.
  • the temporal motion vector candidate of the current block may be derived from a reconstructed block included in a co-located picture of the current picture.
  • the corresponding location image is an image in which encoding / decoding is completed before the current image, and may be an image having a temporal order different from that of the current image.
  • FIG. 10 is a diagram for describing an example of deriving a temporal motion vector candidate of a current block.
  • the temporal motion vector candidate of the current block may be derived from a block including a position inside or outside the block in the co-located picture that corresponds to the current block.
  • the temporal motion vector candidate may mean a motion vector of the corresponding location block.
  • the temporal motion vector candidate of the current block X may be derived from a block H adjacent to the lower left corner of the block C corresponding to the same spatial position as the current block, or from a block C3 including the center point of the block C.
  • a block H or a block C3 used to derive a temporal motion vector candidate of the current block may be referred to as a 'collocated block'.
  • At least one of a temporal motion vector candidate, a corresponding position image, a corresponding position block, a prediction list utilization flag, and a reference image index may be derived based on at least one or more of coding parameters.
  • the temporal motion vector candidate of the current block may be obtained by scaling the motion vector of the corresponding position block.
  • the scaling may be performed based on at least one of the distance between the current image and the reference image referenced by the current block and the distance between the corresponding position image and the reference image referenced by the corresponding position block.
  • the temporal motion vector candidate of the current block may be derived by scaling the motion vector of the corresponding position block according to the ratio of the distance between the current picture and the reference picture referenced by the current block to the distance between the corresponding position picture and the reference picture referenced by the corresponding position block.
  • Generating the motion vector candidate list may include adding or removing the motion vector candidate to the motion vector candidate list and adding the combined motion vector candidate to the motion vector candidate list.
  • the encoding apparatus and the decoding apparatus may add the derived motion vector candidate to the motion vector candidate list in the order of derivation of the motion vector candidate.
  • the motion vector candidate list mvpListLX is assumed to mean a motion vector candidate list corresponding to the reference picture lists L0, L1, L2, and L3.
  • a motion vector candidate list corresponding to L0 in the reference picture list may be referred to as mvpListL0.
  • a motion vector having a predetermined value other than the spatial motion vector candidate and the temporal motion vector candidate may be added to the motion vector candidate list. For example, when the number of motion vector candidates included in the motion vector list is smaller than the maximum number of motion vector candidates, a motion vector having a value of 0 may be added to the motion vector candidate list.
  • a combined motion vector candidate may be added to the motion vector candidate list using at least one of the motion vector candidates already included in the motion vector candidate list.
  • for example, a combined motion vector candidate may be generated using at least one of the spatial motion vector candidate, the temporal motion vector candidate, and the zero motion vector candidate included in the motion vector candidate list, and the generated combined motion vector candidate may be included in the motion vector candidate list.
  • the combined motion vector candidate may be generated based on at least one or more of the encoding parameters, or the combined motion vector candidate may be added to the motion vector candidate list based on at least one or more of the encoding parameters.
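  • The list construction described above (spatial and temporal candidates first, then zero-valued candidates until the maximum size is reached) could look like the following sketch; the maximum list size of 2 and the simple de-duplication are illustrative assumptions.

```python
# Sketch: build a motion vector candidate list (e.g. mvpListL0) from the
# derived candidates and pad it with (0, 0) vectors up to the maximum size.

def build_mvp_list(spatial, temporal, max_size=2):
    mvp_list = []
    for mv in spatial + temporal:          # keep the derivation order
        if mv not in mvp_list:             # avoid duplicate candidates
            mvp_list.append(mv)
        if len(mvp_list) == max_size:
            return mvp_list
    while len(mvp_list) < max_size:        # fill with zero motion vectors
        mvp_list.append((0, 0))
    return mvp_list

print(build_mvp_list(spatial=[(2, -1)], temporal=[]))   # [(2, -1), (0, 0)]
```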
  • the motion vector candidate indicated by the motion vector candidate index among the motion vector candidates included in the motion vector candidate list may be determined as a predicted motion vector for the current block.
  • the encoding apparatus may calculate a difference between the motion vector and the predicted motion vector, and calculate a motion vector difference value.
  • the decoding apparatus may calculate a motion vector by adding the predicted motion vector and the motion vector difference.
  • the merge candidate for the current block may include at least one of a spatial merge candidate, a temporal merge candidate, or an additional merge candidate.
  • deriving a spatial merge candidate may mean deriving a spatial merge candidate and adding it to the merge candidate list.
  • the spatial merge candidate of the current block may be derived from neighboring blocks adjacent to the current block (X).
  • the neighboring block adjacent to the current block may include at least one of the block B1 adjacent to the top of the current block, the block A1 adjacent to the left of the current block, the block B0 adjacent to the upper right corner of the current block, the block B2 adjacent to the upper left corner of the current block, and the block A0 adjacent to the lower left corner of the current block.
  • in order to derive the spatial merge candidate of the current block, it may be determined whether a neighboring block adjacent to the current block can be used for deriving the spatial merge candidate of the current block.
  • whether a neighboring block adjacent to the current block can be used for deriving a spatial merge candidate of the current block may be determined according to a predetermined priority. For example, in the example illustrated in FIG. 9, spatial merge candidate derivation availability may be determined in the order of blocks of positions A1, B1, B0, A0, and B2. The spatial merge candidates determined based on the availability determination order may be sequentially added to the merge candidate list of the current block.
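  • A small sketch of adding spatial merge candidates in the order A1, B1, B0, A0, B2 follows; the dictionary form of the motion information (motion vector, reference index, prediction direction) and the availability test are illustrative assumptions.

```python
# Sketch: append available spatial merge candidates to the merge candidate
# list in the priority order A1, B1, B0, A0, B2.

def add_spatial_merge_candidates(merge_list, neighbours,
                                 order=("A1", "B1", "B0", "A0", "B2")):
    for pos in order:
        blk = neighbours.get(pos)
        if blk is None or not blk.get("inter", False):
            continue                        # block not usable for derivation
        cand = {"mv": blk["mv"], "ref_idx": blk["ref_idx"], "dir": blk["dir"]}
        if cand not in merge_list:          # skip duplicated motion information
            merge_list.append(cand)
    return merge_list

neighbours = {"A1": {"inter": True, "mv": (1, 0), "ref_idx": 0, "dir": "L0"},
              "B0": {"inter": True, "mv": (1, 0), "ref_idx": 0, "dir": "L0"}}
print(add_spatial_merge_candidates([], neighbours))   # one candidate kept
```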
  • FIG. 11 illustrates an example in which a spatial merge candidate is added to a merge candidate list.
  • the derived spatial merge candidates may be sequentially added to the merge candidate list.
  • the spatial merge candidate may be derived based on at least one of encoding parameters.
  • the motion information of the spatial merge candidate may include not only L0 and L1 motion information but also three or more pieces of motion information, such as L2 and L3 motion information.
  • the reference picture list may include at least one of L0, L1, L2, and L3.
  • the temporal merge candidate of the current block may be derived from a reconstructed block included in a co-located picture of the current picture.
  • the corresponding location image is an image in which encoding / decoding is completed before the current image, and may be an image having a temporal order different from that of the current image.
  • Deriving a temporal merge candidate may mean deriving a temporal merge candidate and adding it to the merge candidate list.
  • the temporal merge candidate of the current block may be derived from a block including a position inside or outside the block in the co-located picture that corresponds to the current block.
  • the temporal merge candidate may mean motion information of the corresponding location block.
  • the temporal merge candidate of the current block X may be derived from a block H adjacent to the lower left corner of the block C corresponding to the same spatial position as the current block, or from a block C3 including the center point of the block C.
  • a block H or a block C3 used to derive a temporal merge candidate of the current block may be referred to as a 'collocated block'.
  • when the temporal merge candidate of the current block can be derived from the block H, which includes a position outside the block C,
  • the block H may be set as the corresponding position block of the current block.
  • the temporal merge candidate of the current block may be derived based on the motion information of the block H.
  • block C3 including an internal position of block C may be set as a corresponding position block of the current block.
  • the temporal merge candidate of the current block may be derived based on the motion information of the block C3.
  • when the temporal merge candidate of the current block cannot be derived from the block H or the block C3, the temporal merge candidate for the current block may not be derived, or may be derived from a block at a position other than the block H and the block C3.
  • the temporal merge candidate of the current block may be derived from a plurality of blocks in the corresponding position image.
  • a plurality of temporal merge candidates for the current block may be derived from block H and block C3.
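  • The fallback between block H and block C3 described above can be sketched as follows; the availability test, the dictionary form of the blocks, and the reference picture index of 0 are assumptions made only for illustration.

```python
# Sketch: derive the temporal merge candidate from block H when it is usable,
# otherwise fall back to block C3 inside the co-located block.

def temporal_merge_candidate(block_h, block_c3):
    for col_block in (block_h, block_c3):           # try H first, then C3
        if col_block is not None and col_block.get("inter", False):
            return {"mv": col_block["mv"], "ref_idx": 0}
    return None                                      # no temporal candidate

block_h = None                                       # e.g. H is not available
block_c3 = {"inter": True, "mv": (6, -3)}
print(temporal_merge_candidate(block_h, block_c3))   # derived from C3
```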
  • FIG. 12 illustrates an example in which a temporal merge candidate is added to a merge candidate list.
  • the derived temporal merge candidate may be added to the merge candidate list.
  • the motion vector of the temporal merge candidate of the current block may be obtained by scaling the motion vector of the corresponding position block.
  • the scaling may be performed based on at least one of the distance between the current picture and the reference picture referenced by the current block and the distance between the corresponding position picture and the reference picture referenced by the corresponding position block.
  • by scaling the motion vector of the corresponding position block according to the ratio of these two distances, the motion vector of the temporal merge candidate of the current block can be derived.
  • At least one of a temporal merge candidate, a corresponding location image, a corresponding location block, a prediction list utilization flag, and a reference picture index may be derived based on at least one of encoding parameters of a current block, a neighboring block, or a corresponding location block.
  • the merge candidate list may be generated by adding the derived merge candidates to the merge candidate list in the order in which they are derived.
  • the additional merge candidate may mean at least one of a modified spatial merge candidate, a modified temporal merge candidate, a combined merge candidate, and a merge candidate having a predetermined motion information value.
  • deriving an additional merge candidate may mean deriving an additional merge candidate and adding it to the merge candidate list.
  • the modified spatial merge candidate may mean a merge candidate in which at least one piece of the motion information of the derived spatial merge candidate is changed.
  • the modified temporal merge candidate may mean a merge candidate in which at least one piece of the motion information of the derived temporal merge candidate is changed.
  • the combined merge candidate may mean a merge candidate derived by combining the motion information of at least one of a spatial merge candidate, a temporal merge candidate, a modified spatial merge candidate, a modified temporal merge candidate, a combined merge candidate, and a merge candidate having a predetermined motion information value, which exist in the merge candidate list.
  • alternatively, the combined merge candidate may mean a merge candidate derived by combining at least one piece of motion information among a spatial merge candidate and a temporal merge candidate derived from a block that does not exist in the merge candidate list but from which at least one of a spatial merge candidate and a temporal merge candidate can be derived, a modified spatial merge candidate and a modified temporal merge candidate generated therefrom, a combined merge candidate, and a merge candidate having a predetermined motion information value.
  • the combined merge candidate may be derived using motion information entropy decoded from the bitstream in the decoder.
  • in the encoder, the motion information used for deriving the combined merge candidate may be entropy encoded into the bitstream.
  • the combined merge candidate may mean a combined bi-prediction merge candidate.
  • the combined bi-prediction merge candidate is a merge candidate using bi-prediction, and may mean a merge candidate having L0 motion information and L1 motion information.
  • the merge candidate having a predetermined motion information value may mean a zero merge candidate having a motion vector of (0, 0). Meanwhile, the merge candidate having a predetermined motion information value may be preset to use the same value in the encoding apparatus and the decoding apparatus.
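  • As a sketch of the additional candidates just described, the following pairs L0 motion information of one listed candidate with L1 motion information of another and then pads the list with zero merge candidates; the maximum list size of 5 and the pairing order are illustrative assumptions.

```python
# Sketch: fill the merge candidate list with combined bi-prediction merge
# candidates and zero merge candidates until it reaches max_size.

def fill_merge_list(merge_list, max_size=5):
    base = list(merge_list)
    for a in base:                      # combined bi-prediction candidates
        for b in base:
            if len(merge_list) >= max_size:
                break
            if a is not b and "L0" in a and "L1" in b:
                merge_list.append({"L0": a["L0"], "L1": b["L1"]})
    while len(merge_list) < max_size:   # zero merge candidates, MV (0, 0)
        merge_list.append({"L0": {"mv": (0, 0), "ref_idx": 0}})
    return merge_list

cands = [{"L0": {"mv": (1, 2), "ref_idx": 0}},
         {"L1": {"mv": (-3, 0), "ref_idx": 1}}]
print(len(fill_merge_list(cands)))      # 5
```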
  • the size of the merge candidate list may be determined based on encoding parameters of the current block, neighboring blocks, or corresponding position blocks, and the size may be changed based on the encoding parameters.
  • the encoder may determine a merge candidate used for motion compensation among merge candidates in the merge candidate list through motion estimation, and may encode a merge candidate index (merge_idx) indicating the determined merge candidate in the bitstream.
  • the encoder may determine the motion information of the current block by selecting a merge candidate from the merge candidate list based on the merge candidate index described above to generate the prediction block.
  • the prediction block of the current block may be generated by performing motion compensation based on the determined motion information.
  • the decoder may decode the merge candidate index in the bitstream to determine the merge candidate in the merge candidate list indicated by the merge candidate index.
  • the determined merge candidate may be determined as motion information of the current block.
  • the determined motion information is used for motion compensation of the current block. In this case, the motion compensation may be the same as the meaning of inter prediction.
  • the encoding apparatus and the decoding apparatus may calculate the motion vector using the predicted motion vector and the motion vector difference value. Once the motion vector is calculated, inter prediction or motion compensation may be performed using the calculated motion vector (S504 and S605).
  • the encoding apparatus and the decoding apparatus may perform inter prediction or motion compensation by using the determined motion information (S703 and S804).
  • the current block may have motion information of the determined merge candidate.
  • the current block may have at least one to N motion vectors according to the prediction direction. Using the motion vector, at least one to N prediction blocks may be generated to derive the last prediction block of the current block.
  • the prediction block generated using the motion vector may be determined as the final prediction block of the current block.
  • a plurality of prediction blocks may be generated using the plurality of motion vectors (or motion information), and the final prediction block of the current block may be determined based on the weighted sum of the plurality of prediction blocks.
  • Reference pictures including each of a plurality of prediction blocks indicated by a plurality of motion vectors (or motion information) may be included in different reference picture lists or may be included in the same reference picture list.
  • a plurality of prediction blocks may be generated based on at least one of a spatial motion vector candidate, a temporal motion vector candidate, a motion vector having a predetermined value, or a combined motion vector candidate, and the final prediction block of the current block may be determined based on a weighted sum of the plurality of prediction blocks.
  • a plurality of prediction blocks may be generated based on motion vector candidates indicated by a preset motion vector candidate index, and the final prediction block of the current block may be determined based on a weighted sum of the plurality of prediction blocks.
  • a plurality of prediction blocks may be generated based on motion vector candidates existing in a preset motion vector candidate index range, and a final prediction block of the current block may be determined based on a weighted sum of the plurality of prediction blocks.
  • the weight applied to each prediction block may have a value equal to 1 / N (where N is the number of generated prediction blocks). For example, when two prediction blocks are generated, the weight applied to each prediction block is 1/2, and when three prediction blocks are generated, the weight applied to each prediction block is 1/3 and four predictions When the block is generated, the weight applied to each prediction block may be 1/4. Alternatively, different weights may be assigned to each prediction block to determine a final prediction block of the current block.
  • the weight does not have to have a fixed value for each prediction block, and may have a variable value for each prediction block.
  • weights applied to each prediction block may be the same or different from each other.
  • the weights applied to the two prediction blocks may be unequal values for each block, such as (1/3, 2/3), (1/4, 3/4), (2/5, 3/5), or (3/8, 5/8), as well as (1/2, 1/2).
  • the weight may be a value of a positive real number or a value of a negative real number.
  • a negative real value may be included, such as (-1/2, 3/2), (-1/3, 4/3), (-1/4, 5/4), and the like.
  • one or more weight information for the current block may be signaled through the bitstream.
  • the weight information may be signaled for each prediction block or for each reference picture. It is also possible for a plurality of prediction blocks to share one weight information.
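  • A minimal sketch of the weighted combination of several prediction blocks is shown below; representing prediction blocks as Python lists of sample rows and defaulting to equal 1/N weights are the only assumptions beyond the text.

```python
# Sketch: combine N prediction blocks into the final prediction block using
# per-block weights (equal weights of 1/N by default, as described above).

def weighted_final_prediction(pred_blocks, weights=None):
    n = len(pred_blocks)
    if weights is None:
        weights = [1.0 / n] * n                    # equal weights 1/N
    height, width = len(pred_blocks[0]), len(pred_blocks[0][0])
    final = [[0.0] * width for _ in range(height)]
    for pred, w in zip(pred_blocks, weights):
        for y in range(height):
            for x in range(width):
                final[y][x] += w * pred[y][x]
    return final

p0 = [[100, 102], [104, 106]]
p1 = [[110, 108], [106, 104]]
print(weighted_final_prediction([p0, p1], weights=[1/3, 2/3]))
```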
  • the encoding apparatus and the decoding apparatus may determine whether to use the predicted motion vector (or motion information) based on the prediction block list utilization flag. For example, when the prediction block list utilization flag indicates the first value of 1 for each reference picture list, it may indicate that the encoding apparatus and the decoding apparatus use the predicted motion vector of the current block to perform inter prediction or motion compensation, and when it indicates the second value of 0, it may indicate that inter prediction or motion compensation is not performed using the predicted motion vector of the current block. Meanwhile, the first value of the prediction block list utilization flag may be set to 0 and the second value may be set to 1.
  • Equation 1 to Equation 3 below show examples of generating the final prediction block of the current block when the inter prediction indicator of the current block is PRED_BI, PRED_TRI, or PRED_QUAD, respectively, and the prediction direction for each reference picture list is unidirectional.
  • P_BI, P_TRI, and P_QUAD may represent final prediction blocks of the current block
  • WF_LX may indicate a weight value of the prediction block generated using LX
  • OFFSET_LX may indicate an offset value for the prediction block generated using LX
  • P_LX means a prediction block generated using a motion vector (or motion information) for LX of the current block.
  • RF means a rounding factor and may be set to 0, positive or negative.
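  • Since the equations themselves are not reproduced in this excerpt, the following sketch only illustrates how a PRED_BI-style sample could combine P_L0 and P_L1 with weights (WF_LX), offsets (OFFSET_LX), and a rounding factor (RF); the normalization by a right shift of 1 is an assumed form and is not a quotation of Equation 1.

```python
# Sketch of a PRED_BI-style per-sample combination; the ">> 1" normalization
# and the default parameter values are assumptions for illustration.

def pred_bi_sample(p_l0, p_l1, wf_l0=1, wf_l1=1, off_l0=0, off_l1=0, rf=1):
    return (wf_l0 * p_l0 + off_l0 + wf_l1 * p_l1 + off_l1 + rf) >> 1

print(pred_bi_sample(100, 110))                      # -> 105
print(pred_bi_sample(100, 110, off_l0=4, off_l1=4))  # offsets shift the result
```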
  • the LX reference picture list may include at least one of a long-term reference picture, a reference picture to which no deblocking filter is applied, a reference picture to which no sample adaptive offset is applied, a reference picture to which no adaptive loop filter is applied, a reference picture to which only the deblocking filter and the adaptive offset are applied, a reference picture to which only the deblocking filter and the adaptive loop filter are applied, a reference picture to which only the sample adaptive offset and the adaptive loop filter are applied, and a reference picture to which the deblocking filter, the sample adaptive offset, and the adaptive loop filter are all applied.
  • the LX reference picture list may be at least one of an L2 reference picture list and an L3 reference picture list.
  • the final prediction block for the current block may be obtained based on the weighted sum of the prediction blocks.
  • the weights applied to the prediction blocks derived from the same reference picture list may have the same value or may have different values.
  • At least one of the weights WF_LX and the offset OFFSET_LX for the plurality of prediction blocks may be an encoding parameter that is entropy encoded / decoded.
  • weights and offsets may be derived from encoded / decoded neighboring blocks around the current block.
  • the neighboring block around the current block may include at least one of a block used to derive the spatial motion vector candidate of the current block or a block used to derive the temporal motion vector candidate of the current block.
  • the weight and offset may be determined based on a display order (POC) of the current picture and each reference picture.
  • for example, as the distance between the current picture and the reference picture (the display order difference) increases, the weight or offset may be set to a smaller value, and as the distance between the current picture and the reference picture becomes closer, the weight or offset may be set to a larger value.
  • the weight or offset value may have an inverse relationship with the display order difference between the current image and the reference image.
  • alternatively, the weight or offset value may be proportional to the display order difference between the current picture and the reference picture.
  • At least one or more of the weight or offset may be entropy encoded / decoded.
  • the weighted sum of the prediction blocks may be calculated based on at least one of the encoding parameters.
  • the weighted sum of the plurality of prediction blocks may be applied only in some regions within the prediction block.
  • the partial region may be a region corresponding to a boundary in the prediction block.
  • the weighted sum may be performed in sub-block units of the prediction block.
  • within a block of the block size indicated by region information, inter prediction or motion compensation may be performed using the same prediction block or the same final prediction block for the lower blocks having a smaller block size.
  • likewise, within a block of the block depth indicated by region information, inter prediction or motion compensation may be performed using the same prediction block or the same final prediction block for the lower blocks having a deeper block depth.
  • the weighted sum may be calculated using at least one or more motion vector candidates present in the motion vector candidate list and used as the final prediction block of the current block.
  • prediction blocks may be generated only with spatial motion vector candidates, a weighted sum of the prediction blocks may be used, and the calculated weighted sum may be used as the final prediction block of the current block.
  • prediction blocks may be generated from the spatial motion vector candidates and the temporal motion vector candidates, a weighted sum of the prediction blocks may be used, and the calculated weighted sum may be used as the final prediction block of the current block.
  • prediction blocks may be generated only with combined motion vector candidates, a weighted sum of the prediction blocks may be used, and the calculated weighted sum may be used as the final prediction block of the current block.
  • prediction blocks may be generated only with motion vector candidates having specific motion vector candidate indices, the weighted sum of the prediction blocks may be used, and the calculated weighted sum may be used as the final prediction block of the current block.
  • prediction blocks may be generated only with motion vector candidates existing within a specific motion vector candidate index range, the weighted sum of the prediction blocks may be used, and the calculated weighted sum may be used as the final prediction block of the current block.
  • the weighted sum may be calculated using at least one merge candidate present in the merge candidate list and used as the final prediction block of the current block.
  • prediction blocks may be generated only with spatial merge candidates, a weighted sum of the prediction blocks may be used, and the calculated weighted sum may be used as the final prediction block of the current block.
  • prediction blocks may be generated from spatial merge candidates and temporal merge candidates, a weighted sum of the prediction blocks may be used, and the calculated weighted sum may be used as the final prediction block of the current block.
  • prediction blocks may be generated only with combined merge candidates, a weighted sum of the prediction blocks may be used, and the calculated weighted sum may be used as the final prediction block of the current block.
  • prediction blocks may be generated only with merge candidates having specific merge candidate indices, the weighted sum of the prediction blocks may be used, and the calculated weighted sum may be used as the final prediction block of the current block.
  • prediction blocks may be generated only with merge candidates existing within a specific merge candidate index range, the weighted sum of the prediction blocks may be used, and the calculated weighted sum may be used as the final prediction block of the current block.
  • the encoder and the decoder may perform motion compensation by using motion vectors / information of the current block.
  • the final prediction block resulting from the motion compensation may be generated using at least one or more prediction blocks.
  • the current block may mean at least one of a current coding block and a current prediction block.
  • the final prediction block may be generated by performing overlapped block motion compensation on the region corresponding to the boundary of the current block.
  • the area corresponding to the boundary in the current block may be an area in the current block adjacent to the boundary of the neighboring block of the current block.
  • the area corresponding to the boundary in the current block is one of the upper boundary area, the left boundary area, the lower boundary area, the right boundary area, the upper right corner area, the lower right corner area, the upper left corner area, and the lower left corner area of the current block. It may include at least one.
  • the region corresponding to the boundary in the current block may be a region corresponding to a part of the prediction block of the current block.
  • the overlapped block motion compensation may mean performing motion compensation by calculating, for the region corresponding to the boundary within the current block, a weighted sum of the prediction block generated using the motion information of the current block and a prediction block generated using the motion information of a block encoded/decoded adjacent to the current block.
  • the weighted summation may be performed in units of sub-blocks after dividing the current block into a plurality of sub-blocks. That is, motion compensation may be performed using motion information of a block encoded / decoded adjacent to the current block in lower block units.
  • the lower block may mean a sub block.
  • the weighted sum calculation may use a first prediction block generated in units of lower blocks using motion information of the current block and a second prediction block generated using motion information of neighboring lower blocks spatially adjacent to the current block.
  • using motion information may mean deriving motion information.
  • the first prediction block may mean a prediction block generated using motion information of a lower block to be encoded / decoded in the current block.
  • the second prediction block may also mean a prediction block generated using motion information of a neighboring lower block spatially adjacent to the encoding / decoding target lower block in the current block.
  • the final prediction block may be generated using a weighted sum of the first prediction block and the second prediction block. That is, the overlapped block motion compensation may generate the final prediction block by using motion information of another block in addition to the motion information of the current block.
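  • The sub-block level weighted sum just described might be sketched as follows; the fixed 3/4 and 1/4 weights, the list representation of the prediction blocks, and the function name are assumptions made only for illustration.

```python
# Sketch of overlapped block motion compensation for one lower block: blend
# the first prediction block (generated with the current block's motion
# information) with the second prediction block (generated with a neighboring
# lower block's motion information) using an assumed fixed weight pair.

def obmc_blend(first_pred, second_pred, w_first=0.75, w_second=0.25):
    height, width = len(first_pred), len(first_pred[0])
    return [[w_first * first_pred[y][x] + w_second * second_pred[y][x]
             for x in range(width)] for y in range(height)]

first  = [[100, 100], [100, 100]]    # from the current block's motion info
second = [[120, 120], [120, 120]]    # from the neighboring lower block's info
print(obmc_blend(first, second))     # [[105.0, 105.0], [105.0, 105.0]]
```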
  • for example, when the prediction mode of the current block is at least one of an Advanced Motion Vector Prediction (AMVP) mode, a merge mode, an affine motion compensation mode, a decoder-side motion vector derivation mode, an adaptive motion vector resolution mode, a local illumination compensation mode, and a bidirectional optical flow mode, the current prediction block may be divided into lower blocks and then overlapped block motion compensation may be performed for each lower block.
  • in addition, overlapped block motion compensation may be performed on at least one of an Advanced Temporal Motion Vector Predictor (ATMVP) candidate and a Spatial-Temporal Motion Vector Predictor (STMVP) candidate.
  • the encoding apparatus may entropy encode information about motion compensation through a bitstream, and the decoding apparatus may entropy decode information about motion compensation included in the bitstream.
  • the entropy encoded/decoded information on motion compensation may include at least one of an inter prediction indicator (inter_pred_idc), reference picture indices (ref_idx_l0, ref_idx_l1, ref_idx_l2, ref_idx_l3), motion vector candidate indices (mvp_l0_idx, mvp_l1_idx, mvp_l2_idx, mvp_l3_idx), a motion vector difference, skip mode use information (cu_skip_flag), merge mode use information (merge_flag), merge index information (merge_index), and weight values (wf_l0, wf_l1, wf_l2, wf_l3).
  • when the current block is encoded/decoded by inter prediction, the inter prediction indicator may mean at least one of the inter prediction direction or the number of prediction directions of the current block.
  • the inter-prediction indicator may indicate unidirectional prediction or multi-directional prediction such as bidirectional prediction, three-way prediction, or four-direction prediction.
  • the inter prediction indicator may mean the number of reference pictures that the current block uses when generating the prediction block. Alternatively, one reference picture may be used for prediction in a plurality of directions. In this case, prediction in N directions (N > M) may be performed using M reference pictures.
  • the inter prediction indicator may also mean the number of prediction blocks used when performing inter prediction or motion compensation on the current block.
  • the inter prediction indicator may indicate unidirectional (PRED_LX), bidirectional (PRED_BI), three-directional (PRED_TRI), four-directional (PRED_QUAD), or more prediction directions according to the number of prediction directions of the current block.
  • the prediction list utilization flag indicates whether a prediction block is generated using the corresponding reference picture list.
  • when the prediction list utilization flag indicates the first value of 1, it may indicate that the prediction block can be generated using the corresponding reference picture list, and when it indicates the second value of 0, it may indicate that the prediction block is not generated using the corresponding reference picture list.
  • the first value of the prediction list utilization flag may be set to 0 and the second value may be set to 1.
  • the prediction block of the current block may be generated using motion information corresponding to the reference picture list.
  • the reference picture index may specify a reference picture referenced by the current block in each reference picture list.
  • One or more reference picture indexes may be entropy encoded / decoded for each reference picture list.
  • the current block may perform motion compensation using one or more reference picture indexes.
  • the motion vector candidate index indicates a motion vector candidate for the current block in the motion vector candidate list generated for each reference picture list or reference picture index. At least one motion vector candidate index may be entropy encoded / decoded for each motion vector candidate list.
  • the current block may perform motion compensation using at least one motion vector candidate index.
  • the motion vector difference represents a difference value between the motion vector and the predicted motion vector.
  • One or more motion vector differences may be entropy encoded / decoded with respect to the motion vector candidate list generated for each reference picture list or reference picture index for the current block.
  • the current block may perform motion compensation using one or more motion vector differences.
  • the skip mode usage information (cu_skip_flag) may indicate the use of the skip mode when the first value is 1, and may not indicate the use of the skip mode when the second value is 0. Based on whether the skip mode is used, motion compensation of the current block may be performed using the skip mode.
  • the merge mode use information may indicate the use of the merge mode when the first value is 1, and may not indicate the use of the merge mode when the second value has 0. Based on whether the merge mode is used, motion compensation of the current block may be performed using the merge mode.
  • the merge index information merge_index may mean information indicating a merge candidate in a merge candidate list.
  • the merge index information may mean information on a merge index.
  • the merge index information may indicate a block in which a merge candidate is derived among blocks reconstructed adjacent to the current block in a spatial / temporal manner.
  • the merge index information may indicate at least one or more of the motion information that the merge candidate has.
  • for example, when the merge index information has a first value of 0, it may indicate the first merge candidate in the merge candidate list; when it has a second value of 1, it may indicate the second merge candidate; and when it has a third value of 2, it may indicate the third merge candidate.
  • likewise, when the merge index information has a fourth to N-th value, it may indicate the merge candidate corresponding to that value according to the order in the merge candidate list.
  • N may mean a positive integer including 0.
  • the motion compensation of the current block may be performed using the merge mode.
  • a final prediction block for the current block may be generated through a weighted sum of each prediction block.
  • as many weighting factors used for the weighted sum operation may be entropy encoded/decoded as at least one of the number of reference picture lists, reference pictures, motion vector candidate indices, motion vector differences, motion vectors, skip mode use information, merge mode use information, and merge index information.
  • the weighting factor of each prediction block may be entropy encoded / decoded based on the inter prediction prediction indicator.
  • the weighting factor may include at least one of a weight and an offset.
  • Information on motion compensation may be entropy encoded / decoded in units of blocks or may be entropy encoded / decoded at a higher level.
  • for example, the information on motion compensation may be entropy encoded/decoded in units of blocks such as a CTU, a CU, or a PU, or may be entropy encoded/decoded at a higher level such as a video parameter set, a sequence parameter set, a picture parameter set, an adaptation parameter set, or a slice header.
  • the information about the motion compensation may be entropy encoded / decoded based on the information difference value on the motion compensation indicating the difference value between the information on the motion compensation and the information prediction value on the motion compensation.
  • At least one of the information on the motion compensation may be derived based on at least one or more of coding parameters.
  • At least one or more pieces of information on the motion compensation may be entropy decoded from the bitstream based on at least one or more of encoding parameters. At least one or more pieces of information on the motion compensation may be entropy encoded into a bitstream based on at least one or more of encoding parameters.
  • the motion compensation information may include at least one of a motion vector, a motion vector candidate, a motion vector candidate index, a motion vector difference value, a motion vector prediction value, skip mode use information (skip_flag), merge mode use information (merge_flag), merge index information (merge_index), motion vector resolution information, overlapped block motion compensation information, local illumination compensation information, affine motion compensation information, decoder-side motion vector derivation information, and bi-directional optical flow information.
  • the decoder motion vector derivation may mean pattern matched motion vector derivation.
  • the motion vector resolution information may be information indicating whether a specific resolution is used for at least one of a motion vector and a motion vector difference value.
  • the resolution may mean precision.
  • the specific resolution may be at least one of 16-pel, 8-pel, 4-pel, integer-pel, 1/2-pel, 1/4-pel, 1/8-pel, 1/16-pel, 1/32-pel, and 1/64-pel units.
  • the overlapped block motion compensation information may be information indicating whether, when motion compensation of the current block is performed, the weighted sum of the prediction block of the current block is calculated by additionally using the motion vector of a neighboring block spatially adjacent to the current block.
  • the local lighting compensation information may be information indicating whether at least one of a weight value and an offset value is applied when generating the prediction block of the current block.
  • at least one of the weight value and the offset value may be a value calculated based on the reference block.
  • the affine motion compensation information may be information indicating whether to use an affine motion model when compensating for a current block.
  • the affine motion model may be a model that divides one block into a plurality of lower blocks using a plurality of parameters and calculates a motion vector of the divided lower blocks using representative motion vectors.
  • the decoder motion vector derivation information may be information indicating whether a decoder derives and uses a motion vector necessary for motion compensation.
  • the information about the motion vector may not be entropy encoded / decoded based on the decoder motion vector derivation information.
  • information on the merge mode may be entropy encoded / decoded. That is, the decoder motion vector derivation information may indicate whether the decoder uses the merge mode.
  • the bidirectional optical flow information may be information indicating whether motion compensation is performed by correcting a motion vector on a pixel basis or a lower block basis. Based on the bidirectional optical flow information, the motion vector of the pixel unit or the lower block unit may not be entropy encoded / decoded. Here, the motion vector correction may be to change the motion vector value of a block unit in a pixel unit or a lower block unit.
  • the current block may perform motion compensation by using at least one of information on motion compensation, and entropy encode / decode at least one of information on motion compensation.
  • when entropy encoding/decoding the information related to motion compensation, a binarization method such as a truncated rice binarization method, a K-th order Exp_Golomb binarization method, a limited K-th order Exp_Golomb binarization method, a fixed-length binarization method, a unary binarization method, or a truncated unary binarization method may be used.
  • the context model may be determined using at least one of region information, information on the depth of the current block, and information on the size of the current block.
  • when entropy encoding/decoding the information on motion compensation, at least one of the information on motion compensation of a neighboring block, previously encoded/decoded information on motion compensation, information on the depth of the current block, and information on the size of the current block may be used as a prediction value for the information on motion compensation of the current block.
  • FIG. 13 is a diagram for explaining an example of performing block motion compensation superimposed on a lower block basis.
  • the hatched block is an area to which overlapped block motion compensation is applied and may be a lower block corresponding to a boundary in the current block or a lower block in the current block. Also, the block indicated by the thick line may be the current block.
  • the arrow may mean that motion information of adjacent neighboring lower blocks is used for motion compensation of the current lower block.
  • the position corresponding to the arrow tail may mean 1) a neighboring lower block adjacent to the current block or 2) a neighboring lower block adjacent to the current lower block in the current block.
  • the position corresponding to the head of the arrow may mean a current lower block in the current block.
  • a weighted sum of the first prediction block and the second prediction block may be calculated.
  • as the motion information used when generating the first prediction block, the motion information of the current lower block in the current block may be used.
  • as the motion information used when generating the second prediction block, at least one of the motion information of a neighboring lower block adjacent to the current block and the motion information of a neighboring lower block adjacent to the current lower block in the current block may be used.
  • the motion information used for generating the second prediction block may be the motion information of at least one of the upper block, the left block, the lower block, the right block, the upper right block, the lower right block, the upper left block, and the lower left block, based on the position of the current lower block in the current block.
  • the position of the available neighboring lower block may be determined according to the position of the current lower block. For example, when the current lower block is located at the upper boundary, at least one neighboring lower block located at the top, upper right, and upper left of the current lower block may be used. When the current lower block is located at the left boundary, at least one neighboring lower block located at the left, upper left, and lower left of the current lower block may be used.
  • based on the position of the current lower block, the upper block, the left block, the lower block, the right block, the upper right block, the lower right block, the upper left block, and the lower left block may be referred to as the upper neighboring lower block, the left neighboring lower block, the lower neighboring lower block, the right neighboring lower block, the upper right neighboring lower block, the lower right neighboring lower block, the upper left neighboring lower block, and the lower left neighboring lower block, respectively.
  • motion information used to generate the second prediction block may vary according to the motion vector size of the neighboring lower block adjacent to the current block or the neighboring lower block within the current block.
  • for example, by comparing the magnitudes of the L0-direction and L1-direction motion vectors of the neighboring lower block, the second prediction block may be generated using only the motion information of the direction having the larger magnitude.
  • alternatively, the second prediction block may be generated using only a motion vector for which the sum of the absolute values of the x component and the y component, among the L0- and L1-direction motion vectors of the neighboring lower block, is greater than or equal to a predefined value.
  • the predefined value may be a positive integer including 0, and may be a value determined by information signaled from the encoder to the decoder or set equally to the encoder and the decoder.
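  • The magnitude test described in the preceding items can be sketched as below; the threshold value of 4 and the tuple representation of the L0/L1 motion vectors are illustrative assumptions.

```python
# Sketch: keep only the neighboring lower block's motion vectors whose
# |x| + |y| is at least a predefined value when forming the second
# prediction block.

def usable_neighbour_mvs(mv_l0, mv_l1, threshold=4):
    selected = []
    for mv in (mv_l0, mv_l1):
        if mv is not None and abs(mv[0]) + abs(mv[1]) >= threshold:
            selected.append(mv)
    return selected                       # zero, one, or two usable vectors

print(usable_neighbour_mvs((3, 2), (1, 0)))   # [(3, 2)]
```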
  • the motion information used for generating the second prediction block may vary according to the motion vector size and direction of the current lower block.
  • for example, the second prediction block may be generated using at least one of the motion information of the left block and the right block.
  • alternatively, the second prediction block may be generated using at least one of the motion information of the upper block and the lower block.
  • the second prediction block may be generated using at least one of motion information of the left block and the right block.
  • the predefined value may be a positive integer including 0, and may be a value determined by information signaled from the encoder to the decoder or set equally to the encoder and the decoder.
  • the second prediction block may be generated using at least one of motion information of the upper block and the lower block.
  • the predefined value may be a positive integer including 0, and may be a value determined by information signaled from the encoder to the decoder or set equally to the encoder and the decoder.
  • the size of the lower block may be NxM, where N and M are positive integers. N and M may be the same as or different from each other.
  • the lower block size may be 4x4 or 8x8, and the lower block size information may be entropy encoded / decoded in a sequence unit.
  • the size of the lower block may be determined according to the size of the current block. For example, when the size of the current block is less than or equal to K samples, a 4x4 lower block may be used; when the size of the current block is larger than K samples, an 8x8 lower block may be used. Here, K is a positive integer, for example 256 (see the sketch below).
  • the information on the size of the lower block may be entropy encoded / decoded in at least one of a sequence unit, a picture unit, a slice unit, a tile unit, a CTU unit, a CU unit, and a PU unit.
  • the size of the lower block may use a size predefined in the encoder and the decoder.
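  • as a small, hedged sketch of the size rule above (the helper name and the exact comparison are assumptions; K = 256 is the example value quoted in the text):

      def obmc_lower_block_size(block_width, block_height, k=256):
          """Choose the lower-block size for overlapped block motion compensation
          from the number of samples in the current block."""
          if block_width * block_height <= k:
              return (4, 4)   # blocks of at most K samples use 4x4 lower blocks
          return (8, 8)       # larger blocks use 8x8 lower blocks

      print(obmc_lower_block_size(16, 16))  # (4, 4): 256 samples
      print(obmc_lower_block_size(32, 16))  # (8, 8): 512 samples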
  • the lower block may be at least one of a square shape and a rectangular shape.
  • the lower block may have a square shape.
  • the lower block may have a rectangular shape.
  • the information on the shape of the lower block may be entropy encoded / decoded in at least one or more of a sequence unit, a picture unit, a slice unit, a tile unit, a CTU unit, a CU unit, and a PU unit.
  • the shape of the lower block may use a form predefined in the encoder and the decoder.
  • FIG. 14 is a diagram for explaining an example of performing overlapped block motion compensation by using motion information of a lower block of a corresponding position block.
  • motion information of a corresponding position lower block corresponding to a position spatially identical to a current block in a corresponding position image or a reference image may be used as motion information used to generate the second prediction block.
  • motion information of a lower block temporally adjacent to a current block in a corresponding position block may be used for overlapping block motion compensation of the current lower block.
  • the position corresponding to the tail of the arrow may mean a lower block in the corresponding position block.
  • the position corresponding to the head of the arrow may mean a current lower block in the current block.
  • at least one of the motion information of the corresponding sub-block in the corresponding position image, the motion information of a neighboring sub-block spatially adjacent to the current block, and the motion information of at least one neighboring sub-block spatially adjacent to the current sub-block within the current block may be used to generate the second prediction block.
  • FIG. 15 is a diagram for explaining an example in which overlapped block motion compensation is performed using motion information of a block adjacent to a boundary area of a reference block.
  • a reference block in a reference picture may be identified using at least one of the motion vector and the reference picture index of the current block, and the motion information of a neighboring block adjacent to the boundary of the identified reference block may be used as the motion information for generating the second prediction block.
  • the neighboring block may include a block encoded / decoded adjacent to a lower block located in the lower boundary region or the right boundary region of the reference block.
  • motion information of a block encoded / decoded adjacent to a lower boundary area and a right boundary area of a reference block may be used for overlapping block motion compensation of a current lower block.
  • at least one of the motion information of a block encoded/decoded adjacent to the lower boundary area and the right boundary area of the reference block, the motion information of a neighboring lower block spatially adjacent to the current block, and the motion information of a neighboring lower block spatially adjacent to the current lower block within the current block may be used to generate the second prediction block.
  • the merge candidate list may be a list used in the merge mode among the inter prediction modes.
  • the spatial merge candidate in the merge candidate list may be used as motion information used for generating the second prediction block.
  • a temporal merge candidate in the merge candidate list may be used as motion information used to generate the second prediction block.
  • the merge candidates in the merge candidate list may be used as motion information used for generating the second prediction block.
  • At least one or more motion vectors among the motion vector candidates included in the motion vector candidate list may be used as a motion vector used to generate the second prediction block.
  • the motion vector candidate list may be a list used in the AMVP mode among the inter prediction modes.
  • the spatial motion vector candidate in the motion vector candidate list may be used as motion information used to generate the second prediction block.
  • the temporal motion vector candidate in the motion vector candidate list may be used as motion information used to generate the second prediction block.
  • the region to which the overlapped block motion compensation is applied may be different.
  • the area to which the overlapped block motion compensation is applied may be set to an area adjacent to a boundary of the block (i.e., a lower block located at the block boundary) or to an area not adjacent to the block boundary (i.e., a lower block not located at the block boundary).
  • block motion compensation superimposed on an area not adjacent to a block boundary may use at least one of a merge candidate and a motion vector candidate as motion information used for the second prediction block.
  • block motion compensation superimposed on an area not adjacent to a block boundary may be performed by using motion information of a spatial merge candidate or a spatial motion vector candidate.
  • block motion compensation superimposed on an area not adjacent to a block boundary may be performed using motion information of a temporal merge candidate or a temporal motion vector candidate.
  • block motion compensation superimposed on a lower boundary region and a right boundary region of a block may be performed using motion information of a spatial merge candidate or a spatial motion vector candidate.
  • block motion compensation superimposed on a lower boundary region and a right boundary region of a block may be performed using motion information of a temporal merge candidate or a temporal motion vector candidate.
  • motion information derived from a specific position block in a merge candidate list or a motion vector candidate list may be used for overlapping block motion compensation for a specific region.
  • the motion information may be used to compensate for overlapping block motion of the right boundary region of the block.
  • the motion information may be used to compensate for the overlapped block motion of the lower boundary region of the block.
  • FIG. 16 is a diagram for explaining an example of performing block motion compensation superimposed on a lower block group basis.
  • lower-block-based overlapped block motion compensation may be performed in units of one or more blocks obtained by combining several lower blocks.
  • a block unit in which several lower blocks are combined may be referred to as a lower block group unit.
  • in FIG. 16, each area delimited by hatching may represent a lower block group.
  • the arrow may mean that motion information of adjacent neighboring lower blocks is used for motion compensation of the current lower block group.
  • the position corresponding to the arrow tail may mean 1) a neighboring lower block adjacent to the current block, 2) a neighboring lower block group adjacent to the current block, or 3) a neighboring lower block adjacent to the current lower block in the current block.
  • the position corresponding to the head of the arrow may mean the current lower block group in the current block.
  • a weighted sum of the first prediction block and the second prediction block may be calculated.
  • as the motion information used when generating the first prediction block, the motion information of the current lower block group within the current block may be used.
  • the motion information of the current lower block group within the current block may be any one of the average value, the median value, the minimum value, the maximum value, or a weighted sum of the motion information of the lower blocks included in the lower block group.
  • the motion information used when generating the second prediction block includes motion information of neighboring subblocks adjacent to the current block, motion information of a neighboring subblock group adjacent to the current block, and neighboring subblocks adjacent to the current subblock within the current block. At least one or more of the motion information may be used.
  • the motion information of the neighboring lower block group adjacent to the current block may be any one of an average value, a median value, a minimum value, a maximum value, or a weighted sum of the motion information of the lower block included in the neighboring lower block group.
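  • a hedged sketch of aggregating the lower-block motion vectors of a group into a single group motion vector, covering the average, median, minimum, and maximum options listed above (the weighted-sum option is omitted, and all names are illustrative assumptions):

      import statistics

      def group_motion_vector(lower_block_mvs, mode="average"):
          """lower_block_mvs: list of (x, y) motion vectors of the lower blocks in a group."""
          xs = [mv[0] for mv in lower_block_mvs]
          ys = [mv[1] for mv in lower_block_mvs]
          if mode == "average":
              return (round(sum(xs) / len(xs)), round(sum(ys) / len(ys)))
          if mode == "median":
              return (statistics.median(xs), statistics.median(ys))
          if mode == "min":
              return (min(xs), min(ys))
          if mode == "max":
              return (max(xs), max(ys))
          raise ValueError(f"unsupported mode: {mode}")

      print(group_motion_vector([(4, 2), (6, 2), (5, 4)], mode="average"))  # (5, 3)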
  • At least one lower block group unit may exist in the current block, and the horizontal size of the lower block group unit may be equal to or smaller than the horizontal size of the current lower block.
  • the vertical size of the lower block group unit may be equal to or smaller than the vertical size of the current lower block.
  • block motion compensation superimposed on at least one of lower blocks located at the upper boundary of the current block and lower blocks located at the left boundary of the current block may be performed.
  • since the blocks adjacent to the lower boundary and the right boundary of the current block are not yet encoded/decoded, overlapped block motion compensation may not be performed on at least one of the lower blocks located at the lower boundary of the current block and the lower blocks located at the right boundary of the current block.
  • alternatively, since the blocks adjacent to the lower boundary and the right boundary of the current block are not yet encoded/decoded, for at least one of the lower blocks located at the lower boundary of the current block and the lower blocks located at the right boundary of the current block, the overlapped block motion compensation may be performed using at least one of the motion information of the upper block, the left block, the upper-left block, the lower-left block, and the upper-right block.
  • when the current block is in merge mode and uses at least one of an improved temporal motion vector prediction candidate and a spatial-temporal motion vector prediction candidate, overlapped block motion compensation may not be performed on at least one of the lower blocks located at the lower boundary of the current block and the lower blocks located at the right boundary of the current block.
  • the overlapped block motion compensation may be performed on at least one of each color component of the current block.
  • the color component may include at least one of a luminance component and a color difference component.
  • the overlapped block motion compensation may be performed according to the inter prediction indicator of the current block. That is, it may be performed when the current block uses at least one of unidirectional prediction, bidirectional prediction, three-direction prediction, four-direction prediction, and the like. It may also be performed only when the current block uses unidirectional prediction, or only when the current block uses bidirectional prediction.
  • FIG. 17 is a diagram for explaining an example of the number of pieces of motion information used for overlapped block motion compensation.
  • the motion information used to generate the second prediction block may be up to K pieces. That is, up to K second prediction blocks may be generated and used for overlapping block motion compensation.
  • K pieces may be positive integers including 0, and may be 1, 2, 3, or 4, for example.
  • deriving the motion information may mean generating the second prediction block using the derived motion information and using it for overlapped block motion compensation.
  • blocks corresponding to the upper boundary area in the current block may derive motion information from at least one of a neighboring upper block, a neighboring upper left block, and a neighboring upper right block that are neighboring lower blocks adjacent to the current block.
  • blocks corresponding to the left boundary area in the current block may derive motion information from at least one of a neighboring left block, a neighboring upper left block, and a neighboring lower left block that are neighboring lower blocks adjacent to the current block.
  • blocks corresponding to the upper left boundary area in the current block may derive motion information from at least one of a neighboring upper block, a neighboring left block, and a neighboring upper left block that are neighboring lower blocks adjacent to the current block.
  • blocks corresponding to a right upper boundary area in the current block may derive motion information from at least one of a neighboring upper block, a neighboring upper left block, and a neighboring upper right block that are neighboring lower blocks adjacent to the current block.
  • blocks corresponding to the lower left boundary area in the current block may derive motion information from at least one of a neighboring left block, a neighboring top left block, and a neighboring bottom left block that are neighboring lower blocks adjacent to the current block.
  • up to eight pieces of motion information may be derived for generating the second prediction block. That is, 8-connectivity may be used to derive the motion information used for generating the second prediction block.
  • for the current sub-blocks within the current block, the motion information may be derived from at least one of the neighboring top block, the neighboring left block, the neighboring bottom block, the neighboring right block, the neighboring top-left block, the neighboring bottom-left block, the neighboring bottom-right block, and the neighboring top-right block, which are neighboring sub-blocks within the current block.
  • motion information used to generate the second prediction block may also be derived from the corresponding location lower block in the corresponding location image.
  • motion information used for generating the second prediction block may be derived.
  • the number of motion information used for generating the second prediction block may be determined according to the size or direction of the motion vector.
  • motion information used to generate the second prediction block may be up to K pieces.
  • K pieces may be positive integers including 0, for example, 4 pieces.
  • K motion information may be used to generate the second prediction block.
  • K pieces may be positive integers including 0, for example, 4 pieces.
  • K motion information used for generating the second prediction block may be used.
  • K pieces may be positive integers including 0, for example, 4 pieces.
  • FIGS. 18 and 19 are diagrams for describing a derivation order of motion information used to generate a second prediction block.
  • the motion information used for generating the second prediction block may be derived in a predetermined order in the encoder and the decoder.
  • the motion information may be derived in the order of the upper block, the left block, the lower block, and the right block based on the position of the current lower block.
  • a motion information derivation order used to generate a second prediction block may be determined based on a position of a current lower block.
  • blocks corresponding to the upper boundary area in the current block may derive motion information in order of 1) a neighboring top block, 2) a neighboring top left block, and 3) a neighboring top right block that is a neighboring lower block adjacent to the current block.
  • blocks corresponding to the left boundary region in the current block may derive motion information in the order of 1) a peripheral left block, 2) a neighboring top left block, and 3) a neighboring bottom left block that is a neighboring lower block adjacent to the current block.
  • blocks corresponding to the upper left boundary area in the current block may derive motion information in the order of 1) a neighboring top block, 2) a neighboring left block, and 3) a neighboring top left block that is a neighboring lower block adjacent to the current block.
  • blocks corresponding to the upper right boundary region in the current block may derive motion information in the order of 1) a neighboring top block, 2) a neighboring top left block, and 3) a neighboring top right block that is a neighboring lower block adjacent to the current block.
  • blocks corresponding to the lower right border region in the current block may derive motion information in the order of 1) neighboring left block, 2) neighboring top left block, and 3) neighboring bottom left block that are neighboring lower blocks adjacent to the current block.
  • the current sub-blocks within the current block may derive motion information from the neighboring sub-blocks adjacent to the current sub-block within the current block in the order of 1) the neighboring top block, 2) the neighboring left block, 3) the neighboring bottom block, 4) the neighboring right block, 5) the neighboring top-left block, 6) the neighboring bottom-left block, 7) the neighboring bottom-right block, and 8) the neighboring top-right block. Meanwhile, the motion information may be derived in a different order from that shown in FIG. 19 (a simplified sketch of this position-dependent ordering follows).
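  • a simplified Python sketch of the position-dependent derivation order described above, covering the upper-boundary, left-boundary, upper-left-boundary, and interior cases; the remaining corner cases are omitted for brevity and all names are assumptions.

      def obmc_derivation_order(on_top_boundary, on_left_boundary):
          """Return the neighbour positions, in derivation order, for the current lower block."""
          if on_top_boundary and on_left_boundary:
              return ["top", "left", "top_left"]             # upper-left boundary area
          if on_top_boundary:
              return ["top", "top_left", "top_right"]        # upper boundary area
          if on_left_boundary:
              return ["left", "top_left", "bottom_left"]     # left boundary area
          # interior lower blocks may use all eight neighbours inside the current block
          return ["top", "left", "bottom", "right",
                  "top_left", "bottom_left", "bottom_right", "top_right"]

      print(obmc_derivation_order(on_top_boundary=True, on_left_boundary=False))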
  • the motion information of the corresponding location lower block in the corresponding location image may be derived at a lower rank than the neighboring lower block spatially adjacent to the current lower block.
  • the motion information of the corresponding position lower block in the corresponding position image may be derived at a higher rank than the neighboring lower block spatially adjacent to the current lower block.
  • motion information of a block encoded / decoded adjacent to a lower boundary region and a right boundary region of a reference block in a reference image may be derived at a lower rank than a neighboring lower block spatially adjacent to the current lower block.
  • the motion information of a block encoded / decoded adjacent to the lower boundary region and the right boundary region of the reference block in the reference image may be derived at a higher rank than the neighboring lower block spatially adjacent to the current lower block.
  • the motion information of a neighboring sub-block adjacent to the current block, or of a neighboring sub-block adjacent to the current sub-block within the current block, may be derived as the motion information used for generating the second prediction block only when a specific condition is satisfied.
  • only when a neighboring sub-block exists can the motion information of that neighboring sub-block be derived for generating the second prediction block.
  • when at least one neighboring sub-block is coded in the inter prediction mode, its motion information may be derived as motion information used for generating the second prediction block.
  • the motion information of a neighboring lower block coded in the intra prediction mode may not be derived as motion information used for generating the second prediction block.
  • when the inter prediction indicator of a neighboring sub-block adjacent to the current block, or of a neighboring sub-block adjacent to the current sub-block within the current block, does not indicate at least one of L0 prediction, L1 prediction, L2 prediction, L3 prediction, unidirectional prediction, bidirectional prediction, three-direction prediction, four-direction prediction, and the like, motion information used for generating the second prediction block may not be derived from that sub-block.
  • motion information used to generate the second prediction block may be derived.
  • motion information used for generating the second prediction block may be derived.
  • motion information used for generating the second prediction block may be derived.
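  • the availability conditions above (the neighbouring lower block must exist, must be inter-coded rather than intra-coded, and must carry a usable inter prediction indicator) can be summarised by a check like the following sketch; the NeighbourInfo fields are assumptions for illustration.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class NeighbourInfo:
          pred_mode: str                   # "inter" or "intra"
          inter_dir: Optional[str] = None  # e.g. "L0", "L1", "BI", or None

      def mv_available_for_obmc(neighbour: Optional[NeighbourInfo]) -> bool:
          if neighbour is None:                 # neighbour does not exist / not yet coded
              return False
          if neighbour.pred_mode != "inter":    # intra neighbours carry no motion information
              return False
          if neighbour.inter_dir is None:       # no usable prediction direction
              return False
          return True

      print(mv_available_for_obmc(NeighbourInfo("inter", "L0")))  # True
      print(mv_available_for_obmc(NeighbourInfo("intra")))        # False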
  • the motion information used to generate the second prediction block may be derived.
  • when the inter prediction indicator used for generating the first prediction block indicates unidirectional prediction, at least one of the motion vector and the reference picture index for the L0 and L1 prediction directions may be used to derive the motion information used for generating the second prediction block.
  • when the inter prediction indicator used for generating the first prediction block indicates bidirectional prediction, the motion information for the L0 and L1 prediction directions may be used to derive the motion information used for generating the second prediction block.
  • FIG. 20 is a diagram for explaining an example of determining whether motion information is available for generating the second prediction block by comparing the POC of the reference picture of the current lower block with the POC of the reference picture of a neighboring lower block.
  • the motion information of the current sub-block may be used to generate the second prediction block of the current sub-block.
  • motion information used for generating the second prediction block may be derived.
  • the motion vector used to generate the second prediction block may be scaled relative to the reference picture of the first prediction block; that is, motion vector scaling may be performed based on the reference picture, or the POC of the reference picture, used to derive the motion vector used to generate the second prediction block (a simplified scaling sketch follows).
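  • a simplified, floating-point sketch of the POC-based motion vector scaling mentioned above (real codecs use a fixed-point formulation; all names and the rounding are illustrative assumptions):

      def scale_mv_by_poc(mv, poc_current, poc_current_ref, poc_neighbour_ref):
          """Scale a neighbouring lower block's motion vector so that it points toward the
          current block's reference picture instead of the neighbour's reference picture."""
          tb = poc_current - poc_current_ref    # distance to the current block's reference
          td = poc_current - poc_neighbour_ref  # distance to the neighbour's reference
          if td == 0 or tb == td:
              return mv                         # same distance (or degenerate): no scaling
          scale = tb / td
          return (round(mv[0] * scale), round(mv[1] * scale))

      # Example: the neighbour's reference is twice as far away, so the vector is halved.
      print(scale_mv_by_poc((8, -4), poc_current=10, poc_current_ref=9, poc_neighbour_ref=8))  # (4, -2)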
  • FIG. 21 is a diagram for describing an embodiment of applying a weight when calculating a weighted sum of a first prediction block and a second prediction block.
  • weights may be used for each row or column according to a sample position in the block.
  • a weighted sum between samples corresponding to the same position in the first prediction block and the second prediction block may be calculated.
  • at least one of a weight and an offset may be used when calculating the weighted sum for generating the final prediction block.
  • the weight may be a negative number less than zero and a positive number greater than zero.
  • the offset may be zero, a negative number less than zero, and a positive number greater than zero.
  • the same weight may be used at all sample positions for each prediction block when calculating the weighted sum of the first prediction block and the second prediction block.
  • a weight such as {3/4, 7/8, 15/16, 31/32} may be used for each row or each column in the first prediction block, and a weight such as {1/4, 1/8, 1/16, 1/32} may be used for each row or each column in the second prediction block.
  • the weights may use the same weights at sample positions belonging to the same row or at sample positions belonging to the same column.
  • for each weight, a larger weight may be used the closer the sample is to the boundary of the current lower block.
  • each weight may be applied to all samples in the lower block.
  • (a), (b), (c), and (d) of FIG. 21 may show examples of generating the second prediction block using the motion information of the neighboring upper block, the neighboring lower block, the neighboring left block, and the neighboring right block, respectively.
  • that is, the upper second prediction block, the lower second prediction block, the left second prediction block, and the right second prediction block may be generated based on the motion information of the neighboring upper block, the neighboring lower block, the neighboring left block, and the neighboring right block, respectively (a small blending sketch follows).
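  • a minimal sketch of the FIG. 21 style row weighting, blending a 4x4 first prediction block with the upper second prediction block using one weight pair per row (row 0 touches the upper boundary); the weight sets are the example values quoted above and everything else is an assumption.

      def blend_rows_from_top(first_pred, second_pred,
                              w_first=(3/4, 7/8, 15/16, 31/32),
                              w_second=(1/4, 1/8, 1/16, 1/32)):
          """first_pred / second_pred: 4x4 lists of sample rows; returns the blended block."""
          return [[wf * f + ws * s for f, s in zip(row_f, row_s)]
                  for wf, ws, row_f, row_s in zip(w_first, w_second, first_pred, second_pred)]

      first = [[100] * 4 for _ in range(4)]
      second = [[60] * 4 for _ in range(4)]
      blended = blend_rows_from_top(first, second)
      print([row[0] for row in blended])  # [90.0, 95.0, 97.5, 98.75]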
  • FIG. 22 is a diagram for describing an embodiment in which different weights are applied according to sample positions in a block when calculating a weighted sum of a first prediction block and a second prediction block.
  • different weights may be used according to sample positions in a block in the weighted sum calculation of the first prediction block and the second prediction block. That is, the weighted sum may be calculated with different weights according to the positions of blocks spatially adjacent to the current lower block.
  • a weighted sum between samples corresponding to the same position in the first prediction block and the second prediction block may be calculated.
  • for each sample position, weights such as {1/2, 3/4, 7/8, 15/16, 31/32, 63/64, 127/128, 255/256, 511/512, 1023/1024} may be used for the first prediction block, and weights such as {1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/256, 1/512, 1/1024} may be used for the second prediction block.
  • the weight value used in at least one of the upper second prediction block, the left second prediction block, the lower second prediction block, and the right second prediction block may be greater than the weight value used in at least one of the upper-left second prediction block, the lower-left second prediction block, the lower-right second prediction block, and the upper-right second prediction block.
  • the weight value used in at least one of the upper second prediction block, the left second prediction block, the lower second prediction block, and the right second prediction block may be equal to the weight value used in at least one of the upper-left second prediction block, the lower-left second prediction block, the lower-right second prediction block, and the upper-right second prediction block.
  • the weights of the second prediction blocks generated by using the motion information of the corresponding position lower block in the corresponding position image may be the same at all sample positions.
  • the weight of the second prediction block generated using the motion information of the corresponding location lower block in the corresponding location image may be equal to the weight of the first prediction block.
  • the weights of the second prediction blocks generated by using the motion information of the blocks encoded / decoded adjacent to the lower boundary region and the right boundary region of the reference image may be the same at all sample positions.
  • the weight of the second prediction block generated by using motion information of a block encoded / decoded adjacent to the lower boundary region and the right boundary region of the reference image may be equal to the weight of the first prediction block.
  • the weight value may vary depending on the motion vector size of the neighboring subblock adjacent to the current block or the neighboring subblock adjacent to the current subblock in the current block.
  • the weight of the current lower block may be {1/2, 3/4, 7/8, 15/16}.
  • the weight of the current lower block may be {7/8, 15/16, 31/32, 63/64}.
  • the predefined value may be a positive integer including 0.
  • the weight value may vary according to the motion vector size or the motion vector direction of the current lower block.
  • the weight for the left and right neighboring lower blocks may be {1/2, 3/4, 7/8, 15/16}.
  • the predefined value may be a positive integer including 0.
  • the weight for the upper and lower neighboring lower blocks may be {1/2, 3/4, 7/8, 15/16}.
  • the predefined value may be a positive integer including 0.
  • the weight of the current lower block may be {1/2, 3/4, 7/8, 15/16}.
  • the weight of the current lower block may be {7/8, 15/16, 31/32, 63/64}.
  • the predefined value may be a positive integer including 0.
  • the weighted sum calculation may not be performed at all sample positions in the lower block, but may be performed on samples located in K rows / columns adjacent to each block boundary.
  • K may be a positive integer including 0, and may be 1 or 2, for example.
  • the weighted sum may be calculated for samples located in K rows / columns adjacent to each block boundary.
  • a weighted sum may be calculated for samples located in K rows / columns adjacent to each block boundary.
  • K may be a positive integer including 0, and may be 1 or 2, for example.
  • N and M may be positive integers, for example, N and M may be 4 or 8 or more.
  • N and M may be the same or different from each other.
  • a weighted sum may be calculated for samples located in K rows / columns adjacent to each block boundary.
  • K may be a positive integer including 0, and may be 1 or 2, for example.
  • the weighted sum may be calculated for samples located in two rows / columns adjacent to each block boundary.
  • weighted sums may be calculated for samples located in one row / column adjacent to each block boundary.
  • a weighted sum may be calculated for samples located in K rows / columns adjacent to each block boundary.
  • a weighted sum may be calculated for samples located in K rows / columns adjacent to each block boundary.
  • a weighted sum may be calculated for samples located in K rows / columns adjacent to each block boundary.
  • K may be a positive integer including 0, and may be 1 or 2, for example.
  • a weighted sum may be calculated for samples located in K rows / columns adjacent to each block boundary according to the size of a lower block of the current block.
  • the weighted sum may be calculated for samples located in one, two, three, or four rows / columns adjacent to each block boundary.
  • when the size of the lower block of the current block is 8x8, the weighted sum may be calculated for samples located in one, two, three, four, five, six, seven, or eight rows or columns adjacent to each block boundary.
  • K is a positive integer including 0 and may have as many as the number of rows / columns of the lower block.
  • weighted sums may be calculated for samples located in one or two fixed rows or columns adjacent to each block boundary in a lower block.
  • a weighted sum may be calculated for samples located in K rows / columns adjacent to each block boundary according to the number of motion information used for generating the second prediction block.
  • K may be a positive integer including 0.
  • a weighted sum may be calculated for samples located in two rows / columns adjacent to each block boundary.
  • a weighted sum may be calculated for samples located in one row / column adjacent to each block boundary.
  • a weighted sum may be calculated for samples located in K rows / columns adjacent to each block boundary according to the inter-screen prediction indicator of the current block.
  • K may be a positive integer including zero.
  • a weighted sum may be calculated for samples located in two rows / columns adjacent to each block boundary.
  • a weighted sum may be calculated for samples located in one row / column adjacent to each block boundary.
  • a weighted sum may be calculated for samples located in K rows / columns adjacent to each block boundary according to the POC of the reference image of the current block.
  • K may be a positive integer including 0.
  • a weighted sum may be calculated for samples located in two rows / columns adjacent to each block boundary.
  • a weighted sum may be calculated for samples located in one row / column adjacent to each block boundary.
  • a weighted sum may be calculated for samples located in K rows/columns adjacent to each block boundary according to the motion vector magnitude of the neighboring sub-block adjacent to the current block or of the neighboring sub-block within the current block.
  • K may be a positive integer including 0.
  • the weighted sum may be calculated for the samples located in two rows/columns adjacent to each block boundary.
  • the weighted sum may be calculated for the samples located in one row / column adjacent to each block boundary.
  • the predefined value may be a positive integer including 0.
  • a weighted sum may be calculated for samples located in K rows / columns adjacent to each block boundary according to the motion vector size or the motion vector direction of the current lower block.
  • K may be a positive integer including 0.
  • a weighted sum may be calculated for samples located in two rows / columns adjacent to the left and right boundaries.
  • the weighted sum may be calculated for the samples located in one row / column adjacent to the left and right boundaries.
  • the predefined value may be a positive integer including 0.
  • the weighted sum may be calculated for the samples located in two rows / columns adjacent to the upper and lower boundaries.
  • the weighted sum may be calculated for the samples located in one row / column adjacent to the upper and lower boundaries.
  • the predefined value may be a positive integer including 0.
  • the weighted sum may be calculated for the samples located in two rows / columns adjacent to each block boundary.
  • the weighted sum may be calculated for the samples located in one row / column adjacent to each block boundary.
  • the predefined value may be a positive integer including 0.
  • FIG. 23 is a diagram for explaining an embodiment in which weighted sums of a first prediction block and a second prediction block are cumulatively calculated in a predetermined order when overlapping block motion compensation is performed.
  • a weighted sum of the first prediction block and the second prediction block may be calculated in a predetermined order.
  • motion information may be derived in the order of the upper block, the left block, the lower block, and the right block adjacent to the current lower block; a second prediction block may be generated using each piece of derived motion information, and the weighted sum of the first prediction block and the second prediction block may be calculated.
  • the weighted sums may be accumulated in that order to generate the final prediction block.
  • first, the weighted sum of the first prediction block and the second prediction block generated using the motion information of 1) the upper block may be calculated to generate a first weighted result block.
  • then, the weighted sum of the first weighted result block and the second prediction block generated using the motion information of 2) the left block may be calculated to generate a second weighted result block; the weighted sum of the second weighted result block and the second prediction block generated using the motion information of 3) the lower block may be calculated to generate a third weighted result block; and the weighted sum of the third weighted result block and the second prediction block generated using the motion information of 4) the right block may be calculated to generate the final prediction block.
  • the order of deriving motion information used for generating the second prediction block and the weighted sum calculation order of the second prediction block during the weighted sum calculation of the first prediction block and the second prediction block may be different.
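  • a compact sketch of the cumulative calculation described for FIG. 23, in which each weighted-sum result feeds the next weighted sum in the fixed order upper, left, lower, right; the blend callback is passed in because the per-sample weights depend on the neighbour direction, and all names are assumptions.

      def obmc_accumulate(first_pred, ordered_second_preds, blend):
          """ordered_second_preds: [(direction, block), ...] already in the derivation order.
          blend(block_a, block_b, direction) returns the weighted sum of the two blocks."""
          result = first_pred
          for direction, second in ordered_second_preds:
              result = blend(result, second, direction)  # each output feeds the next weighted sum
          return result

      # Tiny usage example with an equal-weight blend and 1x1 "blocks".
      avg = lambda a, b, d: [[(x + y) / 2 for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
      print(obmc_accumulate([[100]], [("top", [[80]]), ("left", [[60]])], avg))  # [[75.0]]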
  • FIG. 24 is a diagram for explaining an embodiment in which a weighted sum of a first prediction block and a second prediction block is calculated when overlapping block motion compensation is performed.
  • in the weighted sum calculation, the weighted sums may not be accumulated; instead, the weighted sums of the first prediction block and the second prediction blocks generated using at least one of the motion information of the upper block, the left block, the lower block, and the right block may be calculated in any order.
  • the second prediction blocks generated using at least one of motion information of the upper block, the left block, the lower block, and the right block may have the same weight.
  • the weights used for the second prediction block and the weights used for the first prediction block may be the same.
  • storage spaces may be allocated according to the number of first prediction blocks and second prediction blocks, and when the final prediction block is generated, the weighted sum may be calculated using the same weight for the first prediction block and each of the second prediction blocks.
  • the weighted sum of the second prediction block generated using the motion information of the corresponding location lower block in the corresponding location image may also be calculated.
  • K may be a positive integer, for example, may be 256.
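  • a hedged sketch of the FIG. 24 variant above: the first prediction block and every second prediction block are kept separately and combined once with equal weights, instead of accumulating intermediate results; all names are illustrative assumptions.

      def obmc_equal_weight(first_pred, second_preds):
          """first_pred: HxW block; second_preds: list of HxW blocks of the same size."""
          blocks = [first_pred] + list(second_preds)
          n = len(blocks)
          h, w = len(first_pred), len(first_pred[0])
          return [[sum(b[r][c] for b in blocks) / n for c in range(w)]
                  for r in range(h)]

      print(obmc_equal_weight([[100, 100]], [[[80, 60]], [[90, 70]]]))  # approximately [[90.0, 76.67]]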
  • overlapped block motion compensation may be performed without entropy encoding/decoding information on whether overlapped block motion compensation is performed on the current block.
  • the encoder may perform motion prediction after subtracting the second prediction block from the original signal in the region corresponding to the boundary of the current block in the motion prediction step. In this case, when the second prediction block is subtracted, a weighted sum may be calculated between the second prediction block and the original signal.
  • FIG. 25 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
  • a first prediction block of the current block may be generated using motion information of the current block (S2510).
  • motion information available for generating a second prediction block among motion information of at least one neighboring lower block of the current lower block may be determined.
  • the motion information available for generating the second prediction block may be determined based on at least one of a magnitude and a direction of the motion vector of the neighboring lower block.
  • in determining the motion information available for generating the second prediction block, the availability may be determined based on the reference picture POC of the neighboring lower block and the reference picture POC of the current block. Specifically, the motion information of the neighboring lower block may be determined to be available for generating the second prediction block only when the reference picture POC of the neighboring lower block and the reference picture POC of the current block are the same.
  • the shape of the current lower block may be at least one of a square shape and a rectangular shape.
  • At least one second prediction block of the current lower block may be generated using at least one motion information determined in operation S2520.
  • at least one second prediction block may be generated using the motion information of at least one neighboring lower block only when the current block is in neither the motion vector derivation mode nor the affine motion compensation mode.
  • the final prediction block may be generated based on a weighted sum of the first prediction block of the current block and the second prediction block of the at least one current lower block.
  • the final prediction block may be generated by weighting the samples located in some rows or some columns adjacent to the boundary of the first prediction block and the second prediction block.
  • the samples located in some rows or some columns adjacent to the boundary of the first prediction block and the second prediction block may be determined based on at least one of the block size of the current sub-block, the magnitude and direction of the motion vector of the current sub-block, the inter prediction indicator of the current block, and the reference picture POC of the current block.
  • in the step of generating the final prediction block (S2540), the weighted sum may be calculated by applying different weights to each sample of the first prediction block and the second prediction block according to at least one of the magnitude and direction of the motion vector of the current lower block.
  • Each step of the image decoding method of FIG. 25 may be equally applied to the image encoding method according to the present invention.
  • bitstream generated by the image encoding method according to the present invention may be stored in a recording medium.
  • the order of applying the embodiment may be different in the encoder and the decoder, and the order of applying the embodiment may be the same in the encoder and the decoder.
  • the above embodiment may be performed with respect to each of the luminance and chrominance signals, and the same embodiment may be performed with respect to the luminance and the chrominance signals.
  • the shape of the block to which the embodiments of the present invention are applied may have a square shape or a non-square shape.
  • the above embodiments of the present invention may be applied according to at least one of a coding block, a prediction block, a transform block, a block, a current block, a coding unit, a prediction unit, a transform unit, a unit, and a current unit.
  • the size here may be defined as a minimum size and / or a maximum size for the above embodiments to be applied, or may be defined as a fixed size to which the above embodiments are applied.
  • the first embodiment may be applied at the first size
  • the second embodiment may be applied at the second size. That is, the embodiments may be applied in combination according to the size.
  • the above embodiments of the present invention may be applied only when the block size is greater than or equal to a minimum size and less than or equal to a maximum size. That is, the above embodiments may be applied only when the block size falls within a certain range.
  • the above embodiments may be applied only when the size of the current block is 8x8 or more.
  • the above embodiments may be applied only when the size of the current block is 4x4.
  • the above embodiments may be applied only when the size of the current block is 16x16 or less.
  • the above embodiments may be applied only when the size of the current block is 16x16 or more and 64x64 or less.
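  • a small illustrative helper for the size gating described above; the 8x8 minimum and 64x64 maximum are the example values quoted in the text, "size" is interpreted here as both block dimensions, and all names are assumptions.

      def embodiment_applies(block_width, block_height, min_size=8, max_size=64):
          """Apply an embodiment only when both block dimensions lie in [min_size, max_size]."""
          return (min_size <= block_width <= max_size and
                  min_size <= block_height <= max_size)

      print(embodiment_applies(16, 16))   # True
      print(embodiment_applies(4, 4))     # False: below the 8x8 minimum
      print(embodiment_applies(128, 64))  # False: width exceeds the 64x64 maximum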
  • the above embodiments of the present invention can be applied according to a temporal layer.
  • a separate identifier is signaled to identify the temporal layer to which the embodiments are applicable and the embodiments can be applied to the temporal layer specified by the identifier.
  • the identifier here may be defined as the lowest layer and / or the highest layer to which the embodiment is applicable, or may be defined as indicating a specific layer to which the embodiment is applied.
  • a fixed temporal layer to which the above embodiment is applied may be defined.
  • the above embodiments may be applied only when the temporal layer of the current image is the lowest layer.
  • the above embodiments may be applied only when the temporal layer identifier of the current image is one or more.
  • the above embodiments may be applied only when the temporal layer of the current image is the highest layer.
  • a slice type to which the above embodiments of the present invention are applied is defined, and the above embodiments of the present invention may be applied according to the corresponding slice type.
  • the methods are described based on flowcharts as a series of steps or units, but the present invention is not limited to the order of the steps, and certain steps may occur in a different order or simultaneously with other steps, as described above. In addition, those of ordinary skill in the art will appreciate that the steps shown in the flowcharts are not exclusive, that other steps may be included, or that one or more steps in the flowcharts may be deleted without affecting the scope of the present invention.
  • Embodiments according to the present invention described above may be implemented in the form of program instructions that may be executed by various computer components, and may be recorded in a computer-readable recording medium.
  • the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
  • Program instructions recorded on the computer-readable recording medium may be those specially designed and configured for the present invention, or may be known and available to those skilled in the computer software arts.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, flash memory, and the like.
  • Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • the hardware device may be configured to operate as one or more software modules to perform the process according to the invention, and vice versa.
  • the present invention can be used in an apparatus for encoding / decoding an image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
PCT/KR2017/013672 2016-11-28 2017-11-28 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체 WO2018097692A2 (ko)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN202311024704.8A CN116866593A (zh) 2016-11-28 2017-11-28 对图像编码/解码的方法和设备及存储比特流的记录介质
CN202311020975.6A CN116886928A (zh) 2016-11-28 2017-11-28 对图像编码/解码的方法和设备及存储比特流的记录介质
CN201780073517.5A CN110024394B (zh) 2016-11-28 2017-11-28 对图像编码/解码的方法和设备及存储比特流的记录介质
CN202311021525.9A CN116886929A (zh) 2016-11-28 2017-11-28 对图像编码/解码的方法和设备及存储比特流的记录介质
CN202311023493.6A CN116886930A (zh) 2016-11-28 2017-11-28 对图像编码/解码的方法和设备及存储比特流的记录介质
CN202311025877.1A CN116866594A (zh) 2016-11-28 2017-11-28 对图像编码/解码的方法和设备及存储比特流的记录介质

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20160159507 2016-11-28
KR10-2016-0159507 2016-11-28

Publications (2)

Publication Number Publication Date
WO2018097692A2 true WO2018097692A2 (ko) 2018-05-31
WO2018097692A3 WO2018097692A3 (ko) 2018-07-26

Family

ID=62195247

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/013672 WO2018097692A2 (ko) 2016-11-28 2017-11-28 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체

Country Status (3)

Country Link
KR (3) KR102328179B1 (zh)
CN (6) CN110024394B (zh)
WO (1) WO2018097692A2 (zh)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019234606A1 (en) * 2018-06-05 2019-12-12 Beijing Bytedance Network Technology Co., Ltd. Interaction between ibc and atmvp
CN110876057A (zh) * 2018-08-29 2020-03-10 华为技术有限公司 一种帧间预测的方法及装置
CN110944185A (zh) * 2018-09-21 2020-03-31 腾讯美国有限责任公司 视频解码的方法和装置、计算机设备及存储介质
CN111131830A (zh) * 2018-10-31 2020-05-08 北京字节跳动网络技术有限公司 重叠块运动补偿的改进
CN111971960A (zh) * 2018-06-27 2020-11-20 Lg电子株式会社 用于基于帧间预测模式处理图像的方法及其装置
CN113498607A (zh) * 2019-03-13 2021-10-12 腾讯美国有限责任公司 用于小子块仿射帧间预测的方法和装置
CN113542768A (zh) * 2021-05-18 2021-10-22 浙江大华技术股份有限公司 运动搜索方法、装置及计算机可读存储介质
CN113615186A (zh) * 2018-12-21 2021-11-05 Vid拓展公司 对称运动矢量差译码
US11172196B2 (en) 2018-09-24 2021-11-09 Beijing Bytedance Network Technology Co., Ltd. Bi-prediction with weights in video coding and decoding
US11197003B2 (en) 2018-06-21 2021-12-07 Beijing Bytedance Network Technology Co., Ltd. Unified constrains for the merge affine mode and the non-merge affine mode
US11197007B2 (en) 2018-06-21 2021-12-07 Beijing Bytedance Network Technology Co., Ltd. Sub-block MV inheritance between color components
US11792421B2 (en) 2018-11-10 2023-10-17 Beijing Bytedance Network Technology Co., Ltd Rounding in pairwise average candidate calculations
EP4243417A3 (en) * 2019-03-11 2023-11-15 Alibaba Group Holding Limited Method, device, and system for determining prediction weight for merge mode

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020032609A1 (ko) * 2018-08-09 2020-02-13 엘지전자 주식회사 영상 코딩 시스템에서 어파인 머지 후보 리스트를 사용하는 어파인 움직임 예측에 기반한 영상 디코딩 방법 및 장치
US11736692B2 (en) * 2018-12-21 2023-08-22 Samsung Electronics Co., Ltd. Image encoding device and image decoding device using triangular prediction mode, and image encoding method and image decoding method performed thereby
CN113709486B (zh) * 2019-09-06 2022-12-23 杭州海康威视数字技术股份有限公司 一种编解码方法、装置及其设备
CN113099240B (zh) * 2019-12-23 2022-05-31 杭州海康威视数字技术股份有限公司 一种编解码方法、装置及其设备
CN113242427B (zh) * 2021-04-14 2024-03-12 中南大学 一种基于vvc中自适应运动矢量精度的快速方法及装置

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101553850B1 (ko) * 2008-10-21 2015-09-17 에스케이 텔레콤주식회사 동영상 부호화/복호화 장치 및 그를 위한 적응적 가중치를 사용하는 적응 중첩 블록 움직임 보상 방법 및 장치
US8837592B2 (en) * 2010-04-14 2014-09-16 Mediatek Inc. Method for performing local motion vector derivation during video coding of a coding unit, and associated apparatus
CN106231339B (zh) * 2011-01-07 2019-07-09 Lg电子株式会社 视频编码和解码的装置
WO2012140821A1 (ja) * 2011-04-12 2012-10-18 パナソニック株式会社 動画像符号化方法、動画像符号化装置、動画像復号化方法、動画像復号化装置、および動画像符号化復号化装置
KR20130002243A (ko) * 2011-06-28 2013-01-07 주식회사 케이티 블록 중첩을 이용한 화면 간 예측 방법 및 장치
WO2013051209A1 (ja) * 2011-10-05 2013-04-11 パナソニック株式会社 画像符号化方法、画像符号化装置、画像復号方法、画像復号装置、および、画像符号化復号装置
WO2013051899A2 (ko) * 2011-10-05 2013-04-11 한국전자통신연구원 스케일러블 비디오 부호화 및 복호화 방법과 이를 이용한 장치
US9883203B2 (en) * 2011-11-18 2018-01-30 Qualcomm Incorporated Adaptive overlapped block motion compensation
JP6101709B2 (ja) * 2012-01-18 2017-03-22 エレクトロニクス アンド テレコミュニケーションズ リサーチ インスチチュートElectronics And Telecommunications Research Institute 映像復号化装置
CN104604232A (zh) * 2012-04-30 2015-05-06 数码士控股有限公司 用于编码多视点图像的方法及装置,以及用于解码多视点图像的方法及装置
WO2014104104A1 (ja) * 2012-12-28 2014-07-03 日本電信電話株式会社 映像符号化装置および方法、映像復号装置および方法、及びそれらのプログラム
WO2014129873A1 (ko) * 2013-02-25 2014-08-28 엘지전자 주식회사 스케일러빌러티를 지원하는 멀티 레이어 구조의 비디오 인코딩 방법 및 비디오 디코딩 방법과 이를 이용하는 장치
US9426465B2 (en) * 2013-08-20 2016-08-23 Qualcomm Incorporated Sub-PU level advanced residual prediction
US9667996B2 (en) * 2013-09-26 2017-05-30 Qualcomm Incorporated Sub-prediction unit (PU) based temporal motion vector prediction in HEVC and sub-PU design in 3D-HEVC
WO2015093565A1 (ja) * 2013-12-19 2015-06-25 シャープ株式会社 画像復号装置、画像符号化装置および残差予測装置
US20170019680A1 (en) * 2014-03-06 2017-01-19 Samsung Electronics Co., Ltd. Inter-layer video decoding method and apparatus therefor performing sub-block-based prediction, and inter-layer video encoding method and apparatus therefor performing sub-block-based prediction
KR20220079687A (ko) * 2014-06-19 2022-06-13 브이아이디 스케일, 인크. 블록 벡터 도출을 이용하여 인트라 블록 복사 코딩을 위한 방법 및 시스템

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11509915B2 (en) 2018-06-05 2022-11-22 Beijing Bytedance Network Technology Co., Ltd. Interaction between IBC and ATMVP
WO2019234639A1 (en) * 2018-06-05 2019-12-12 Beijing Bytedance Network Technology Co., Ltd. Interaction between ibc and inter-code tools
CN110572647A (zh) * 2018-06-05 2019-12-13 北京字节跳动网络技术有限公司 帧内块复制与可选时域运动矢量预测的交互
CN110572648A (zh) * 2018-06-05 2019-12-13 北京字节跳动网络技术有限公司 帧内块复制与帧间编码工具的交互
US11973962B2 (en) 2018-06-05 2024-04-30 Beijing Bytedance Network Technology Co., Ltd Interaction between IBC and affine
CN110572647B (zh) * 2018-06-05 2022-07-29 北京字节跳动网络技术有限公司 帧内块复制与可选时域运动矢量预测的交互
US11831884B2 (en) 2018-06-05 2023-11-28 Beijing Bytedance Network Technology Co., Ltd Interaction between IBC and BIO
TWI704802B (zh) * 2018-06-05 2020-09-11 大陸商北京字節跳動網絡技術有限公司 幀內塊複製與可選時域運動向量預測的交互
US11202081B2 (en) 2018-06-05 2021-12-14 Beijing Bytedance Network Technology Co., Ltd. Interaction between IBC and BIO
WO2019234606A1 (en) * 2018-06-05 2019-12-12 Beijing Bytedance Network Technology Co., Ltd. Interaction between ibc and atmvp
CN110572648B (zh) * 2018-06-05 2023-05-02 北京字节跳动网络技术有限公司 帧内块复制与帧间编码工具的交互
US11523123B2 (en) 2018-06-05 2022-12-06 Beijing Bytedance Network Technology Co., Ltd. Interaction between IBC and ATMVP
US11968377B2 (en) 2018-06-21 2024-04-23 Beijing Bytedance Network Technology Co., Ltd Unified constrains for the merge affine mode and the non-merge affine mode
US11197003B2 (en) 2018-06-21 2021-12-07 Beijing Bytedance Network Technology Co., Ltd. Unified constrains for the merge affine mode and the non-merge affine mode
US11197007B2 (en) 2018-06-21 2021-12-07 Beijing Bytedance Network Technology Co., Ltd. Sub-block MV inheritance between color components
US11477463B2 (en) 2018-06-21 2022-10-18 Beijing Bytedance Network Technology Co., Ltd. Component-dependent sub-block dividing
US11659192B2 (en) 2018-06-21 2023-05-23 Beijing Bytedance Network Technology Co., Ltd Sub-block MV inheritance between color components
US11895306B2 (en) 2018-06-21 2024-02-06 Beijing Bytedance Network Technology Co., Ltd Component-dependent sub-block dividing
CN111971960A (zh) * 2018-06-27 2020-11-20 Lg电子株式会社 用于基于帧间预测模式处理图像的方法及其装置
CN111971960B (zh) * 2018-06-27 2023-08-29 Lg电子株式会社 用于基于帧间预测模式处理图像的方法及其装置
CN110876057B (zh) * 2018-08-29 2023-04-18 华为技术有限公司 一种帧间预测的方法及装置
CN110876057A (zh) * 2018-08-29 2020-03-10 华为技术有限公司 一种帧间预测的方法及装置
CN110944185A (zh) * 2018-09-21 2020-03-31 腾讯美国有限责任公司 视频解码的方法和装置、计算机设备及存储介质
CN110944185B (zh) * 2018-09-21 2023-03-28 腾讯美国有限责任公司 视频解码的方法和装置、计算机设备及存储介质
US11616945B2 (en) 2018-09-24 2023-03-28 Beijing Bytedance Network Technology Co., Ltd. Simplified history based motion vector prediction
US11202065B2 (en) 2018-09-24 2021-12-14 Beijing Bytedance Network Technology Co., Ltd. Extended merge prediction
US11172196B2 (en) 2018-09-24 2021-11-09 Beijing Bytedance Network Technology Co., Ltd. Bi-prediction with weights in video coding and decoding
CN111131830B (zh) * 2018-10-31 2024-04-12 北京字节跳动网络技术有限公司 重叠块运动补偿的改进
CN111131830A (zh) * 2018-10-31 2020-05-08 北京字节跳动网络技术有限公司 重叠块运动补偿的改进
US11895328B2 (en) 2018-10-31 2024-02-06 Beijing Bytedance Network Technology Co., Ltd Overlapped block motion compensation
US11936905B2 (en) 2018-10-31 2024-03-19 Beijing Bytedance Network Technology Co., Ltd Overlapped block motion compensation with derived motion information from neighbors
US11792421B2 (en) 2018-11-10 2023-10-17 Beijing Bytedance Network Technology Co., Ltd Rounding in pairwise average candidate calculations
CN113615186B (zh) * 2018-12-21 2024-05-10 Vid拓展公司 对称运动矢量差译码
CN113615186A (zh) * 2018-12-21 2021-11-05 Vid拓展公司 对称运动矢量差译码
EP4243417A3 (en) * 2019-03-11 2023-11-15 Alibaba Group Holding Limited Method, device, and system for determining prediction weight for merge mode
CN113498607A (zh) * 2019-03-13 2021-10-12 腾讯美国有限责任公司 用于小子块仿射帧间预测的方法和装置
CN113498607B (zh) * 2019-03-13 2024-04-05 腾讯美国有限责任公司 视频编码方法、解码方法、装置和可读介质
CN113542768B (zh) * 2021-05-18 2022-08-09 浙江大华技术股份有限公司 运动搜索方法、装置及计算机可读存储介质
CN113542768A (zh) * 2021-05-18 2021-10-22 浙江大华技术股份有限公司 运动搜索方法、装置及计算机可读存储介质

Also Published As

Publication number Publication date
CN116886928A (zh) 2023-10-13
KR20230042673A (ko) 2023-03-29
WO2018097692A3 (ko) 2018-07-26
CN116866594A (zh) 2023-10-10
CN116866593A (zh) 2023-10-10
CN116886929A (zh) 2023-10-13
KR20210137982A (ko) 2021-11-18
CN110024394B (zh) 2023-09-01
CN116886930A (zh) 2023-10-13
KR102328179B1 (ko) 2021-11-18
CN110024394A (zh) 2019-07-16
KR20180061041A (ko) 2018-06-07

Similar Documents

Publication Publication Date Title
WO2018097692A2 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2018097693A2 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2018226015A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2018030773A1 (ko) 영상 부호화/복호화 방법 및 장치
WO2018066867A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2019182385A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2018012886A1 (ko) 영상 부호화/복호화 방법 및 이를 위한 기록 매체
WO2019177354A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2019190224A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2018012851A1 (ko) 영상 부호화/복호화 방법 및 이를 위한 기록 매체
WO2019172705A1 (ko) 샘플 필터링을 이용한 영상 부호화/복호화 방법 및 장치
WO2018026166A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2017204532A1 (ko) 영상 부호화/복호화 방법 및 이를 위한 기록 매체
WO2018016823A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2017222334A1 (ko) 변환 기반의 영상 부호화/복호화 방법 및 장치
WO2017222237A1 (ko) 화면 내 예측 방법 및 장치
WO2020004987A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2019083334A1 (ko) 비대칭 서브 블록 기반 영상 부호화/복호화 방법 및 장치
WO2020005035A1 (ko) 처리율 향상을 위한 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2020141813A2 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2020060316A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2018101700A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2019240493A1 (ko) 문맥 적응적 이진 산술 부호화 방법 및 장치
WO2018097590A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
WO2020032531A1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17873899

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17873899

Country of ref document: EP

Kind code of ref document: A2