WO2019045392A1 - Video signal processing method and apparatus - Google Patents
Video signal processing method and apparatus
- Publication number
- WO2019045392A1 (application PCT/KR2018/009869)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- motion vector
- current block
- unit
- prediction
- Prior art date
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
- H04N19/182—Adaptive coding characterised by the coding unit, the unit being a pixel
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
- H04N19/533—Motion estimation using multistep search, e.g. 2D-log search or one-at-a-time search [OTS]
- H04N19/56—Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
- H04N19/583—Motion compensation with overlapping blocks
Definitions
- the present invention relates to a video signal processing method and apparatus.
- Demand for high-resolution, high-quality images such as HD and UHD images is increasing in various applications.
- As image data becomes high-resolution and high-quality, the amount of data increases relative to existing image data. Therefore, when such image data is transmitted over a medium such as a wired/wireless broadband line or stored on an existing storage medium, transmission and storage costs increase.
- High-efficiency image compression techniques can be utilized to solve such problems as image data becomes high-resolution and high-quality.
- Image compression techniques include inter-picture prediction, which predicts pixel values in the current picture from a previous or subsequent picture, and intra-picture prediction, which predicts pixel values in the current picture using pixel information within the current picture.
- They also include entropy encoding, which assigns short codes to values with a high frequency of appearance and long codes to values with a low frequency of appearance.
- Image data can be effectively compressed and transmitted or stored using such an image compression technique.
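- As a rough illustration of the entropy-coding principle above (shorter codes for more frequent values), the following toy Huffman construction computes code lengths from symbol frequencies. It is only a sketch of the general idea; the symbols and frequencies are made up, and actual codecs use schemes such as CAVLC or CABAC, described later in this document.

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return a {symbol: code length} map so that frequent symbols
    receive shorter codes, per the entropy-coding idea above."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol case
        return {s: 1 for s in freq}
    # Heap entries: (weight, tie_breaker, {symbol: current code length})
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)      # two least frequent subtrees
        w2, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}  # one level deeper
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

# Residual value 0 appears most often and gets the shortest code.
print(huffman_code_lengths([0, 0, 0, 0, 1, 1, 2, 3]))
# {0: 1, 1: 2, 2: 3, 3: 3}
```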
- An object of the present invention is to provide a method and apparatus for efficiently performing inter-prediction on a block to be coded / decoded in coding / decoding a video signal.
- An object of the present invention is to provide a method and apparatus for variably determining search points used for updating motion information of a current block in encoding / decoding a video signal.
- A method and apparatus for decoding a video signal according to the present invention may include: obtaining an initial motion vector of a current block; deriving a refined motion vector for each of a plurality of search points based on the initial motion vector; and obtaining the motion vector of the current block based on the refined motion vectors.
- A method and apparatus for encoding a video signal according to the present invention may include: obtaining an initial motion vector of a current block; deriving a refined motion vector for each of a plurality of search points based on the initial motion vector; and obtaining the motion vector of the current block based on the refined motion vectors.
- the initial motion vector may be obtained based on a merge candidate or a motion vector candidate of the current block.
- the method and apparatus for encoding / decoding a video signal according to the present invention may further include selecting a refinement mode of the current block, wherein the refinement mode may include at least one of bi-directional matching and template matching.
- the refinement mode may be determined based on whether the prediction direction of the current block is bidirectional.
- In the method and apparatus for encoding/decoding a video signal according to the present invention, the cost of a search point may be calculated by comparing a first prediction block obtained based on the initial motion vector with a second prediction block obtained based on the refined motion vector.
- In the method and apparatus for encoding/decoding a video signal according to the present invention, the cost of a search point may be calculated by comparing a template neighboring the current block with a template neighboring a reference block specified by the refined motion vector.
- The method and apparatus for encoding/decoding a video signal according to the present invention may further comprise determining a search pattern of the current block, whereby at least one of the number of search points, the positions of the search points, or the search order may be determined.
- the search pattern of the current block may be determined based on at least one of the size, type, inter prediction mode, or refinement mode of the current block.
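- Taken together, the refinement described above amounts to evaluating a cost at each search point around the initial motion vector and keeping the best candidate. The sketch below illustrates this in Python under simplifying assumptions: the search pattern is a fixed diamond, and the cost is a plain SAD against the current block's samples (an encoder-side simplification; the bidirectional-matching and template-matching costs described in this document avoid using the original samples). The function names and the pattern are illustrative, not taken from the patent.

```python
import numpy as np

# Hypothetical diamond search pattern (pixel offsets around the initial
# motion vector); the patent leaves the point count, positions, and
# search order configurable per the search pattern.
DIAMOND = [(0, 0), (0, -1), (1, 0), (0, 1), (-1, 0)]

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def refine_mv(cur_block, ref_pic, x, y, init_mv, pattern=DIAMOND):
    """Pick the refined motion vector with the lowest cost among the
    search points placed around the initial motion vector."""
    h, w = cur_block.shape
    best_mv, best_cost = init_mv, None
    for dx, dy in pattern:
        mvx, mvy = init_mv[0] + dx, init_mv[1] + dy
        rx, ry = x + mvx, y + mvy
        if not (0 <= ry and ry + h <= ref_pic.shape[0]
                and 0 <= rx and rx + w <= ref_pic.shape[1]):
            continue  # search point falls outside the reference picture
        cost = sad(cur_block, ref_pic[ry:ry + h, rx:rx + w])
        if best_cost is None or cost < best_cost:
            best_mv, best_cost = (mvx, mvy), cost
    return best_mv
```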
- FIG. 1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present invention.
- FIG. 3 illustrates an example in which a coding block is hierarchically divided based on a tree structure according to an embodiment to which the present invention is applied.
- FIG. 4 is a diagram illustrating a partition type in which binary tree-based partitioning is permitted according to an embodiment of the present invention.
- FIG. 5 is a diagram illustrating an example in which only a specific type of binary tree-based partitioning is permitted according to an embodiment of the present invention.
- FIG. 6 is a diagram for explaining an example in which information related to the allowable number of binary tree division is encoded / decoded according to an embodiment to which the present invention is applied.
- FIG. 7 is a diagram illustrating a partition mode that can be applied to a coding block according to an embodiment of the present invention.
- FIG. 8 is a flowchart illustrating an inter prediction method according to an embodiment of the present invention.
- FIG. 9 is a diagram illustrating a process of deriving motion information of a current block when a merge mode is applied to the current block.
- FIG. 10 is a diagram showing an example of a spatial neighboring block.
- FIG. 11 is a diagram for explaining an example of deriving a motion vector of a temporal merge candidate.
- FIG. 12 is a diagram showing the positions of candidate blocks that can be used as collocated blocks.
- FIG. 13 is a diagram illustrating a process of deriving motion information of a current block when the AMVP mode is applied to the current block.
- FIG. 14 is a diagram illustrating a method of updating a motion vector of a current block according to an embodiment of the present invention.
- FIG. 15 is a diagram showing a diamond pattern.
- FIG. 16 is a diagram showing an adaptive cross pattern.
- FIG. 17 is a diagram showing a star pattern.
- FIG. 18 is a diagram showing a hexagon pattern.
- FIG. 19 is a diagram for explaining the derivation of a refined motion vector for a search point.
- FIG. 20 is a diagram for explaining an example of measuring the cost of a search point when the refinement mode is bidirectional matching.
- FIG. 21 is a diagram for explaining an example of measuring the cost of a search point when the refinement mode is template matching.
- first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
- the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.
- The term "and/or" includes any combination of a plurality of related listed items or any one of a plurality of related listed items.
- FIG. 1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present invention.
- The image encoding apparatus 100 may include a picture division unit 110, prediction units 120 and 125, a transform unit 130, a quantization unit 135, a reordering unit 160, an entropy encoding unit 165, an inverse quantization unit 140, an inverse transform unit 145, a filter unit 150, and a memory 155.
- Each component shown in FIG. 1 is shown independently to represent different characteristic functions in the image encoding apparatus; this does not mean that each component is composed of separate hardware or a single software unit. That is, the components are listed separately for convenience of explanation, and at least two of the components may be combined into one component, or one component may be divided into a plurality of components each performing a function.
- the integrated embodiments and separate embodiments of the components are also included within the scope of the present invention, unless they depart from the essence of the present invention.
- the components are not essential components to perform essential functions in the present invention, but may be optional components only to improve performance.
- Since the present invention can be implemented with only the components essential for realizing its essence, a structure that includes only those essential components and excludes the optional components used merely for performance improvement is also included in the scope of the present invention.
- the picture division unit 110 may divide the input picture into at least one processing unit.
- the processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU).
- The picture division unit 110 may divide one picture into combinations of a plurality of coding units, prediction units, and transform units, and may encode the picture by selecting one combination of coding units, prediction units, and transform units according to a predetermined criterion (e.g., a cost function).
- one picture may be divided into a plurality of coding units.
- When dividing a picture into coding units, a recursive tree structure such as a quad-tree structure can be used.
- With one picture or the largest coding unit as the root, a coding unit divided into other coding units has as many child nodes as the number of divided coding units, and a coding unit that is no longer divided under certain constraints becomes a leaf node. That is, assuming that only square division is possible for one coding unit, one coding unit can be divided into at most four other coding units.
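- A minimal sketch of such recursive quad-tree division, assuming a hypothetical should_split decision callback (in a real codec this decision comes from rate-distortion search or from signaled split flags):

```python
def quad_split(x, y, size, min_size, should_split):
    """Recursively divide a square coding unit into four half-size
    children; a unit that is not split further is a leaf coding unit."""
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]  # leaf node
    half = size // 2
    leaves = []
    for oy in (0, half):
        for ox in (0, half):
            leaves += quad_split(x + ox, y + oy, half, min_size, should_split)
    return leaves

# Example: split a 64x64 unit, then split only its top-left 32x32 child.
print(quad_split(0, 0, 64, 8,
                 lambda x, y, s: s == 64 or (s == 32 and x == 0 and y == 0)))
```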
- a coding unit may be used as a unit for performing coding, or may be used as a unit for performing decoding.
- The prediction unit may be obtained by dividing one coding unit into at least one square or rectangle of the same size, or may be divided such that one prediction unit in a coding unit has a shape and/or size different from another prediction unit.
- Intra prediction can also be performed without dividing the coding unit into a plurality of NxN prediction units.
- the prediction units 120 and 125 may include an inter prediction unit 120 for performing inter prediction and an intra prediction unit 125 for performing intra prediction. It is possible to determine whether to use inter prediction or intra prediction for a prediction unit and to determine concrete information (e.g., intra prediction mode, motion vector, reference picture, etc.) according to each prediction method.
- The processing unit in which prediction is performed may differ from the processing unit in which the prediction method and its details are determined. For example, the prediction method and prediction mode may be determined per prediction unit, while the prediction itself is performed per transform unit.
- The residual value (residual block) between the generated prediction block and the original block can be input to the transform unit 130.
- the prediction mode information, motion vector information, and the like used for prediction can be encoded by the entropy encoding unit 165 together with the residual value and transmitted to the decoder.
- When a particular encoding mode is used, it is also possible to encode the original block as it is and transmit it to the decoder without generating a prediction block through the prediction units 120 and 125.
- The inter prediction unit 120 may predict a prediction unit based on information of at least one of a previous picture or a subsequent picture of the current picture, and in some cases may predict a prediction unit based on information of a partially encoded region within the current picture.
- the inter prediction unit 120 may include a reference picture interpolation unit, a motion prediction unit, and a motion compensation unit.
- The reference picture interpolation unit may receive reference picture information from the memory 155 and generate pixel information at fractional (sub-integer) pixel positions in the reference picture.
- For luma pixels, a DCT-based interpolation filter with varying filter coefficients may be used to generate pixel information at fractional positions in units of 1/4 pixel.
- For chroma pixels, a DCT-based 4-tap interpolation filter with varying filter coefficients may be used to generate pixel information at fractional positions in units of 1/8 pixel.
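- As an illustration of sub-pixel interpolation, the sketch below applies an 8-tap filter to one row of integer pixels to produce half-pel samples. The taps shown are HEVC's DCT-based half-pel luma coefficients, used here purely as an example of such a filter; this patent does not specify particular coefficients.

```python
import numpy as np

# Example taps: the 8-tap DCT-based half-pel luma filter from HEVC
# (assumed for illustration only), normalized by 64.
HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

def interp_half_pel_row(row):
    """Horizontal half-pel samples between the integer pixels of a row."""
    row = row.astype(np.int32)
    # Sliding dot product of the taps with each 8-pixel window.
    out = np.convolve(row, HALF_PEL_TAPS[::-1], mode="valid")
    # Round, normalize by 64, and clip to the 8-bit sample range.
    return np.clip((out + 32) >> 6, 0, 255)

# Two half-pel samples near a 100 -> 200 edge (note the filter ringing).
print(interp_half_pel_row(np.array([100, 100, 100, 200, 200, 200, 200, 200, 200])))
```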
- the motion prediction unit may perform motion prediction based on the reference picture interpolated by the reference picture interpolating unit.
- Various methods such as Full Search-based Block Matching Algorithm (FBMA), Three Step Search (TSS), and New Three-Step Search Algorithm (NTS) can be used as methods for calculating motion vectors.
- The motion vector may have a value in units of 1/2 or 1/4 pixel based on the interpolated pixels.
- The motion prediction unit can predict the current prediction unit while varying the motion prediction method.
- Various methods such as a skip method, a merge method, an AMVP (Advanced Motion Vector Prediction) method, and an Intra Block Copy method can be used as the motion prediction method.
- the intra prediction unit 125 can generate a prediction unit based on reference pixel information around the current block which is pixel information in the current picture.
- When a neighboring block of the current prediction unit is a block on which inter prediction has been performed, so that its reference pixels are inter-predicted pixels, the reference pixels of the inter-predicted block may be replaced with reference pixel information of a neighboring block on which intra prediction has been performed. That is, when a reference pixel is unavailable, the unavailable reference pixel information may be replaced with at least one of the available reference pixels.
- The intra prediction modes may include a directional prediction mode, which uses reference pixel information according to a prediction direction, and a non-directional mode, which does not use direction information for prediction.
- The mode for predicting luminance information may differ from the mode for predicting chrominance information, and the intra prediction mode information used to predict the luminance information, or the predicted luminance signal information, may be utilized to predict the chrominance information.
- When performing intra prediction, if the size of the prediction unit is the same as the size of the transform unit, intra prediction can be performed on the prediction unit based on the pixels on the left side, the upper-left side, and the top of the prediction unit.
- However, if the size of the prediction unit differs from the size of the transform unit when performing intra prediction, intra prediction can be performed using reference pixels based on the transform unit. In addition, intra prediction using NxN partitioning may be used only for the minimum coding unit.
- the intra prediction method can generate a prediction block after applying an AIS (Adaptive Intra Smoothing) filter to the reference pixel according to the prediction mode.
- the type of the AIS filter applied to the reference pixel may be different.
- the intra prediction mode of the current prediction unit can be predicted from the intra prediction mode of the prediction unit existing around the current prediction unit.
- When the intra prediction mode of the current prediction unit is predicted using mode information from a neighboring prediction unit, if the intra prediction mode of the current prediction unit is the same as that of the neighboring prediction unit, information indicating that the two prediction modes are identical can be transmitted using predetermined flag information.
- If the prediction mode of the current prediction unit differs from that of the neighboring prediction unit, the prediction mode information of the current block can be encoded by entropy encoding.
- A residual block may be generated that includes residual information, i.e., the difference between the prediction unit generated by the prediction units 120 and 125 and the original block of the prediction unit.
- the generated residual block may be input to the transform unit 130.
- The transform unit 130 may transform the residual block, which contains the residual information between the original block and the prediction unit generated through the prediction units 120 and 125, using a transform method such as DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), or KLT.
- the decision to apply the DCT, DST, or KLT to transform the residual block may be based on the intra prediction mode information of the prediction unit used to generate the residual block.
- The quantization unit 135 may quantize the values transformed into the frequency domain by the transform unit 130. The quantization coefficient may vary depending on the block or the importance of the image. The values calculated by the quantization unit 135 may be provided to the inverse quantization unit 140 and the reordering unit 160.
- the reordering unit 160 can reorder the coefficient values with respect to the quantized residual values.
- the reordering unit 160 may change the two-dimensional block type coefficient to a one-dimensional vector form through a coefficient scanning method.
- For example, the reordering unit 160 may scan from the DC coefficient to coefficients in the high-frequency region using a zig-zag scan method and change them into a one-dimensional vector form.
- Depending on the size of the transform unit and the intra prediction mode, a vertical scan that scans two-dimensional block-type coefficients in the column direction or a horizontal scan that scans them in the row direction may be used instead of the zig-zag scan. That is, which of the zig-zag scan, vertical scan, and horizontal scan is used can be determined according to the size of the transform unit and the intra prediction mode.
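- A small sketch of the coefficient scanning described above, generating a zig-zag order and flattening a two-dimensional block of quantized coefficients into a one-dimensional vector (the exact scan pattern of a given codec may differ):

```python
def zigzag_order(n):
    """Scan positions from the DC coefficient toward high frequencies,
    alternating direction along each anti-diagonal."""
    return sorted(((x, y) for y in range(n) for x in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[1] if (p[0] + p[1]) % 2 else p[0]))

def scan_block(block):
    """Flatten a square 2D coefficient block into a 1D vector."""
    n = len(block)
    return [block[y][x] for x, y in zigzag_order(n)]

block = [[9, 5, 2, 0],
         [4, 3, 0, 0],
         [2, 0, 0, 0],
         [0, 0, 0, 0]]
print(scan_block(block))
# [9, 5, 4, 2, 3, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```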
- the entropy encoding unit 165 may perform entropy encoding based on the values calculated by the reordering unit 160.
- various encoding methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC) may be used.
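- Of the named methods, zero-order Exponential Golomb has a particularly compact definition: an unsigned value v is coded as the binary form of v+1, preceded by as many zeros as that form has bits minus one. A minimal sketch (the standard ue(v) binarization; whether this patent uses it for any particular syntax element is not stated):

```python
def exp_golomb_encode(v):
    """Zero-order Exp-Golomb code for an unsigned value v."""
    code = v + 1
    leading_zeros = code.bit_length() - 1
    return "0" * leading_zeros + format(code, "b")

for v in range(5):
    print(v, exp_golomb_encode(v))
# 0 -> 1, 1 -> 010, 2 -> 011, 3 -> 00100, 4 -> 00101
```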
- The entropy encoding unit 165 may receive and encode various information from the reordering unit 160 and the prediction units 120 and 125, such as residual coefficient information and block type information of the coding unit, prediction mode information, division unit information, prediction unit information, transmission unit information, motion vector information, reference frame information, block interpolation information, and filtering information.
- the entropy encoding unit 165 can entropy-encode the coefficient value of the encoding unit input by the reordering unit 160.
- The inverse quantization unit 140 and the inverse transform unit 145 inversely quantize the values quantized by the quantization unit 135 and inversely transform the values transformed by the transform unit 130.
- The residual values generated by the inverse quantization unit 140 and the inverse transform unit 145 may be combined with the prediction unit predicted through the motion estimation unit, motion compensation unit, and intra prediction unit included in the prediction units 120 and 125 to generate a reconstructed block.
- the filter unit 150 may include at least one of a deblocking filter, an offset correction unit, and an adaptive loop filter (ALF).
- The deblocking filter can remove block distortion caused by boundaries between blocks in the reconstructed picture. To determine whether to perform deblocking, whether to apply the deblocking filter to the current block may be decided based on the pixels included in a few columns or rows of the block. When a deblocking filter is applied to a block, a strong filter or a weak filter may be applied according to the required deblocking filtering strength. In applying the deblocking filter, horizontal filtering and vertical filtering may be processed in parallel.
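- The filter-on/off decision can be illustrated with a simplified, HEVC-flavoured check on the pixels of one row crossing a block boundary. This is a sketch only: real deblocking examines several rows, derives a boundary strength, and also chooses between the strong and weak filters, and the threshold beta here is illustrative.

```python
def apply_deblocking(p, q, beta):
    """Decide whether to filter one row across a block boundary.
    p = [p0, p1, p2] inside the left block (p0 next to the edge),
    q = [q0, q1, q2] inside the right block (q0 next to the edge)."""
    dp = abs(p[2] - 2 * p[1] + p[0])   # second derivative on the p side
    dq = abs(q[2] - 2 * q[1] + q[0])   # second derivative on the q side
    return dp + dq < beta              # smooth signal -> filter the edge

# A flat area with a step at the edge gets filtered; texture does not.
print(apply_deblocking([100, 100, 100], [120, 120, 120], beta=8))  # True
print(apply_deblocking([100, 60, 130], [120, 70, 10], beta=8))     # False
```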
- the offset correction unit may correct the offset of the deblocked image with respect to the original image in units of pixels.
- pixels included in an image are divided into a predetermined number of areas, and then an area to be offset is determined and an offset is applied to the area.
- Adaptive Loop Filtering can be performed based on a comparison between the filtered reconstructed image and the original image. After dividing the pixels included in the image into a predetermined group, one filter to be applied to the group may be determined and different filtering may be performed for each group.
- the information related to whether to apply the ALF may be transmitted for each coding unit (CU), and the shape and the filter coefficient of the ALF filter to be applied may be changed according to each block. Also, an ALF filter of the same type (fixed form) may be applied irrespective of the characteristics of the application target block.
- The memory 155 may store the reconstructed block or picture calculated through the filter unit 150, and the stored reconstructed block or picture may be provided to the prediction units 120 and 125 when inter prediction is performed.
- FIG. 2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present invention.
- The image decoder 200 may include an entropy decoding unit 210, a reordering unit 215, an inverse quantization unit 220, an inverse transform unit 225, prediction units 230 and 235, a filter unit 240, and a memory 245.
- the input bitstream may be decoded in a procedure opposite to that of the image encoder.
- the entropy decoding unit 210 can perform entropy decoding in a procedure opposite to that in which entropy encoding is performed in the entropy encoding unit of the image encoder. For example, various methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC) may be applied in accordance with the method performed by the image encoder.
- the entropy decoding unit 210 may decode information related to intra prediction and inter prediction performed in the encoder.
- The reordering unit 215 can reorder the bitstream entropy-decoded by the entropy decoding unit 210 based on the reordering method used by the encoder.
- The coefficients expressed in one-dimensional vector form can be reconstructed and rearranged into coefficients in two-dimensional block form.
- the reordering unit 215 can perform reordering by receiving information related to the coefficient scanning performed by the encoding unit and performing a reverse scanning based on the scanning order performed by the encoding unit.
- the inverse quantization unit 220 can perform inverse quantization based on the quantization parameters provided by the encoder and the coefficient values of the re-arranged blocks.
- The inverse transform unit 225 may perform inverse DCT, inverse DST, or inverse KLT on the quantization result produced by the image encoder, corresponding to the DCT, DST, or KLT performed by the transform unit.
- The inverse transform can be performed based on the transform unit determined by the image encoder. In the inverse transform unit 225 of the image decoder, a transform technique (e.g., DCT, DST, or KLT) may be selectively applied according to information such as the prediction method, the size of the current block, and the prediction direction.
- the prediction units 230 and 235 can generate a prediction block based on the prediction block generation related information provided by the entropy decoding unit 210 and the previously decoded block or picture information provided in the memory 245.
- As in the encoder, when the size of the prediction unit differs from the size of the transform unit in performing intra prediction, intra prediction is performed using reference pixels based on the transform unit. Intra prediction using NxN division may also be used, but only for the minimum coding unit.
- the prediction units 230 and 235 may include a prediction unit determination unit, an inter prediction unit, and an intra prediction unit.
- The prediction unit determination unit receives various information from the entropy decoding unit 210, such as prediction unit information, prediction mode information of the intra prediction method, and motion prediction related information of the inter prediction method, identifies the prediction unit in the current coding unit, and determines whether the prediction unit performs inter prediction or intra prediction.
- The inter prediction unit 230 may perform inter prediction on the current prediction unit based on information included in at least one of the previous picture or the subsequent picture of the current picture containing the current prediction unit, using the information necessary for inter prediction of the current prediction unit provided by the image encoder. Alternatively, inter prediction may be performed based on information of a partial region already reconstructed within the current picture containing the current prediction unit.
- In order to perform inter prediction, it may be determined, on a coding unit basis, which of skip mode, merge mode, AMVP mode, or intra block copy mode is used as the motion prediction method of the prediction unit included in the coding unit.
- the intra prediction unit 235 can generate a prediction block based on the pixel information in the current picture. If the prediction unit is a prediction unit that performs intra prediction, the intra prediction can be performed based on the intra prediction mode information of the prediction unit provided by the image encoder.
- the intraprediction unit 235 may include an AIS (Adaptive Intra Smoothing) filter, a reference pixel interpolator, and a DC filter.
- the AIS filter performs filtering on the reference pixels of the current block and can determine whether to apply the filter according to the prediction mode of the current prediction unit.
- the AIS filtering can be performed on the reference pixel of the current block using the prediction mode of the prediction unit provided in the image encoder and the AIS filter information. When the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter may not be applied.
- When the prediction mode of the prediction unit is a mode that performs intra prediction based on pixel values obtained by interpolating reference pixels, the reference pixel interpolator may interpolate the reference pixels to generate reference pixels at fractional (sub-integer) positions.
- When the prediction mode of the current prediction unit is a mode that generates the prediction block without interpolating reference pixels, the reference pixels may not be interpolated.
- the DC filter can generate a prediction block through filtering when the prediction mode of the current block is the DC mode.
- the restored block or picture may be provided to the filter unit 240.
- the filter unit 240 may include a deblocking filter, an offset correction unit, and an ALF.
- The image encoder may provide information on whether a deblocking filter has been applied to the corresponding block or picture and, if applied, whether a strong filter or a weak filter was used.
- The deblocking filter of the video decoder receives the deblocking filter related information provided by the video encoder, and the video decoder can perform deblocking filtering on the corresponding block.
- the offset correction unit may perform offset correction on the reconstructed image based on the type of offset correction applied to the image and the offset value information during encoding.
- the ALF can be applied to an encoding unit on the basis of ALF application information and ALF coefficient information provided from an encoder.
- ALF information may be provided in a specific parameter set.
- the memory 245 may store the reconstructed picture or block to be used as a reference picture or a reference block, and may also provide the reconstructed picture to the output unit.
- the current block indicates a block to be coded / decoded.
- Depending on the encoding/decoding step, the current block may denote a coding tree block (or coding tree unit), a coding block (or coding unit), a transform block (or transform unit), a prediction block (or prediction unit), and the like.
- the basic block may be referred to as a coding tree unit.
- the coding tree unit may be defined as a coding unit of the largest size allowed by a sequence or a slice. Information regarding whether the coding tree unit is square or non-square or about the size of the coding tree unit can be signaled through a sequence parameter set, a picture parameter set, or a slice header.
- the coding tree unit can be divided into smaller size partitions. In this case, if the partition generated by dividing the coding tree unit is depth 1, the partition created by dividing the partition having depth 1 can be defined as depth 2. That is, the partition created by dividing the partition having the depth k in the coding tree unit can be defined as having the depth k + 1.
- a partition of arbitrary size generated as the coding tree unit is divided can be defined as a coding unit.
- the coding unit may be recursively divided or divided into basic units for performing prediction, quantization, transformation, or in-loop filtering, and the like.
- A partition of arbitrary size generated as the coding unit is divided may be defined as a coding unit, or as a transform unit or a prediction unit, which are basic units for performing prediction, quantization, transform, or in-loop filtering.
- Partitioning of the coding tree unit or a coding unit may be performed based on at least one of a vertical line or a horizontal line, and the number of vertical or horizontal lines partitioning the coding tree unit or coding unit may be at least one. For example, the coding tree unit or coding unit may be divided into two partitions using one vertical line or one horizontal line, or into three partitions using two vertical lines or two horizontal lines. Alternatively, the coding tree unit or coding unit may be divided into four partitions of half width and half height using one vertical line and one horizontal line.
- When the coding tree unit or a coding unit is divided into a plurality of partitions using at least one vertical line or at least one horizontal line, the partitions may have a uniform size or different sizes; alternatively, any one partition may have a size different from the remaining partitions.
- FIG. 3 illustrates an example in which a coding block is hierarchically divided based on a tree structure according to an embodiment to which the present invention is applied.
- the coding block may be hierarchically partitioned based on at least one of a quad tree, a triple tree, and a binary tree.
- Quad tree-based partitioning is a method in which a 2Nx2N coding block is divided into four NxN coding blocks.
- Triple tree-based partitioning is a method in which one coding block is divided into three coding blocks, and binary tree-based partitioning is a method in which one coding block is divided into two coding blocks. Even when triple tree-based or binary tree-based partitioning is performed, a square coding block may exist at a lower depth.
- Alternatively, after triple tree-based or binary tree-based partitioning is performed, the generation of square coding blocks at lower depths may be restricted.
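- The child block sizes produced by each split type can be summarised as in the sketch below. The 1/4-1/2-1/4 triple tree ratio is the common convention (as in VVC) and is assumed here, since the text above only states that three blocks are produced.

```python
def child_blocks(w, h, split):
    """Block sizes produced by each split type of a WxH coding block."""
    if split == "QT":            # quad tree: four half-size children
        return [(w // 2, h // 2)] * 4
    if split == "BT_HOR":        # binary tree, horizontal: two halves
        return [(w, h // 2)] * 2
    if split == "BT_VER":        # binary tree, vertical
        return [(w // 2, h)] * 2
    if split == "TT_HOR":        # triple tree: assumed 1/4, 1/2, 1/4
        return [(w, h // 4), (w, h // 2), (w, h // 4)]
    if split == "TT_VER":
        return [(w // 4, h), (w // 2, h), (w // 4, h)]
    raise ValueError(split)

print(child_blocks(32, 32, "TT_VER"))  # [(8, 32), (16, 32), (8, 32)]
```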
- Binary tree-based partitioning may be limited to either a symmetric or an asymmetric partition.
- configuring the coding tree unit as a square block corresponds to quad tree CU partitioning
- configuring the coding tree unit as a symmetric non-square block may correspond to binary tree partitioning.
- Constructing the coding tree unit as a square block and a symmetric non-square block may correspond to quad and binary tree CU partitioning.
- For a coding block divided based on a binary tree, triple tree-based partitioning or binary tree-based partitioning may be allowed, but only one of horizontal or vertical partitioning may be limitedly allowed.
- the additional partitioning or additional partitioning direction may be restricted with respect to the coding block divided on the basis of the binary tree.
- Of the two coding blocks generated when a coding block is divided based on a binary tree, let the index of the coding block that precedes in coding order be 0 (hereinafter, coding block index 0) and the index of the coding block that follows be 1 (hereinafter, coding block index 1).
- When binary tree-based partitioning is applied to both the coding block with index 0 and the coding block with index 1, the binary tree-based partitioning direction of the coding block with index 1 may be determined according to the binary tree-based partitioning direction of the coding block with index 0.
- Specifically, the binary tree-based partitioning of the coding block with index 1 may be limited to a direction different from that of the coding block with index 0. That is, it may be restricted that the coding blocks with index 0 and index 1 are not both divided into square partitions, since that division would be identical to quad tree partitioning. In this case, encoding/decoding of information indicating the binary tree partitioning direction of the coding block with index 1 can be omitted.
- Triple tree-based partitioning can be performed on a coded block where quadtree-based partitioning is no longer performed.
- For a coding block partitioned based on a triple tree, it may be set that at least one of quad tree-based, triple tree-based, or binary tree-based partitioning is no longer performed.
- a triple tree-based partition or a binary tree-based partition may be allowed for a coding block divided based on a triple tree, but only one of horizontal or vertical partitioning may be limitedly permitted.
- the horizontal direction partitioning or the vertical direction partitioning may be restricted for the partition having the largest size among the coding blocks generated due to the triple tree-based partitioning.
- Specifically, the largest partition among the coding blocks generated by triple tree-based partitioning may be prohibited from binary tree partitioning in the same direction as the triple tree partitioning direction of the upper depth, and from triple tree partitioning in that same direction.
- encoding / decoding of the information indicating the binary tree division direction or the triple tree division direction may be omitted for the largest partition among the coding blocks divided based on the triple tree.
- Depending on the size or shape of the current block, binary tree-based or triple tree-based partitioning may be limited.
- Here, the size of the current block may be expressed based on at least one of the width or height of the current block, the minimum/maximum of the width and height, the sum of the width and height, or the product of the width and height.
- the predefined value may be an integer such as 16, 32, 64, or 128.
- Binary tree-based or triple tree-based partitioning may not be allowed if the width-to-height ratio of the current block is greater than the predefined value or smaller than its reciprocal. If the predefined value is 1, binary tree-based or triple tree-based partitioning may be allowed only when the current block is a square block whose width and height are equal.
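- A hypothetical gate combining the size and ratio constraints above might look as follows; all thresholds here are illustrative, not values taken from this document.

```python
def bt_tt_allowed(width, height, max_ratio=4, min_size=8, max_size=128):
    """Hypothetical check of whether binary/triple tree splitting is
    allowed, based on block size and aspect ratio (sketch only)."""
    ratio = max(width, height) / min(width, height)
    if ratio > max_ratio:                 # overly elongated block
        return False
    if min(width, height) <= min_size:    # too small to split further
        return False
    if max(width, height) > max_size:     # must be quad-tree split first
        return False
    return True

print(bt_tt_allowed(64, 8))   # False: 8:1 aspect ratio exceeds the limit
print(bt_tt_allowed(32, 16))  # True
```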
- the division of the lower depth may be determined depending on the division type of the upper depth. For example, if binary tree-based partitioning is allowed in two or more depths, only binary tree-based partitioning of the same type as the binary tree partitioning of the upper depths may be allowed in the lower depths. For example, if the binary tree-based partitioning is performed in the 2NxN type in the upper depth, 2NxN type binary tree-based partitioning can be performed even in the lower depth. Alternatively, if the binary tree-based partitioning is performed in the Nx2N type in the upper depth, the binary tree-based partitioning in the Nx2N type may be allowed in the lower depths.
- For a sequence, slice, coding tree unit, or coding unit, it may be limited such that only a specific type of binary tree-based partitioning or a specific type of triple tree-based partitioning is used.
- As an example, only binary tree-based partitioning in the form of 2NxN or Nx2N may be allowed for the coding tree unit.
- the allowed partition type may be predefined in an encoder or a decoder, or may be signaled through a bitstream by encoding information on an acceptable partition type or an unacceptable partition type.
- FIG. 5 is a diagram showing an example in which only a specific type of binary tree-based partition is allowed.
- FIG. 5A shows an example in which only binary tree-based partitioning in the form of Nx2N is allowed
- FIG. 5B shows an example in which only binary tree-based partitioning in the form of 2NxN is allowed.
- In order to implement adaptive partitioning based on the quad tree or binary tree, information indicating quad tree-based partitioning, information on the size/depth of coding blocks for which quad tree-based partitioning is allowed, information indicating binary tree-based partitioning, information on the size/depth of coding blocks for which binary tree-based partitioning is allowed, information on the size/depth of coding blocks for which binary tree-based partitioning is not allowed, information on whether the binary tree-based partitioning is vertical or horizontal, and the like can be used.
- In addition, information on the number of times binary tree/triple tree partitioning is permitted, the depth at which binary tree/triple tree partitioning is allowed, or the number of depths at which binary tree/triple tree partitioning is allowed can be obtained.
- the information may be encoded in units of a coding tree unit or a coding unit, and may be transmitted to a decoder through a bitstream.
- As an example, a syntax element 'max_binary_depth_idx_minus1' indicating the maximum depth at which binary tree partitioning is allowed may be encoded/decoded through a bitstream.
- max_binary_depth_idx_minus1 + 1 may indicate the maximum depth at which the binary tree division is allowed.
- As an example, suppose binary tree partitioning is performed for a coding unit of depth 2 and a coding unit of depth 3. Then at least one of information indicating the number of times binary tree partitioning has been performed in the coding tree unit (two times), information indicating the maximum depth at which binary tree partitioning is allowed in the coding tree unit (depth 3), or information indicating the number of depths at which binary tree partitioning is allowed in the coding tree unit (two depths: depth 2 and depth 3) can be encoded/decoded through a bitstream.
- the information may be encoded in a sequence, picture, or slice unit and transmitted through a bitstream.
- As another example, for each sequence, picture, or slice, the depth at which binary tree/triple tree partitioning is permitted, or the number of depths at which it is permitted, may be predefined.
- For example, the first slice and the second slice may differ in at least one of the number of binary tree/triple tree partitions, the maximum depth at which binary tree/triple tree partitioning is allowed, or the number of depths at which binary tree/triple tree partitioning is allowed.
- As an example, binary tree partitioning may be allowed at only one depth in the first slice, while binary tree partitioning is allowed at two depths in the second slice.
- As another example, at least one of the number of times binary tree/triple tree partitioning is allowed, the depth at which it is allowed, or the number of depths at which it is allowed may be set differently according to the temporal level identifier (TemporalID) of the slice or picture.
- Here, the temporal level identifier (TemporalID) is used to identify each of a plurality of image layers having scalability in at least one of view, spatial, temporal, or picture quality.
- the first coding block 300 having a split depth k may be divided into a plurality of second coding blocks based on a quad tree.
- the second coding blocks 310 to 340 may be square blocks having half the width and height of the first coding block, and the division depth of the second coding block may be increased to k + 1.
- the second coding block 310 having the division depth k + 1 may be divided into a plurality of third coding blocks having a division depth k + 2.
- the division of the second coding block 310 may be performed using a quadtree or a binary tree selectively according to the division method.
- the partitioning scheme may be determined based on at least one of information indicating partitioning based on a quadtree or information indicating partitioning based on a binary tree.
- When the second coding block 310 is divided based on a quad tree, the second coding block 310 is divided into four third coding blocks 310a having half the width and height of the second coding block, and the split depth of the third coding blocks 310a can be increased to k+2.
- On the other hand, when the second coding block 310 is divided based on a binary tree, the second coding block 310 may be divided into two third coding blocks. At this time, each of the two third coding blocks is a non-square block whose width or height is half that of the second coding block, and the split depth can be increased to k+2.
- the second coding block may be determined as a non-square block in the horizontal direction or the vertical direction according to the dividing direction, and the dividing direction may be determined based on information on whether the dividing based on the binary tree is the vertical direction or the horizontal direction.
- the second coding block 310 may be determined as a last coding block that is not further divided based on a quadtree or a binary tree.
- the coding block may be used as a prediction block or a transform block.
- Like the division of the second coding block 310, the third coding block 310a may be determined as a leaf coding block or may be further divided based on a quad tree or binary tree.
- The third coding block 310b divided based on the binary tree may be further divided into vertical-direction coding blocks 310b-2 or horizontal-direction coding blocks 310b-3 based on the binary tree, and the split depth of those coding blocks can be increased to k+3.
- Alternatively, the third coding block 310b may be determined as a leaf coding block 310b-1 that is no longer divided based on the binary tree, and the coding block 310b-1 may be used as a prediction block or a transform block.
- The above-described partitioning process may be performed limitedly based on at least one of information on the size/depth of coding blocks for which quad tree-based partitioning is allowed, information on the size/depth of coding blocks for which binary tree-based partitioning is allowed, or information on the size/depth of coding blocks for which binary tree-based partitioning is not allowed.
- The number of candidate sizes that a coding block can have may be limited to a predetermined number, or the size of the coding block within a predetermined unit may have a fixed value.
- the size of a coding block in a sequence or the size of a coding block in a picture may be limited to 256x256, 128x128, or 32x32.
- Information indicating the size of a sequence or an intra-picture coding block may be signaled through a sequence header or a picture header.
- As a result of the partitioning based on the quad tree, binary tree, and triple tree, the coding unit may be a square or a rectangle of arbitrary size.
- The coding block may be encoded/decoded using at least one of skip mode, intra prediction, or inter prediction.
- intra prediction or inter prediction can be performed in the same size as the coding block or in units smaller than the coding block through the division of the coding block.
- a prediction block can be determined through predictive division of the coding block.
- Predictive partitioning of the coded block can be performed by a partition mode (Part_mode) indicating the partition type of the coded block.
- the size or shape of the prediction block may be determined according to the partition mode of the coding block. For example, the size of the prediction block determined according to the partition mode may be equal to or smaller than the size of the coding block.
- FIG. 7 is a diagram illustrating a partition mode that can be applied to a coding block when a coding block is coded by inter-picture prediction.
- When a coding block is coded by inter-picture prediction, one of eight partition modes may be applied to the coding block, as in the example shown in FIG. 7.
- When a coding block is coded by intra-picture prediction, the partition mode PART_2Nx2N or PART_NxN can be applied to the coding block.
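- The eight inter partition modes can be enumerated as in HEVC, which FIG. 7 appears to follow; the mapping below from Part_mode to prediction unit sizes is a sketch under that assumption (N is half the coding block size, and the asymmetric modes are assumed to use a quarter split):

```python
def part_mode_to_pus(part_mode, size):
    """Prediction unit sizes inside a size x size (2Nx2N) coding block
    for each HEVC-style inter partition mode (illustrative mapping)."""
    n, q = size // 2, size // 4
    modes = {
        "PART_2Nx2N": [(size, size)],
        "PART_2NxN":  [(size, n)] * 2,
        "PART_Nx2N":  [(n, size)] * 2,
        "PART_NxN":   [(n, n)] * 4,
        "PART_2NxnU": [(size, q), (size, size - q)],   # asymmetric modes
        "PART_2NxnD": [(size, size - q), (size, q)],
        "PART_nLx2N": [(q, size), (size - q, size)],
        "PART_nRx2N": [(size - q, size), (q, size)],
    }
    return modes[part_mode]

print(part_mode_to_pus("PART_2NxnU", 32))  # [(32, 8), (32, 24)]
```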
- PART_NxN may be applied when the coding block has a minimum size.
- the minimum size of the coding block may be one previously defined in the encoder and the decoder.
- information regarding the minimum size of the coding block may be signaled via the bitstream.
- the minimum size of the coding block is signaled through the slice header, so that the minimum size of the coding block per slice can be defined.
- The prediction block may have a size from 64x64 down to 4x4.
- However, when a coding block is coded by inter-picture prediction, the prediction block may be prevented from having a 4x4 size in order to reduce memory bandwidth when performing motion compensation.
- FIG. 8 is a flowchart illustrating an inter prediction method according to an embodiment of the present invention.
- the motion information of the current block can be determined (S810).
- the motion information of the current block may include at least one of a motion vector relating to the current block, a reference picture index of the current block, or an inter prediction direction of the current block.
- the motion information of the current block may be obtained based on at least one of information signaled through a bitstream or motion information of a neighboring block neighboring the current block.
- FIG. 9 is a diagram illustrating a process of deriving motion information of a current block when a merge mode is applied to the current block.
- the merge mode indicates a method of deriving motion information of a current block from a neighboring block.
- a spatial merge candidate may be derived from the spatially neighboring block of the current block (S910).
- Spatial neighboring blocks may include at least one of a block at the top of the current block, a block on its left side, or a block adjacent to a corner of the current block (e.g., the top left corner, the top right corner, or the bottom left corner).
- FIG. 10 is a diagram showing an example of a spatial neighboring block.
- the spatial neighboring blocks may include a neighboring block A1 adjacent to the left side of the current block, a neighboring block B1 adjacent to the top of the current block, a neighboring block A0 adjacent to the lower left corner of the current block, a neighboring block B0 adjacent to the upper right corner of the current block, and a neighboring block B2 adjacent to the upper left corner of the current block.
- The example shown in FIG. 10 may be further expanded so that at least one of a block adjacent to the upper left sample of the current block, a block adjacent to the upper center sample, or a block adjacent to the upper right sample of the current block is defined as a block adjacent to the top of the current block, and at least one of a block adjacent to the upper left sample of the current block, a block adjacent to the left center sample, or a block adjacent to the lower left sample of the current block is defined as a block adjacent to the left side of the current block.
- a spatial merge candidate may be derived from a spatial non-neighboring block that is not adjacent to the current block. For example, the spatial merge candidate of the current block may be derived using at least one of a block located on the same vertical line as a block adjacent to the top, upper right corner, or upper left corner of the current block; a block located on the same horizontal line as a block adjacent to the left side, lower left corner, or upper left corner of the current block; or a block located on the same diagonal line as a block adjacent to a corner of the current block. As a specific example, if an adjacent block adjacent to the current block cannot be used as a merge candidate, a block that is not adjacent to the current block can be used as a merge candidate of the current block.
- the motion information of the spatial merge candidate may be set to be the same as the motion information of the spatial neighboring block.
- Spatial merge candidates can be determined by searching for neighboring blocks in a predetermined order. For example, a search for spatial merge candidates can be performed in the order of the A1, B1, B0, A0, and B2 blocks. At this time, the B2 block can be used when at least one of the remaining blocks (i.e., A1, B1, B0, and A0) is not present or is coded in the intra prediction mode.
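- As a non-normative illustration, the candidate search described above may be sketched as follows; the `Block` type and its `is_intra`/`motion_info` fields are hypothetical stand-ins, and only the A1, B1, B0, A0, B2 search order is taken from the text.

```python
from collections import namedtuple

# Hypothetical stand-in for a decoded neighboring block.
Block = namedtuple("Block", ["is_intra", "motion_info"])

def spatial_merge_candidates(neighbors, max_spatial=4):
    """Search A1, B1, B0, A0 and fall back to B2, as described above.

    `neighbors` maps position labels to Block instances (or None when
    the block is absent)."""
    candidates = []
    for pos in ("A1", "B1", "B0", "A0"):
        blk = neighbors.get(pos)
        if blk is not None and not blk.is_intra:
            candidates.append(blk.motion_info)
    # B2 is examined only when at least one of the preceding blocks was
    # missing or intra-coded.
    if len(candidates) < 4:
        b2 = neighbors.get("B2")
        if b2 is not None and not b2.is_intra:
            candidates.append(b2.motion_info)
    return candidates[:max_spatial]
```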
- the search order of the spatial merge candidate may be as previously defined in the encoder / decoder. Alternatively, the search order of the spatial merge candidate may be determined adaptively according to the size or type of the current block. Alternatively, the search order of the spatial merge candidate may be determined based on the information signaled through the bit stream.
- the temporal merge candidate may be derived from the temporally neighboring block of the current block (S920).
- the temporal neighbor block may refer to a co-located block included in the collocated picture.
- a collocated picture has a picture order count (POC) different from the current picture including the current block.
- the collocated picture can be determined as a picture having a predefined index in the reference picture list or a picture having the smallest output order (POC) difference from the current picture.
- the collocated picture may be determined by the information signaled from the bitstream.
- the information signaled from the bitstream may include at least one of information indicating a reference picture list (for example, an L0 reference picture list or an L1 reference picture list) including the collocated picture and/or an index indicating the collocated picture in the reference picture list.
- the information for determining the collocated picture may be signaled in at least one of a picture parameter set, a slice header, or a block level.
- the temporal merge candidate motion information can be determined based on the motion information of the collocated block.
- the temporal merge candidate motion vector may be determined based on the motion vector of the collocated block.
- the temporal merge candidate motion vector may be set equal to the motion vector of the collocated block.
- Alternatively, the motion vector of the temporal merge candidate may be derived by scaling the motion vector of the collocated block based on the output order (POC) difference between the current picture and the reference picture of the current block and/or the POC difference between the collocated picture and the reference picture of the collocated block.
- FIG. 11 is a diagram for explaining an example of deriving a motion vector of a temporal merge candidate.
- tb represents the POC difference between the current picture (curr_pic) and the reference picture of the current picture (curr_ref), and
- td represents the POC difference between the collocated picture (col_pic) and the reference picture of the collocated block (col_ref).
- the temporal merge candidate motion vector may be derived by scaling the motion vector of the collocated block (col_PU) based on tb and / or td.
- both the motion vector of the collocated block and a scaled version of that motion vector may be used as motion vectors of temporal merge candidates.
- a motion vector of a collocated block may be set as a motion vector of a first temporal merge candidate, and a value obtained by scaling a motion vector of the collocated block may be set as a motion vector of a second temporal merge candidate.
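- The scaling rule illustrated in FIG. 11 can be sketched as follows; this is a floating-point approximation (actual codecs typically use fixed-point arithmetic with clipping), and the example POC values are hypothetical.

```python
def scale_temporal_mv(col_mv, tb, td):
    """Scale the collocated block's motion vector by the POC-distance
    ratio tb/td described above."""
    if td == 0:
        return col_mv
    scale = tb / td
    return (round(col_mv[0] * scale), round(col_mv[1] * scale))

# Example: current picture POC 8 referencing POC 4 (tb = 4), collocated
# picture POC 12 referencing POC 4 (td = 8) halves the collocated vector.
print(scale_temporal_mv((8, -4), tb=4, td=8))  # -> (4, -2)
```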
- the inter prediction direction of the temporal merge candidate may be set equal to the inter prediction direction of the temporal neighbor block.
- the reference picture index of the temporal merge candidate may have a fixed value.
- the reference picture index of the temporal merge candidate may be set to '0'.
- the reference picture index of the temporal merging candidate may be adaptively determined based on at least one of the reference picture index of the spatial merge candidate and the reference picture index of the current picture.
- the collocated block may be determined as any one of the blocks within the block having the same position and size as the current block in the collocated picture, or as a block adjacent to the block having the same position and size as the current block.
- FIG. 12 is a diagram showing the positions of candidate blocks that can be used as collocated blocks.
- the candidate block may include at least one of a block adjacent to the upper left corner position of the current block in the collocated picture, a block adjacent to the center sample position of the current block, or a block adjacent to the lower left corner position of the current block.
- Specifically, the candidate blocks may include at least one of a block TL including the upper left sample position of the current block in the collocated picture, a block BR including the lower right sample position of the current block, a block H adjacent to the lower right corner of the current block, a block C3 including the center sample position of the current block, or a block C0 adjacent to the center sample of the current block (e.g., a block including a sample position spaced (-1, -1) apart from the center sample of the current block).
- a block including a position of a neighboring block adjacent to a predetermined boundary of a current block in the collocated picture may be selected as a collocated block.
- the number of temporal merge candidates can be one or more. As an example, based on one or more collocated blocks, one or more temporal merge candidates may be derived.
- the maximum number of temporal merge candidates may be encoded and signaled by the encoder.
- the maximum number of temporal merge candidates may be derived based on the maximum number of merge candidates that can be included in the merge candidate list and / or the maximum number of spatial merge candidates.
- the maximum number of temporal merge candidates may be determined based on the number of collocated blocks available.
- For example, any one of the C3 block or the H block may be determined as the collocated block. If the H block is available, the H block may be determined as the collocated block. On the other hand, when the H block is unavailable (for example, when the H block is coded by intra prediction, when the H block is not present, or when the H block is located outside the largest coding unit (LCU)), the C3 block may be determined as the collocated block.
- Alternatively, when a specific candidate block is unavailable, the unavailable block may be replaced with another available block.
- The replacement block may include at least one of a block adjacent to the center sample position of the current block in the collocated picture (e.g., C0 and/or C3) or a block adjacent to the upper left corner position of the current block (e.g., TL).
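- A minimal sketch of the availability rule described above, assuming the caller passes `None` for an H block that is intra-coded, absent, or outside the LCU:

```python
def select_collocated_block(h_block, c3_block):
    # Use H when it is usable; otherwise fall back to C3.
    return h_block if h_block is not None else c3_block
```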
- the merge candidate list including the spatial merge candidate and the temporal merge candidate may be generated (S930).
- Information regarding the maximum number of merge candidates may be signaled through the bitstream.
- information indicating the maximum number of merge candidates may be signaled through a sequence parameter or a picture parameter. For example, if the maximum number of merge candidates is 5, a total of five spatial and temporal merge candidates may be selected; for instance, four of five spatial merge candidates may be selected, and one of two temporal merge candidates may be selected. If the number of merge candidates included in the merge candidate list is smaller than the maximum number of merge candidates, a combined merge candidate obtained by combining two or more merge candidates, or a merge candidate having a (0, 0) motion vector (zero motion vector), may be added to the merge candidate list.
- the merge candidate may be included in the merge candidate list according to the predefined priority. The higher the priority, the smaller the index assigned to the merge candidate.
- the spatial merge candidate may be added to the merge candidate list earlier than the temporal merge candidate.
- For example, spatial merge candidates may be added to the merge candidate list in the order of the spatial merge candidate of the left neighboring block, the spatial merge candidate of the upper neighboring block, the spatial merge candidate of the block adjacent to the upper right corner, the spatial merge candidate of the block adjacent to the lower left corner, and the spatial merge candidate of the block adjacent to the upper left corner.
- the priority among the merge candidates may be determined according to the size or type of the current block. For example, if the current block is of a rectangular shape with a width greater than the height, the spatial merge candidate of the left neighboring block may be added to the merge candidate list before the spatial merge candidate of the upper neighboring block. On the other hand, if the current block is of a rectangular shape having a height greater than the width, the spatial merge candidate of the upper neighboring block may be added to the merge candidate list before the spatial merge candidate of the left neighboring block.
- the priority among the merge candidates may be determined according to the motion information of each merge candidate. For example, a merge candidate with bi-directional motion information may have a higher priority than a merge candidate with unidirectional motion information. Accordingly, the merge candidate having bidirectional motion information can be added to the merge candidate list before merge candidate having unidirectional motion information.
- the merge candidates may be rearranged.
- Rearrangement can be performed based on motion information of merge candidates.
- the rearrangement may be performed based on at least one of whether the merge candidate has bidirectional motion information, the size of the motion vector, or the temporal order (POC) between the current picture and the merge candidate's reference picture.
- POC temporal order
- For example, rearrangement can be performed so that a merge candidate having bidirectional motion information has a higher priority than a merge candidate having unidirectional motion information.
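- The list construction and reordering described above may be sketched as follows; the candidate representation (a dict with hypothetical "mv", "ref_idx", and "bi" fields) is illustrative, and combined merge candidates are omitted for brevity.

```python
def build_merge_list(spatial, temporal, max_candidates=5):
    """Spatial candidates first, then temporal, then zero-vector padding,
    then a stable reordering that favors bidirectional candidates."""
    merge_list = []
    for cand in spatial + temporal:
        if cand not in merge_list:            # drop exact duplicates
            merge_list.append(cand)
        if len(merge_list) == max_candidates:
            break
    while len(merge_list) < max_candidates:   # pad with zero motion vectors
        merge_list.append({"mv": (0, 0), "ref_idx": 0, "bi": False})
    # Stable sort: bidirectional candidates receive smaller merge indices.
    merge_list.sort(key=lambda c: not c["bi"])
    return merge_list
```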
- When the merge candidate list is generated, at least one of the merge candidates may be specified by a merge candidate index signaled through the bitstream (S940), and the motion information of the current block may be set to be the same as the motion information of the merge candidate specified by the merge candidate index (S950).
- the motion information of the current block can be set to be the same as the motion information of the spatial neighboring block.
- the motion information of the current block may be set to be the same as the motion information of the temporally neighboring block.
- FIG. 13 is a diagram illustrating a process of deriving motion information of a current block when the AMVP mode is applied to the current block.
- At least one of the inter prediction direction of the current block or the reference picture index can be decoded from the bitstream (S1310). That is, when the AMVP mode is applied, at least one of the inter prediction direction of the current block or the reference picture index may be determined based on the information encoded through the bit stream.
- the spatial motion vector candidate can be determined based on the motion vector of the spatial neighboring block of the current block (S1320).
- the spatial motion vector candidate may include at least one of a first spatial motion vector candidate derived from the top neighboring block of the current block or a second spatial motion vector candidate derived from the left neighboring block of the current block.
- the upper neighboring block may include at least one of the blocks adjacent to the top or the upper right corner of the current block, and
- the left neighboring block of the current block may include at least one of the blocks adjacent to the left side or the lower left corner of the current block.
- a block adjacent to the upper left corner of the current block may be treated as a top neighboring block, or it may be treated as a left neighboring block.
- a spatial motion vector candidate may be derived from a spatial non-neighboring block that is not adjacent to the current block. For example, the spatial motion vector candidate of the current block may be derived using at least one of a block located on the same vertical line as a block adjacent to the top, upper right corner, or upper left corner of the current block; a block located on the same horizontal line as a block adjacent to the left side, lower left corner, or upper left corner of the current block; or a block located on the same diagonal line as a block adjacent to a corner of the current block. If the spatial neighboring block is not available, the spatial non-neighboring block can be used to derive the spatial motion vector candidate.
- two or more spatial motion vector candidates may be derived using spatial neighbor blocks and spatial non-neighbor blocks.
- For example, a first spatial motion vector candidate and a second spatial motion vector candidate may be derived based on neighboring blocks adjacent to the current block, while a third spatial motion vector candidate and/or a fourth spatial motion vector candidate may be derived based on blocks that are not adjacent to the current block.
- If the reference picture of the current block and the reference picture of the spatial neighboring block are different, the spatial motion vector candidate may be obtained by scaling the motion vector of the spatial neighboring block.
- the temporal motion vector candidate can be determined based on the motion vector of the temporally neighboring block of the current block (S1330). If the reference picture between the current block and the temporal neighboring block is different, the temporal motion vector may be obtained by scaling the motion vector of the temporal neighboring block. At this time, temporal motion vector candidates can be derived only when the number of spatial motion vector candidates is equal to or less than a predetermined number.
- a motion vector candidate list including the spatial motion vector candidate and the temporal motion vector candidate may be generated (S1340).
- At least one of the motion vector candidates included in the motion vector candidate list can be specified based on information identifying at least one of the motion vector candidates in the list (S1350).
- the motion vector candidate specified by the information may be set as the motion vector prediction value of the current block and the motion vector difference value may be added to the motion vector prediction value to obtain the motion vector of the current block (S1360). At this time, the motion vector difference value can be parsed through the bit stream.
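- Steps S1350 and S1360 amount to the following sketch, where the candidate list, index, and difference values are assumed to have already been parsed from the bitstream:

```python
def amvp_motion_vector(mv_candidates, mvp_idx, mvd):
    """Select the motion vector predictor by index and add the signaled
    motion vector difference to it."""
    mvp = mv_candidates[mvp_idx]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# e.g., predictor (5, -3) plus parsed difference (2, 1) gives (7, -2)
print(amvp_motion_vector([(5, -3), (0, 0)], mvp_idx=0, mvd=(2, 1)))
```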
- the motion compensation for the current block can be performed based on the obtained motion information (S820). More specifically, motion compensation for the current block can be performed based on the inter prediction direction of the current block, the reference picture index, and the motion vector.
- the current block can be reconstructed based on the generated prediction sample. Specifically, a reconstructed sample can be obtained by summing the predicted sample and the residual sample of the current block.
- the motion information of the current block may be updated, or the motion information of the merge candidate or motion vector candidate from which the motion information of the current block is derived may be updated.
- motion information of a spatial merge candidate or a temporal merge candidate may be updated, or motion information of a current block derived from motion information of a spatial merge candidate or temporal merge candidate may be updated.
- the motion information to be updated may include not only the motion vector but also at least one of the reference picture index or the prediction direction.
- the updating of the motion information can be performed in the same way in the encoding apparatus and the decoding apparatus.
- Since the decoding apparatus performs the update in the same manner as the encoding apparatus, encoding of information indicating the difference between the motion information before and after the update can be omitted.
- Hereinafter, decoder-side refinement of a merge candidate's motion information is referred to as DMCR (decoder-side merge refinement), and decoder-side refinement of the motion vector of the current block is referred to as DMVR (decoder-side motion vector refinement).
- Whether DMCR or DMVR is performed may be determined based on at least one of the size of the current block, its shape (square/non-square), the block level (coding block/sub-block), the inter prediction mode, or whether bidirectional prediction is performed.
- the DMCR may be set to be performed only for merge candidates whose prediction direction is bidirectional, or may be set to be performed only for merge candidates that have the same motion information.
- DMVR may not be performed when the inter prediction mode of the current block is the AMVP mode, while at least one of DMCR and DMVR may be performed when the inter prediction mode of the current block is the merge mode. Contrary to the above example, it may be set to perform DMVR when the inter prediction mode of the current block is the AMVP mode.
- information indicating whether to perform DMCR or DMVR may be signaled through the bitstream.
- the information may be signaled in at least one of a block level, a slice level, or a picture level.
- the decoding apparatus can update the motion vector of the current block or merge candidate in the same manner as the encoding apparatus.
- In order to update the motion vector, an initial motion vector of the current block may be derived, and a refine motion vector (Refined MV) for a neighboring search point may be derived based on the initial motion vector of the current block.
- the initial motion vector of the current block may be derived based on the merge candidate or motion vector candidate having the lowest cost.
- Alternatively, the decoder can select the merge candidate or motion vector candidate based on index information signaled through the bitstream.
- the motion vector of the current block can be determined based on the derived refine motion vector when the refine motion vector for the surrounding search point is derived.
- a motion vector having a minimum cost (for example, a minimum RD cost) among the refine motion vectors for a plurality of search points is searched, and the motion vector having the searched minimum cost can be used as the motion vector of the current block.
- a refine motion vector (Refined MV) for the surrounding search point may be derived based on the motion vector of the merge candidate.
- When the refine motion vector for the surrounding search point is derived, the motion vector of the merge candidate can be determined based on the derived refine motion vector.
- For example, a motion vector having a minimum cost (for example, a minimum RD cost) among the refine motion vectors for a plurality of search points is searched, and the motion vector having the searched minimum cost can be used as the motion vector of the merge candidate.
- FIG. 14 is a diagram illustrating a method of updating a motion vector of a current block according to an embodiment of the present invention.
- a refinement mode for the current block may be determined (S1410).
- the refinement mode may represent at least one of Bi-lateral matching or Template matching.
- Bidirectional matching is a method of searching for motion vectors and reference blocks along the motion trajectory of the bidirectional motion vector, assuming that the bidirectional motion vectors of the current block are in the same motion trajectory.
- the motion vector of the current block can be updated based on a bidirectional template calculated from the bidirectional motion vectors of the current block. Specifically, the RD cost between the bidirectional template and the reference block following the motion trajectory of each bidirectional motion vector of the current block is measured, and the reference block having the minimum RD cost and the motion vector indicating that reference block can be searched.
- Template matching is a method of deriving motion information of a current block by searching, in the reference picture, an area that best matches a template adjacent to the current block. Specifically, the area in the reference picture having the lowest cost with respect to the neighboring template of the current block can be searched, and the block adjacent to the searched area can be set as a reference block of the current block.
- Bidirectional matching and template matching will be described later with reference to FIGS. 20 and 21.
- Information regarding the search point of the current block can be determined (S1420).
- the information on the search point of the current block may include at least one of a number of search points, a position of a search point, or a search order between search points.
- Information about the search point can be stored in a predefined format in the encoder and decoder. At this time, the previously defined search point related format can be called a Search Pattern or a Decoder Side Merge Refinement Search Pattern.
- the search pattern may include a diamond pattern, an adaptive cross pattern, a star pattern, and / or a hexagon pattern.
- Each search pattern may be different from at least one of the number of search points, the position of a search point, or the search order of search points.
- the encoder and the decoder may select at least one of the plurality of search pattern candidates and determine at least one of the number, position, or search order of the search points based on the selected search pattern.
- FIGS. 15 to 18 are diagrams showing a diamond pattern, an adaptive cross pattern, a star pattern, and a hexagon pattern, respectively.
- the diamond pattern indicates that search points are arranged in a diamond shape around a reference block specified by an initial motion vector. For example, if the upper left coordinate of the reference block is (0, 0), the search points of the diamond pattern may be placed at positions such as (0, 4), (2, 2), (4, 0), (2, -2), (0, -4), (-2, -2), (-4, 0), and (-2, 2).
- the adaptive cross pattern indicates that the search points are arranged in the form of two diamond shapes around the reference block specified by the initial motion vector. For example, if the upper left coordinate of the reference block is (0, 0), the search points of the adaptive cross pattern may be placed at positions such as (0, 2), (2, 0), (0, -2), (-2, 0), (0, 1), (1, 0), (0, -1), and (-1, 0).
- the star pattern indicates that the search points are arranged in a star shape centered on the reference block specified by the initial motion vector. For example, if the upper left coordinate of the reference block is (0, 0), the search points of the star pattern may be placed at positions such as (0, 4), (1, 1), (4, 0), (1, -1), (0, -4), (-1, -1), (-4, 0), and (-1, 1).
- the hexagon pattern indicates that search points are arranged in a hexagonal shape around a reference block specified by an initial motion vector. For example, if the upper left coordinate of the reference block is (0, 0), the search points of the hexagon pattern may be placed at positions such as (0, 2), (2, 2), (4, 0), (2, -2), (0, -2), (-2, -2), (-4, 0), and (-2, 2).
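- For illustration, the four patterns may be stored as offset tables relative to the position addressed by the initial motion vector; the coordinate sets below follow the examples above, with the omitted symmetric points reconstructed by assumption.

```python
# Hypothetical search-pattern tables; each entry is an (x, y) offset.
SEARCH_PATTERNS = {
    "diamond": [(0, 4), (2, 2), (4, 0), (2, -2),
                (0, -4), (-2, -2), (-4, 0), (-2, 2)],
    "adaptive_cross": [(0, 2), (2, 0), (0, -2), (-2, 0),   # outer cross
                       (0, 1), (1, 0), (0, -1), (-1, 0)],  # inner cross
    "star": [(0, 4), (1, 1), (4, 0), (1, -1),
             (0, -4), (-1, -1), (-4, 0), (-1, 1)],
    "hexagon": [(0, 2), (2, 2), (4, 0), (2, -2),
                (0, -2), (-2, -2), (-4, 0), (-2, 2)],
}
```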
- the motion vector of the current block may be updated by modifying the search patterns shown in FIGS. 15 to 18.
- the motion vector of the current block may be updated using a search pattern that partially changes the number, position, or search order of search points.
- the decoder can perform decoder side merge refinement using the same search pattern as the encoder.
- information for specifying the search pattern used in the encoder may be signaled through the bit stream.
- the information may be index information specifying any one of a plurality of search patterns.
- The search pattern may be determined based on at least one of the size, the shape, the inter prediction mode (e.g., merge mode, AMVP mode, or skip mode), or the refinement mode of the current block.
- For example, a diamond search may be selected at the coding block level, and an adaptive cross search may be selected at the sub-coding block level.
- it may be set to select a star search or a hexagon search at the coding block level.
- a refine motion vector for the search point can be determined (S1430).
- FIG. 19 is a diagram for explaining the derivation of a refine motion vector for a search point.
- the refine L0 motion vector of the search point at the (0, 1) position may be set to (0, 1) and the refine L1 motion vector may be set to (-1, 0). That is, it is possible to derive the refine motion vector of the search point by adding the coordinate value of the search point to the initial motion vector of the current block.
- the refine L0 motion vector and the refine L1 motion vector of the search point can be determined in proportion to TD0 and TD1, respectively.
- Specifically, the refine L0 motion vector of the search point may be derived by adding or subtracting the coordinate values (x, y) of the search point to/from the L0 motion vector (MVx0, MVy0) of the current block (that is, (MVx0 + x, MVy0 + y) or (MVx0 - x, MVy0 - y)), and the refine L1 motion vector of the search point may be derived by applying values (Nx, Ny), obtained by scaling the coordinate values of the search point by N, to the L1 motion vector (MVx1, MVy1) of the current block (that is, (MVx1 + Nx, MVy1 + Ny) or (MVx1 - Nx, MVy1 - Ny)).
- the refine motion vector for a particular position sample can be determined in consideration of the temporal difference between the current picture and the reference picture.
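- A sketch of this derivation, assuming the common convention in which the L1 offset mirrors the L0 offset scaled by the ratio of the temporal distances TD0 and TD1 (the sign and scaling are assumptions, not taken verbatim from the text):

```python
def refine_mvs(mv_l0, mv_l1, offset, td0, td1):
    """Derive the refine L0/L1 motion vectors for one search point."""
    x, y = offset
    n = (-td1 / td0) if td0 else -1.0   # mirror and scale by POC distances
    ref_l0 = (mv_l0[0] + x, mv_l0[1] + y)
    ref_l1 = (mv_l1[0] + round(n * x), mv_l1[1] + round(n * y))
    return ref_l0, ref_l1
```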
- the cost at the search point can be measured based on the derived refine motion vector (S1440).
- FIG. 20 is a diagram for explaining an example of measuring a cost at a search point when the refinement mode is bidirectional matching.
- An initial prediction block of the current block can be obtained through a weighted sum operation of the L0 reference block and the L1 reference block.
- Likewise, a refined prediction block can be obtained through a weighted sum operation of the refine L0 reference block and the refine L1 reference block.
- The cost at the search point can then be calculated as the RD cost between the initial prediction block and the refined prediction block.
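- A minimal sketch of this cost, using equal weights for the weighted sum and a SAD distortion in place of the full RD cost (both simplifying assumptions):

```python
import numpy as np

def bilateral_cost(ref_l0, ref_l1, refined_l0, refined_l1):
    """Compare the initial prediction block with the refined prediction
    block, each formed as the average of its L0/L1 reference blocks."""
    initial = (ref_l0.astype(np.int32) + ref_l1 + 1) >> 1
    refined = (refined_l0.astype(np.int32) + refined_l1 + 1) >> 1
    return int(np.abs(initial - refined).sum())
```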
- FIG. 21 is a diagram for explaining an example of measuring a cost at a search point when the refinement mode is template matching.
- Specifically, the RD cost between the template adjacent to the current block (i.e., the neighboring template) and the template neighboring the refine prediction block derived based on the refine motion vector can be measured.
- the neighboring template may include at least one of an upper region or a left region neighboring a current block (Cur Block), and a refine neighbor template may include at least one of an upper region or a left region neighboring the refine prediction block.
- the upper region or the left region may be a region having a predetermined size or a region including a predetermined number of lines (rows or columns).
- the size or shape of the template may be variably determined based on at least one of the size, shape, motion vector of the current block, or difference in the output order of the current picture and the reference picture.
- the RD cost between the neighbor template and the refinement neighbor template can be measured to calculate the cost at the search point.
- the template matching may be performed by searching an area closest to the neighboring template of the current block in the search area of the reference picture.
- In this case, the displacement between the searched area and the current block's neighboring template can be set as the refine motion vector. For example, if the neighboring template is an area adjacent to the top of the current block or an area adjacent to the left of the current block, the displacement between the searched area and that template may be set as the refine motion vector.
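- A sketch of the template cost, again with SAD standing in for the RD cost; the top/left template regions are passed as arrays, and either may be None when that side is not used:

```python
import numpy as np

def template_cost(cur_top, cur_left, ref_top, ref_left):
    """SAD between the current block's neighboring template and the
    corresponding template around the candidate reference block."""
    cost = 0
    if cur_top is not None:
        cost += int(np.abs(cur_top.astype(np.int32) - ref_top).sum())
    if cur_left is not None:
        cost += int(np.abs(cur_left.astype(np.int32) - ref_left).sum())
    return cost
```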
- the refinement mode of the current block may be determined based on at least one of the size, shape, inter prediction mode, bidirectional prediction, motion vector size, or availability of neighboring blocks of the current block. For example, if the initial motion vector of the current block is unidirectional, the template matching can be selected as the refinement mode of the current block. On the other hand, if the initial motion vector of the current block is bi-directional, bidirectional matching can be selected as the refinement mode of the current block.
- information specifying the refinement mode may be signaled through the bitstream.
- the encoder and decoder may repeatedly perform steps S1430 and S1440 for all search points specified by the search pattern (S1450, S1460). If a search point is unavailable, the unavailable search point may be replaced with an available search point, or steps S1430 and S1440 may simply not be performed for the unavailable search point. For example, when a search point lies outside the picture, it can be determined that the search point is unavailable.
- the refine motion vector of the search point having the lowest cost among the plurality of search points may be determined as the motion vector of the current block (S1470). That is, the initial motion vector of the current block can be updated to the refine motion vector of the search point having the lowest cost.
- motion compensation can be performed based on the updated motion vector.
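- Steps S1430 to S1470 reduce to the following loop, where `cost_fn` is whichever matching cost applies (bilateral or template) and unavailable search points are assumed to have been removed from the pattern beforehand:

```python
def refine_motion_vector(initial_mv, pattern, cost_fn):
    """Evaluate every search point of the selected pattern and keep the
    refine motion vector with the lowest cost."""
    best_mv, best_cost = initial_mv, cost_fn(initial_mv)
    for dx, dy in pattern:
        cand = (initial_mv[0] + dx, initial_mv[1] + dy)
        cost = cost_fn(cand)
        if cost < best_cost:
            best_mv, best_cost = cand, cost
    return best_mv
```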
- Through FIG. 14, a method of updating the motion vector of the current block using search points has been described.
- DMCR can also be performed in the same manner as described with reference to FIG. 14. Specifically, after determining the refinement mode and the search pattern for each merge candidate, a refine motion vector for each search point is derived based on the motion vector of each merge candidate, and the cost of each search point can be calculated based on the derived refine motion vector. Thereafter, the refine motion vector of the search point having the lowest cost among the plurality of search points can be determined as the motion vector of the merge candidate.
- the refinement mode and / or search pattern may be set the same for all merge candidates, and one merge candidate may have a refinement mode and / or search pattern that is different from the other merge candidates.
- The update may be applied not only to merge candidates but also to motion vector candidates under the AMVP mode. That is, it is possible to update the motion vector of the spatial neighboring block and/or the temporal neighboring block of the current block, and to derive the motion vector candidate based on the updated motion vector.
- the information about the update may be encoded and signaled through the bitstream.
- the update information may include at least one of information indicating the search point having the lowest cost (e.g., index or position of the search point) or information indicating the difference value of the motion vector before and after the update.
- In FIG. 14, the motion vector of the current block is shown to be updated in a predetermined order.
- However, the present invention is not limited to the order shown in FIG. 14.
- embodiments in which some of the steps shown in FIG. 14 are omitted or some of the steps are changed may also be included in the scope of the present invention.
- embodiments in which either the refinement mode selection step (S1410) and the search pattern selection step (S1420) are omitted or their order is changed may also be included in the scope of the present invention.
- each of the components (for example, units, modules, etc.) constituting the block diagram may be implemented by a hardware device or software, and a plurality of components may be combined into one hardware device or software.
- the above-described embodiments may be implemented in the form of program instructions that may be executed through various computer components and recorded in a computer-readable recording medium.
- the computer-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination.
- Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- the hardware device may be configured to operate as one or more software modules for performing the processing according to the present invention, and vice versa.
- the present invention can be applied to an electronic device capable of encoding / decoding an image.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Picture Signal Circuits (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
Description
Claims (15)
- An image decoding method comprising: obtaining an initial motion vector of a current block; deriving, based on the initial motion vector, a refine motion vector for each of a plurality of search points; and obtaining a motion vector of the current block based on a refine motion vector of any one of the plurality of search points.
- The image decoding method of claim 1, wherein the initial motion vector is obtained based on a merge candidate or a motion vector candidate of the current block.
- The image decoding method of claim 1, further comprising selecting a refinement mode of the current block, wherein the refinement mode includes at least one of bidirectional matching or template matching.
- The image decoding method of claim 3, wherein the refinement mode is determined based on whether a prediction direction of the current block is bidirectional.
- The image decoding method of claim 3, wherein, when the refinement mode is bidirectional matching, a cost of a search point is calculated by comparing a first prediction block obtained based on the initial motion vector with a second prediction block obtained based on the refine motion vector.
- The image decoding method of claim 3, wherein, when the refinement mode is template matching, a cost of a search point is calculated by comparing a template neighboring the current block with a template neighboring a reference block specified by the refine motion vector.
- The image decoding method of claim 1, further comprising determining a search pattern of the current block, wherein at least one of the number of search points, positions of the search points, or a search order among the plurality of search points is determined by the determined search pattern.
- The image decoding method of claim 7, wherein the search pattern of the current block is determined based on at least one of a size, a shape, an inter prediction mode, or a refinement mode of the current block.
- An image encoding method comprising: obtaining an initial motion vector of a current block; deriving, based on the initial motion vector, a refine motion vector for each of a plurality of search points; and obtaining a motion vector of the current block based on a refine motion vector of any one of the plurality of search points.
- The image encoding method of claim 9, wherein the initial motion vector is obtained based on a merge candidate or a motion vector candidate of the current block.
- The image encoding method of claim 9, further comprising selecting a refinement mode of the current block, wherein the refinement mode includes at least one of bidirectional matching or template matching.
- The image encoding method of claim 11, wherein the refinement mode is determined based on whether a prediction direction of the current block is bidirectional.
- The image encoding method of claim 9, further comprising determining a search pattern of the current block, wherein at least one of the number of search points, positions of the search points, or a search order among the plurality of search points is determined by the determined search pattern.
- An image decoding apparatus comprising an inter prediction unit that obtains an initial motion vector of a current block, derives, based on the initial motion vector, a refine motion vector for each of a plurality of search points, and obtains a motion vector of the current block based on a refine motion vector of any one of the plurality of search points.
- An image encoding apparatus comprising an inter prediction unit that obtains an initial motion vector of a current block, derives, based on the initial motion vector, a refine motion vector for each of a plurality of search points, and obtains a motion vector of the current block based on a refine motion vector of any one of the plurality of search points.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201880035954.2A CN110692248B (zh) | 2017-08-29 | 2018-08-27 | 视频信号处理方法及装置 |
CN202311589949.5A CN117544786A (zh) | 2017-08-29 | 2018-08-27 | 视频解码和编码方法及用于存储压缩视频数据的装置 |
CN202311592378.0A CN117615154A (zh) | 2017-08-29 | 2018-08-27 | 视频解码和编码方法及用于存储压缩视频数据的装置 |
US16/619,231 US11457235B2 (en) | 2017-08-29 | 2018-08-27 | Method for refining a motion vector derived under a merge mode using a difference vector |
CN202311588351.4A CN117615153A (zh) | 2017-08-29 | 2018-08-27 | 视频解码和编码方法及用于存储压缩视频数据的装置 |
CN202311588484.1A CN117544785A (zh) | 2017-08-29 | 2018-08-27 | 视频解码和编码方法及用于存储压缩视频数据的装置 |
US17/893,421 US20220417554A1 (en) | 2017-08-29 | 2022-08-23 | Method for refining a motion vector derived under a merge mode using a difference vector |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2017-0109639 | 2017-08-29 | ||
KR20170109639 | 2017-08-29 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/619,231 A-371-Of-International US11457235B2 (en) | 2017-08-29 | 2018-08-27 | Method for refining a motion vector derived under a merge mode using a difference vector |
US17/893,421 Continuation US20220417554A1 (en) | 2017-08-29 | 2022-08-23 | Method for refining a motion vector derived under a merge mode using a difference vector |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019045392A1 (ko) | 2019-03-07 |
Family
ID=65525914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/009869 WO2019045392A1 (ko) | Video signal processing method and apparatus |
Country Status (4)
Country | Link |
---|---|
US (2) | US11457235B2 (ko) |
KR (1) | KR102620410B1 (ko) |
CN (5) | CN117615154A (ko) |
WO (1) | WO2019045392A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112866707A (zh) * | 2019-03-11 | 2021-05-28 | 杭州海康威视数字技术股份有限公司 | Encoding and decoding method and apparatus, and device thereof |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019072370A1 (en) * | 2017-10-09 | 2019-04-18 | Huawei Technologies Co., Ltd. | MEMORY ACCESS WINDOW AND FILLING FOR VECTOR MOVEMENT REFINEMENT |
- WO2019151284A1 (ja) * | 2018-01-30 | 2019-08-08 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Encoding device, decoding device, encoding method, and decoding method |
- JP7459069B2 (ja) | 2018-09-21 | 2024-04-01 | オッポ広東移動通信有限公司 | Video signal encoding/decoding method and apparatus therefor |
- CN113508593A (zh) * | 2019-02-27 | 2021-10-15 | 北京字节跳动网络技术有限公司 | Sub-block motion vector derivation based on a fallback-based motion vector field |
- CN113545086A (zh) * | 2019-03-08 | 2021-10-22 | 北京达佳互联信息技术有限公司 | Bidirectional optical flow and decoder-side motion vector refinement for video coding |
- CN111586415B (zh) * | 2020-05-29 | 2022-01-04 | 浙江大华技术股份有限公司 | Video encoding method and apparatus, encoder, and storage device |
- CN113870302A (zh) * | 2020-06-30 | 2021-12-31 | 晶晨半导体(上海)股份有限公司 | Motion estimation method, chip, electronic device, and storage medium |
- CN112040242A (zh) * | 2020-07-30 | 2020-12-04 | 浙江大华技术股份有限公司 | Inter prediction method, apparatus, and device based on advanced motion vector expression |
US11671616B2 (en) | 2021-03-12 | 2023-06-06 | Lemon Inc. | Motion candidate derivation |
US20220295090A1 (en) * | 2021-03-12 | 2022-09-15 | Lemon Inc. | Motion candidate derivation |
US11936899B2 (en) | 2021-03-12 | 2024-03-19 | Lemon Inc. | Methods and systems for motion candidate derivation |
WO2023061305A1 (en) * | 2021-10-11 | 2023-04-20 | Beijing Bytedance Network Technology Co., Ltd. | Method, apparatus, and medium for video processing |
- WO2023132509A1 (ko) * | 2022-01-04 | 2023-07-13 | 현대자동차주식회사 | Method for decoder-side motion vector derivation using spatial correlation |
- WO2024014896A1 (ko) * | 2022-07-13 | 2024-01-18 | 엘지전자 주식회사 | Image encoding/decoding method based on motion information refinement, method for transmitting a bitstream, and recording medium storing a bitstream |
- WO2024072162A1 (ko) * | 2022-09-28 | 2024-04-04 | 엘지전자 주식회사 | Image encoding/decoding method and apparatus, and recording medium storing a bitstream |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060285594A1 (en) * | 2005-06-21 | 2006-12-21 | Changick Kim | Motion estimation and inter-mode prediction |
- CN105959699A (zh) * | 2016-05-06 | 2016-09-21 | 西安电子科技大学 | Fast inter-frame prediction method based on motion estimation and spatio-temporal correlation |
- WO2017043730A1 (ko) * | 2015-09-08 | 2017-03-16 | 엘지전자(주) | Method for encoding/decoding an image and apparatus therefor |
US20170208341A1 (en) * | 2014-08-12 | 2017-07-20 | Intel Corporation | System and method of motion estimation for video coding |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2002228884A1 (en) * | 2000-11-03 | 2002-05-15 | Compression Science | Video data compression system |
- JP2006101239A (ja) * | 2004-09-29 | 2006-04-13 | Toshiba Corp | Motion vector detection device, motion vector detection method, and program causing a computer to execute the method |
- KR100686393B1 (ko) * | 2005-04-07 | 2007-03-02 | 주식회사 텔레칩스 | Motion estimation apparatus suitable for hardware implementation and method thereof |
- JP2009272724A (ja) * | 2008-04-30 | 2009-11-19 | Panasonic Corp | Video encoding/decoding device |
- KR20130002242A (ko) * | 2011-06-28 | 2013-01-07 | 주식회사 케이티 | Method for encoding and decoding image information |
GB2556489B (en) * | 2011-11-08 | 2018-11-21 | Kt Corp | A method of decoding a video signal using a merge mode |
- ES2729781T3 (es) * | 2012-06-01 | 2019-11-06 | Velos Media Int Ltd | Arithmetic decoding device, image decoding apparatus, arithmetic encoding device, and image encoding apparatus |
- KR102070719B1 (ko) * | 2013-01-23 | 2020-01-30 | 한국전자통신연구원 | Inter prediction method and apparatus therefor |
US11330284B2 (en) * | 2015-03-27 | 2022-05-10 | Qualcomm Incorporated | Deriving motion information for sub-blocks in video coding |
- MX2018002477A (es) * | 2015-09-02 | 2018-06-15 | Mediatek Inc | Method and apparatus of decoder-side motion derivation for video coding |
WO2017157281A1 (en) * | 2016-03-16 | 2017-09-21 | Mediatek Inc. | Method and apparatus of pattern-based motion vector derivation for video coding |
EP3264769A1 (en) * | 2016-06-30 | 2018-01-03 | Thomson Licensing | Method and apparatus for video coding with automatic motion information refinement |
US11638027B2 (en) * | 2016-08-08 | 2023-04-25 | Hfi Innovation, Inc. | Pattern-based motion vector derivation for video coding |
US11381829B2 (en) * | 2016-08-19 | 2022-07-05 | Lg Electronics Inc. | Image processing method and apparatus therefor |
- KR102414924B1 (ko) * | 2016-12-05 | 2022-06-30 | 엘지전자 주식회사 | Image decoding method and apparatus in an image coding system |
US10595035B2 (en) * | 2017-03-22 | 2020-03-17 | Qualcomm Incorporated | Constraining motion vector information derived by decoder-side motion vector derivation |
WO2019001741A1 (en) * | 2017-06-30 | 2019-01-03 | Huawei Technologies Co., Ltd. | MOTION VECTOR REFINEMENT FOR MULTI-REFERENCE PREDICTION |
IL271770B2 (en) * | 2017-06-30 | 2024-03-01 | Huawei Tech Co Ltd | Search area for motion vector refinement |
-
2018
- 2018-08-27 CN CN202311592378.0A patent/CN117615154A/zh active Pending
- 2018-08-27 WO PCT/KR2018/009869 patent/WO2019045392A1/ko active Application Filing
- 2018-08-27 CN CN202311588484.1A patent/CN117544785A/zh active Pending
- 2018-08-27 CN CN202311588351.4A patent/CN117615153A/zh active Pending
- 2018-08-27 KR KR1020180100532A patent/KR102620410B1/ko active IP Right Grant
- 2018-08-27 CN CN202311589949.5A patent/CN117544786A/zh active Pending
- 2018-08-27 US US16/619,231 patent/US11457235B2/en active Active
- 2018-08-27 CN CN201880035954.2A patent/CN110692248B/zh active Active
-
2022
- 2022-08-23 US US17/893,421 patent/US20220417554A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060285594A1 (en) * | 2005-06-21 | 2006-12-21 | Changick Kim | Motion estimation and inter-mode prediction |
US20170208341A1 (en) * | 2014-08-12 | 2017-07-20 | Intel Corporation | System and method of motion estimation for video coding |
- WO2017043730A1 (ko) * | 2015-09-08 | 2017-03-16 | 엘지전자(주) | Method for encoding/decoding an image and apparatus therefor |
- CN105959699A (zh) * | 2016-05-06 | 2016-09-21 | 西安电子科技大学 | Fast inter-frame prediction method based on motion estimation and spatio-temporal correlation |
Non-Patent Citations (1)
Title |
---|
ROSEWARNE, CHRIS ET AL.: "High Efficiency Video Coding (HEVC) Test Model 16 (HM 16) Improved Encoder Description Update 9", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP3, no. JCTVC-AB1002, 21 July 2017 (2017-07-21), Torino, IT, XP030118276 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN112866707A (zh) * | 2019-03-11 | 2021-05-28 | 杭州海康威视数字技术股份有限公司 | Encoding and decoding method and apparatus, and device thereof |
- CN112866707B (zh) * | 2019-03-11 | 2022-01-25 | 杭州海康威视数字技术股份有限公司 | Encoding and decoding method and apparatus, and device thereof |
US11902563B2 (en) | 2019-03-11 | 2024-02-13 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method and device, encoder side apparatus and decoder side apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN110692248B (zh) | 2024-01-02 |
CN117544785A (zh) | 2024-02-09 |
CN117544786A (zh) | 2024-02-09 |
US20220417554A1 (en) | 2022-12-29 |
US11457235B2 (en) | 2022-09-27 |
US20200154135A1 (en) | 2020-05-14 |
KR102620410B1 (ko) | 2024-01-03 |
KR20190024765A (ko) | 2019-03-08 |
CN117615153A (zh) | 2024-02-27 |
CN110692248A (zh) | 2020-01-14 |
CN117615154A (zh) | 2024-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019045392A1 (ko) | Video signal processing method and apparatus | |
WO2018066959A1 (ko) | Video signal processing method and apparatus | |
WO2018212578A1 (ko) | Video signal processing method and apparatus | |
WO2018088805A1 (ko) | Video signal processing method and apparatus | |
WO2018008906A1 (ko) | Video signal processing method and apparatus | |
WO2018117546A1 (ko) | Video signal processing method and apparatus | |
WO2018212577A1 (ko) | Video signal processing method and apparatus | |
WO2018008904A2 (ko) | Video signal processing method and apparatus | |
WO2018066927A1 (ko) | Inter prediction mode-based image processing method and apparatus therefor | |
WO2018056703A1 (ko) | Video signal processing method and apparatus | |
WO2017222325A1 (ko) | Video signal processing method and apparatus | |
WO2018106047A1 (ko) | Video signal processing method and apparatus | |
WO2016200100A1 (ko) | Method and apparatus for encoding or decoding an image using syntax signaling for adaptive weight prediction | |
WO2019078664A1 (ko) | Video signal processing method and apparatus | |
WO2018044087A1 (ko) | Video signal processing method and apparatus | |
WO2018008905A1 (ko) | Video signal processing method and apparatus | |
WO2018026222A1 (ko) | Video signal processing method and apparatus | |
WO2018044088A1 (ko) | Video signal processing method and apparatus | |
WO2017039256A1 (ko) | Video signal processing method and apparatus | |
WO2018117706A1 (ko) | Video signal processing method and apparatus | |
WO2020096425A1 (ko) | Image signal encoding/decoding method and apparatus therefor | |
WO2012173415A2 (ko) | Method and apparatus for encoding motion information, and method and apparatus for decoding same | |
WO2013002557A2 (ko) | Method and apparatus for encoding motion information, and method and apparatus for decoding same | |
WO2019225993A1 (ko) | Video signal processing method and apparatus | |
WO2016085231A1 (ko) | Video signal processing method and apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18852413 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18852413 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/01/2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18852413 Country of ref document: EP Kind code of ref document: A1 |