WO2012157826A1 - Method for removing vectors having similar ranges from a candidate prediction mode list, and device using such a method


Info

Publication number: WO2012157826A1
Authority: WIPO (PCT)
Prior art keywords: motion vector, candidate prediction, prediction, unit, block
Application number: PCT/KR2011/008999
Other languages: English (en), Korean (ko)
Inventors: 전용준, 박승욱, 임재현, 김정선, 박준영, 최영희, 전병문
Original Assignee: 엘지전자 주식회사
Application filed by 엘지전자 주식회사
Publication of WO2012157826A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H04N19/517: Processing of motion vectors by encoding
    • H04N19/52: Processing of motion vectors by encoding by predictive encoding

Definitions

  • The present invention relates to a method for removing similar-range vectors from a candidate prediction mode list and an apparatus using the method, and more particularly, to a decoding method and apparatus.
  • High-efficiency image compression techniques can be used to solve the problems caused by high-resolution, high-quality image data.
  • Image compression techniques include an inter prediction technique that predicts pixel values included in the current picture from a picture before or after the current picture, and an intra prediction technique that predicts pixel values included in the current picture using pixel information within the current picture.
  • a second object of the present invention is to provide an apparatus for performing a method of setting a candidate predicted motion vector list in order to increase image encoding efficiency.
  • The determining of whether a similar-range motion vector exists among the calculated candidate prediction motion vectors may be a step of determining whether a candidate prediction motion vector is a similar-range motion vector based on the difference values of the x-direction vector components and the y-direction vector components of the candidate prediction motion vectors.
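The component-difference test described above can be sketched as follows; the threshold value, the quarter-pel units, and the function name are illustrative assumptions, not values fixed by the patent.

```python
# Sketch of the similar-range test described above: two candidate
# prediction motion vectors are treated as "similar range" when both the
# x-component and y-component differences fall within a threshold.
# The threshold value and names are illustrative assumptions.

def is_similar_range(mv_a, mv_b, threshold=1):
    """mv_a, mv_b: (x, y) integer motion vectors, e.g. in quarter-pel units."""
    dx = abs(mv_a[0] - mv_b[0])
    dy = abs(mv_a[1] - mv_b[1])
    return dx <= threshold and dy <= threshold

print(is_similar_range((4, 4), (4, 5)))   # both components differ by at most 1
print(is_similar_range((4, 4), (8, 4)))   # x difference exceeds the threshold
```

Vectors flagged as similar-range by such a test would be removed from the candidate list, as the claims describe.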
  • The image decoding method may further include determining whether a first motion vector or a second motion vector exists in a first spatial candidate prediction group through a sequential determination procedure, and setting the first motion vector or second motion vector found by the procedure as a candidate prediction motion vector.
  • The image decoding method may further include determining whether a third motion vector or a fourth motion vector exists in the first spatial candidate prediction group through a sequential determination procedure, scaling the third motion vector or fourth motion vector found by the procedure to set it as a candidate prediction motion vector, and changing the scaling information.
  • The image decoding method may further include determining whether a first motion vector or a second motion vector exists in a second spatial candidate prediction group through a sequential determination procedure, and setting the first motion vector or second motion vector found by the procedure as a candidate prediction motion vector.
  • The image decoding method may further include determining, based on the scaling information, whether scaling has been performed on the candidate prediction motion vector calculated from the first spatial candidate prediction group.
  • The image decoding method may further include determining whether a third motion vector or a fourth motion vector exists in the second spatial candidate prediction group through a sequential determination procedure, scaling the third motion vector or fourth motion vector found by the procedure to set it as a candidate prediction motion vector, and changing the scaling information.
  • The image decoding method may further include, when a motion vector of the temporal candidate prediction unit exists, including that motion vector in the candidate prediction motion vector list as a candidate prediction motion vector.
  • The method may further include adding an additional candidate prediction motion vector to the candidate prediction motion vector list when the number of candidate prediction motion vectors included in the list is at or below a predetermined number and a vector identical to the additional candidate prediction motion vector is not already present in the list.
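The list-completion step above can be sketched as follows; the predetermined number, the choice of a zero vector as the additional candidate, and the function name are illustrative assumptions.

```python
# Sketch of the list-completion step described above: when the candidate
# list holds fewer than a predetermined number of vectors, an additional
# candidate (here a zero vector, an illustrative choice) is appended
# only if an identical vector is not already present in the list.

MAX_CANDIDATES = 2  # illustrative predetermined number

def fill_candidate_list(candidates, extra=(0, 0)):
    if len(candidates) < MAX_CANDIDATES and extra not in candidates:
        candidates.append(extra)
    return candidates

print(fill_candidate_list([(3, -1)]))   # -> [(3, -1), (0, 0)]
print(fill_candidate_list([(0, 0)]))    # zero vector already present, unchanged
```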
  • By removing candidate prediction motion vectors having similar ranges and including new candidate prediction motion vectors in the candidate prediction motion vector list, encoding/decoding efficiency can be increased.
  • FIG. 1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating an image decoder according to another embodiment of the present invention.
  • FIG. 3 is a conceptual diagram illustrating a spatial candidate prediction unit and a temporal candidate prediction unit for generating a predictive motion vector according to another embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating a method of deriving a predictive motion vector according to another embodiment of the present invention.
  • FIG. 5 is a conceptual view illustrating a method of classifying a motion vector of a spatial candidate prediction unit by a relationship between a motion vector of a current prediction unit and a motion vector of a spatial candidate prediction unit according to another embodiment of the present invention.
  • FIG. 6 is a flowchart illustrating a method of calculating candidate prediction group availability information and availability information of a temporal candidate prediction unit according to another embodiment of the present invention.
  • FIG. 7 is a flowchart illustrating a method of calculating a candidate prediction motion vector in a first spatial candidate prediction group according to another embodiment of the present invention.
  • FIG. 8 is a flowchart illustrating a method of calculating candidate prediction motion vectors (first motion vector and second motion vector) in a second spatial candidate prediction group according to another embodiment of the present invention.
  • FIG. 9 is a flowchart illustrating a method of calculating candidate prediction motion vectors (third motion vector, fourth motion vector) in a second spatial candidate prediction group according to another embodiment of the present invention.
  • FIG. 10 is a flowchart illustrating a method of calculating candidate prediction motion vectors (third motion vector, fourth motion vector) in a second spatial candidate prediction group according to another embodiment of the present invention.
  • FIG. 11 is a flowchart illustrating a method of calculating a candidate prediction motion vector in a temporal candidate prediction group according to another embodiment of the present invention.
  • FIG. 12 is a flowchart illustrating a method of determining a similar-range motion vector according to another embodiment of the present invention.
  • Terms such as first and second may be used to describe various components, but the components should not be limited by these terms. The terms are used only to distinguish one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • FIG. 1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present invention.
  • The image encoding apparatus 100 may include a picture splitter 105, a predictor 110, a transformer 115, a quantizer 120, a realigner 125, an entropy encoder 130, an inverse quantization unit 135, an inverse transform unit 140, a filter unit 145, and a memory 150.
  • Each of the components shown in FIG. 1 is shown independently to represent a different characteristic function in the image encoding apparatus; this does not mean that each component consists of separate hardware or a single software unit.
  • The components are listed separately for convenience of description; at least two of the components may be combined into one component, or one component may be divided into a plurality of components, each performing part of the function.
  • Embodiments in which the components are integrated and embodiments in which they are separated are also included in the scope of the present invention, as long as they do not depart from the spirit of the invention.
  • Some components may not be essential for performing the essential functions of the present invention, but may be optional components for improving performance.
  • The present invention can be implemented with only the components essential for realizing its essence, excluding the components used merely to improve performance, and a structure including only those essential components is also included in the scope of the present invention.
  • the picture dividing unit 105 may divide the input picture into at least one processing unit.
  • the processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU).
  • The picture division unit 105 may divide one picture into combinations of a plurality of coding units, prediction units, and transform units, and may select one combination of coding units, prediction units, and transform units based on a predetermined criterion (for example, a cost function) to encode the picture.
  • one picture may be divided into a plurality of coding units.
  • a recursive tree structure such as a quad tree structure may be used.
  • With one image or a largest coding unit as the root, a coding unit may be split into other coding units with as many child nodes as there are split coding units. A coding unit that is no longer split according to certain restrictions becomes a leaf node. That is, assuming that only square splitting is possible for one coding unit, one coding unit may be split into at most four other coding units.
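The recursive quad-tree splitting described above can be sketched as follows; the sizes and the split-decision callback are illustrative assumptions standing in for the encoder's cost-based decision.

```python
# Sketch of the recursive quad-tree splitting described above: starting
# from a largest coding unit, each coding unit either becomes a leaf or
# splits into exactly four square children. Sizes are illustrative.

def split_cu(size, min_size, decide_split):
    """Return the leaf coding-unit sizes produced by recursive splitting."""
    if size <= min_size or not decide_split(size):
        return [size]           # leaf node: no further splitting
    leaves = []
    for _ in range(4):          # square split -> exactly four children
        leaves.extend(split_cu(size // 2, min_size, decide_split))
    return leaves

# Split every unit larger than 32 (an illustrative encoder decision).
print(split_cu(64, 8, lambda s: s > 32))   # -> [32, 32, 32, 32]
```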
  • a coding unit may be used not only as a coding unit but also as a decoding unit.
  • A prediction unit may be obtained by splitting one coding unit into at least one square or rectangle of the same size, or may be split such that the shape of one prediction unit within a coding unit differs from the shape of another prediction unit.
  • the intra prediction may be performed without splitting the prediction unit into a plurality of prediction units NxN.
  • the prediction unit 110 may include an inter prediction unit for performing inter prediction and an intra prediction unit for performing intra prediction. Whether to use inter prediction or intra prediction may be determined for the prediction unit, and specific information (eg, intra prediction mode, motion vector, reference picture, etc.) according to each prediction method may be determined. In this case, the processing unit in which the prediction is performed may differ from the processing unit in which the prediction method and the details are determined. For example, the method of prediction and the prediction mode may be determined in the prediction unit, and the prediction may be performed in the transform unit. The residual value (residual block) between the generated prediction block and the original block may be input to the transformer 115.
  • prediction mode information and motion vector information used for prediction may be encoded by the entropy encoder 130 along with the residual value and transmitted to the decoder.
  • the original block may be encoded as it is and transmitted to the decoder without generating the prediction block through the prediction unit 110.
  • the inter prediction unit may predict the prediction unit based on the information of at least one of the previous picture or the subsequent picture of the current picture.
  • the inter prediction unit may include a reference picture interpolator, a motion predictor, and a motion compensator.
  • the reference picture interpolator may receive reference picture information from the memory 150 and generate pixel information of an integer pixel or less in the reference picture.
  • A DCT-based 8-tap interpolation filter having different filter coefficients may be used to generate pixel information at sub-integer positions in units of 1/4 pixel.
  • A DCT-based interpolation filter having different filter coefficients may be used to generate pixel information at sub-integer positions in units of 1/8 pixel.
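An 8-tap sub-pel interpolation of the kind described above can be sketched as follows. The coefficients are the HEVC half-pel luma taps, used here only as an illustration; the patent text does not fix specific filter values.

```python
# Sketch of sub-pel interpolation as described above: an 8-tap filter is
# applied across integer pixels to synthesize a fractional-position
# sample. The coefficients below are the HEVC half-pel luma taps, used
# purely as an illustrative example.

HALF_PEL_TAPS = (-1, 4, -11, 40, 40, -11, 4, -1)  # taps sum to 64

def interpolate_half_pel(pixels, i):
    """Half-pel sample between pixels[i] and pixels[i+1]; needs 3 left and
    4 right integer neighbours. Result rounded and clipped to 8 bits."""
    acc = sum(c * pixels[i - 3 + k] for k, c in enumerate(HALF_PEL_TAPS))
    return min(255, max(0, (acc + 32) >> 6))      # round, divide by 64, clip

print(interpolate_half_pel([100] * 8, 3))                      # flat signal -> 100
print(interpolate_half_pel([0, 10, 20, 30, 40, 50, 60, 70], 3))  # ramp midpoint -> 35
```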
  • the motion predictor may perform motion prediction based on the reference picture interpolated by the reference picture interpolator.
  • various methods such as a full search-based block matching algorithm (FBMA), a three step search (TSS), and a new three-step search algorithm (NTS) may be used.
  • the motion vector may have a motion vector value in units of 1/2 or 1/4 pixels based on the interpolated pixels.
  • the motion prediction unit may predict the current prediction unit by using a different motion prediction method.
  • various methods such as a skip method, a merge method, and an advanced motion vector prediction (AMVP) method, may be used.
  • The intra prediction unit may generate a prediction unit based on reference pixel information around the current block, which is pixel information in the current picture. If a neighboring block of the current prediction unit is a block on which inter prediction was performed, so that its reference pixels are pixels obtained by inter prediction, the reference pixels included in that inter-predicted block may be replaced with reference pixel information of a block on which intra prediction was performed. That is, when a reference pixel is not available, the unavailable reference pixel information may be replaced with at least one of the available reference pixels.
  • Intra prediction modes may include directional prediction modes, which use reference pixel information according to a prediction direction, and non-directional modes, which do not use directional information when performing prediction.
  • the mode for predicting the luminance information and the mode for predicting the color difference information may be different, and the intra prediction mode information or the predicted luminance signal information predicting the luminance information may be used to predict the color difference information.
  • Intra prediction is performed based on the pixels to the left of the prediction unit, the pixel at its upper left, and the pixels above it.
  • Intra prediction may also be performed using reference pixels based on the transform unit.
  • intra prediction using NxN division may be used only for a minimum coding unit.
  • the intra prediction method may generate a prediction block after applying an adaptive intra smoothing (AIS) filter to a reference pixel according to a prediction mode.
  • the intra prediction mode of the current prediction unit may be predicted from the intra prediction mode of the prediction unit existing around the current prediction unit.
  • When the prediction mode of the current prediction unit is predicted using mode information from a neighboring prediction unit, if the intra prediction modes of the current prediction unit and the neighboring prediction unit are the same, information indicating that the two prediction modes are the same may be transmitted using predetermined flag information. If the prediction modes of the current prediction unit and the neighboring prediction unit are different, entropy encoding may be performed to encode the prediction mode information of the current block.
  • A residual block containing residual information, which is the difference between the prediction unit generated by the prediction unit 110 and the original block of that prediction unit, may be generated.
  • The generated residual block may be input to the transformer 115.
  • The transformer 115 may transform the residual block, which contains the residual information between the original block and the prediction unit generated by the prediction unit 110, using a transform method such as a discrete cosine transform (DCT) or a discrete sine transform (DST). Whether to apply DCT or DST to transform the residual block may be determined based on the intra prediction mode information of the prediction unit used to generate the residual block.
  • the quantization unit 120 may quantize the values converted by the transformer 115 into the frequency domain.
  • the quantization coefficient may change depending on the block or the importance of the image.
  • the value calculated by the quantization unit 120 may be provided to the inverse quantization unit 135 and the reordering unit 125.
  • the reordering unit 125 may reorder coefficient values with respect to the quantized residual value.
  • the reordering unit 125 may change the two-dimensional block shape coefficients into a one-dimensional vector form through a coefficient scanning method. For example, the reordering unit 125 may scan from a DC coefficient to a coefficient of a high frequency region by using a Zig-Zag Scan method and change it into a one-dimensional vector form.
  • Depending on the size of the transform unit and the intra prediction mode, a vertical scan method that scans the two-dimensional block-shaped coefficients in the column direction, or a horizontal scan method that scans them in the row direction, may be used instead of the zig-zag scan. That is, according to the size of the transform unit and the intra prediction mode, it may be determined which of the zig-zag scan, vertical scan, and horizontal scan methods is used.
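The three scan patterns above can be sketched as follows; the 4x4 block size and the function name are illustrative assumptions.

```python
# Sketch of the coefficient scanning described above: a 2-D block of
# quantized coefficients is read out as a 1-D vector by zig-zag,
# vertical (column-wise), or horizontal (row-wise) order.

def scan_order(n, mode):
    """Return the list of (row, col) positions in scan order for an nxn block."""
    if mode == "horizontal":
        return [(r, c) for r in range(n) for c in range(n)]
    if mode == "vertical":
        return [(r, c) for c in range(n) for r in range(n)]
    # zig-zag: walk the anti-diagonals, alternating direction each time
    order = []
    for d in range(2 * n - 1):
        cells = [(r, d - r) for r in range(n) if 0 <= d - r < n]
        order.extend(cells if d % 2 else cells[::-1])
    return order

block = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
print([block[r][c] for r, c in scan_order(4, "zigzag")][:6])  # -> [1, 2, 5, 9, 6, 3]
```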
  • the entropy encoder 130 may perform entropy encoding based on the values calculated by the reordering unit 125. Entropy encoding may use various encoding methods such as, for example, Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC).
  • The entropy encoder 130 may receive and encode various information from the reordering unit 125 and the prediction unit 110, such as residual coefficient information and block type information of the coding unit, prediction mode information, partition unit information, prediction unit information, transmission unit information, motion vector information, reference frame information, block interpolation information, and filtering information.
  • the entropy encoder 130 may entropy encode a coefficient value of a coding unit input from the reordering unit 125.
  • the entropy encoder 130 may store a table for performing entropy coding, such as a variable length coding table, and perform entropy coding using the stored variable length coding table.
  • In addition, the codeword allocation for the code number of particular information may be changed using a counter or a direct swapping method, so that some of the codewords included in the table are changed. For example, for the top few code numbers that are assigned short codewords in a table mapping code numbers to codewords, a counter may be used to adaptively change the mapping order of the table so that the shortest codeword is assigned to the code number with the highest number of occurrences. When the count recorded in the counter reaches a predetermined threshold, the counts recorded in the counter may be halved and counting performed again.
  • For a code number in the table for which counting is not performed, the direct swapping method may be used: when the information corresponding to a code number occurs, that code number is swapped with the code number immediately above it, so that entropy coding can be performed with a smaller number of bits assigned to that code number.
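The direct-swapping adaptation above can be sketched as follows; the symbol names are illustrative assumptions.

```python
# Sketch of the direct-swapping method described above: when a symbol
# occurs, its code number is exchanged with the one immediately above it,
# so frequently occurring symbols migrate toward the short codewords.

def direct_swap(table, symbol):
    """table: list of symbols ordered by code number (index 0 gets the
    shortest codeword). On each occurrence, move the symbol up one slot."""
    i = table.index(symbol)
    if i > 0:
        table[i - 1], table[i] = table[i], table[i - 1]
    return table

table = ["a", "b", "c", "d"]
direct_swap(table, "c")   # "c" moves up one position
print(table)              # -> ['a', 'c', 'b', 'd']
```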
  • the inverse quantizer 135 and the inverse transformer 140 inverse quantize the quantized values in the quantizer 120 and inversely transform the transformed values in the transformer 115.
  • The residual value generated by the inverse quantizer 135 and the inverse transformer 140 may be combined with the prediction unit predicted by the motion estimator, motion compensator, and intra predictor included in the predictor 110 to create a reconstructed block.
  • the filter unit 145 may include at least one of a deblocking filter, an offset correction unit, and an adaptive loop filter (ALF).
  • The deblocking filter may remove block distortion caused by boundaries between blocks in the reconstructed picture. To determine whether to perform deblocking, whether to apply the deblocking filter to the current block may be decided based on the pixels included in several columns or rows of the block. When the deblocking filter is applied to a block, a strong filter or a weak filter may be applied according to the required deblocking filtering strength. In addition, when both vertical filtering and horizontal filtering are performed, the horizontal filtering and vertical filtering may be processed in parallel.
  • the offset correction unit may correct the offset with respect to the original image on a pixel-by-pixel basis for the deblocking image.
  • To perform offset correction on a specific picture, a method of dividing the pixels included in the image into a predetermined number of regions, determining a region to be offset, and applying the offset to that region may be used, or a method of applying an offset in consideration of the edge information of each pixel may be used.
  • The adaptive loop filter may perform filtering based on a value obtained by comparing the filtered reconstructed image with the original image. After dividing the pixels included in the image into predetermined groups, one filter to be applied to each group may be determined, and filtering may be performed differently for each group. Information on whether to apply the ALF may be transmitted for each coding unit (CU) for the luma signal, and the size and coefficients of the ALF to be applied may vary according to each block.
  • the ALF may have various forms, and the number of coefficients included in the filter may also vary.
  • Such filtering-related information (filter coefficient information, ALF on/off information, and filter type information) may be provided.
  • the memory 150 may store the reconstructed block or picture calculated by the filter unit 145, and the stored reconstructed block or picture may be provided to the predictor 110 when performing inter prediction.
  • FIG. 2 is a block diagram illustrating an image decoder according to another embodiment of the present invention.
  • The image decoder 200 may include an entropy decoder 210, a reordering unit 215, an inverse quantization unit 220, an inverse transform unit 225, a prediction unit 230, a filter unit 235, and a memory 240.
  • the input bitstream may be decoded by a procedure opposite to that of the image encoder.
  • the entropy decoder 210 may perform entropy decoding in a procedure opposite to that of the entropy encoding performed by the entropy encoder of the image encoder.
  • The VLC table used to perform entropy encoding in the image encoder may be implemented as the same variable length coding table in the entropy decoder to perform entropy decoding.
  • Information for generating the prediction block among the information decoded by the entropy decoder 210 may be provided to the predictor 230, and a residual value obtained by entropy decoding by the entropy decoder may be input to the reordering unit 215.
  • The entropy decoder 210 may change the codeword assignment table using a counter or the direct swapping method, and may perform entropy decoding based on the changed codeword assignment table.
  • The entropy decoder 210 may decode information related to the intra prediction and inter prediction performed by the encoder. As described above, when there are predetermined constraints on performing intra prediction and inter prediction in the image encoder, entropy decoding may be performed based on those constraints so that the information related to intra prediction and inter prediction for the current block can be received.
  • The entropy decoder 210 may decode the intra prediction mode information of the current prediction unit using a predetermined binary code, based on the intra coding mode decoding method described in the embodiments of the present invention with reference to FIGS. 3 to 8.
  • The reordering unit 215 may reorder the bitstream entropy-decoded by the entropy decoder 210 based on the reordering method used by the encoder. Coefficients expressed in the form of a one-dimensional vector may be reconstructed into coefficients in a two-dimensional block form.
  • The reordering unit may perform reordering by receiving information related to the coefficient scanning performed by the encoder and scanning in reverse based on the scanning order performed by the encoder.
  • the inverse quantization unit 220 may perform inverse quantization based on the quantization parameter provided by the encoder and the coefficient values of the rearranged block.
  • the inverse transformer 225 may perform inverse DCT and inverse DST on the DCT and the DST performed by the transformer with respect to the quantization result performed by the image encoder. Inverse transformation may be performed based on a transmission unit determined by the image encoder.
  • The DCT and the DST may be selectively performed by the transform unit of the image encoder according to a plurality of pieces of information, such as the prediction method, the size of the current block, and the prediction direction; the inverse transform unit 225 of the image decoder may perform the inverse transform based on the transform information applied by the transform unit of the image encoder.
  • The transform may be performed based on the coding unit rather than the transform unit.
  • the prediction unit 230 may generate the prediction block based on the prediction block generation related information provided by the entropy decoder 210 and previously decoded block or picture information provided by the memory 240.
  • the prediction unit 230 may include a prediction unit determiner, an inter prediction unit, and an intra prediction unit.
  • The prediction unit determiner may receive various information input from the entropy decoder, such as prediction unit information, prediction mode information of the intra prediction method, and motion prediction related information of the inter prediction method, distinguish the prediction unit in the current coding unit, and determine whether the prediction unit performs inter prediction or intra prediction.
  • The inter prediction unit may perform inter prediction on the current prediction unit based on information included in at least one of the pictures before or after the current picture that contains the current prediction unit, using the information required for inter prediction of the current prediction unit provided by the image encoder.
  • To perform inter prediction, it may be determined, on the basis of the coding unit, whether the motion prediction method of the prediction unit included in that coding unit is the skip mode, the merge mode, or the AMVP mode.
  • the intra prediction unit may generate a prediction block based on pixel information in the current picture.
  • the intra prediction may be performed based on the intra prediction mode information of the prediction unit provided by the image encoder.
  • the intra prediction unit may include an AIS filter, a reference pixel interpolator, and a DC filter.
  • The AIS filter performs filtering on the reference pixels of the current block, and whether to apply the filter may be determined according to the prediction mode of the current prediction unit.
  • AIS filtering may be performed on the reference pixel of the current block by using the prediction mode and the AIS filter information of the prediction unit provided by the image encoder. If the prediction mode of the current block is a mode that does not perform AIS filtering, the AIS filter may not be applied.
  • the reference pixel interpolator may generate a reference pixel having an integer value or less by interpolating the reference pixel. If the prediction mode of the current prediction unit is a prediction mode for generating a prediction block without interpolating the reference pixel, the reference pixel may not be interpolated.
  • the DC filter may generate the prediction block through filtering when the prediction mode of the current block is the DC mode.
  • the reconstructed block or picture may be provided to the filter unit 235.
  • the filter unit 235 may include a deblocking filter, an offset correction unit, and an ALF.
  • Information about whether a deblocking filter has been applied to the corresponding block or picture, and, if it has, whether a strong filter or a weak filter was applied, may be provided by the image encoder.
  • The deblocking filter of the image decoder may receive the deblocking filter related information provided by the image encoder, and the image decoder may perform deblocking filtering on the corresponding block.
  • Vertical deblocking filtering and horizontal deblocking filtering may both be performed, with at least one of vertical deblocking and horizontal deblocking performed in the overlapping portion. At a portion where vertical deblocking filtering and horizontal deblocking filtering overlap, whichever of the two has not been performed previously may be applied. This deblocking filtering process enables parallel processing of the deblocking filtering.
  • the offset correction unit may perform offset correction on the reconstructed image based on the type of offset correction and offset value information applied to the image during encoding.
  • the ALF may perform filtering based on a value obtained by comparing the reconstructed image after filtering with the original image.
  • the ALF may be applied to the coding unit based on the ALF application information, ALF coefficient information, etc. provided from the encoder. Such ALF information may be provided in a specific parameter set.
  • the memory 240 may store the reconstructed picture or block to use as a reference picture or reference block, and may provide the reconstructed picture to the output unit.
  • the coding unit is used as the unit of coding for convenience of description, but it may also be a unit that performs decoding as well as encoding.
  • the image encoding method and the image decoding method which will be described later in an embodiment of the present invention may be performed by each component included in the image encoder and the image decoder described above with reference to FIGS. 1 and 2.
  • each component may refer not only to a hardware component but also to a software processing unit that may be executed through an algorithm.
  • FIG. 3 is a conceptual diagram illustrating a spatial candidate prediction unit and a temporal candidate prediction unit for generating a predictive motion vector according to another embodiment of the present invention.
  • the position of the upper-left pixel of the current prediction unit is denoted (xP, yP), the width of the current prediction unit is denoted by the variable nPSW, and its height by nPSH.
  • MinPuSize, a variable used in specifying the spatial candidate prediction units, indicates the size of the smallest prediction unit that can be used as a prediction unit.
  • Among the spatial neighboring prediction units of the current prediction unit, the block including the pixel at (xP-1, yP+nPSH) is defined and used under the term left first block 300, and the block including the pixel at (xP-1, yP+nPSH-MinPuSize) under the term left second block 310.
  • Likewise, the block including the pixel at (xP+nPSW, yP-1) is defined as the upper first block 320, the block including the pixel at (xP+nPSW-MinPuSize, yP-1) as the upper second block 330, and the block including the pixel at (xP-MinPuSize, yP-1) as the upper third block 340.
  • the spatial candidate prediction unit may include a left first block 300, a left second block 310, an upper first block 320, an upper second block 330, and an upper third block 340.
  • One group including the left first block 300 and the left second block 310 is defined as the first spatial candidate prediction group, and one group including the upper first block 320, the upper second block 330, and the upper third block 340 is defined as the second spatial candidate prediction group.
  • the term spatial candidate prediction unit may be used as a term including a prediction unit included in a first spatial candidate prediction group and a prediction unit included in a second spatial candidate prediction group.
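For illustration only, the block positions defined above can be sketched in code; the function name and return structure below are hypothetical, and only the coordinate arithmetic follows the definitions of FIG. 3:

```python
# Illustrative sketch: pixel positions identifying the five spatial
# candidate blocks of FIG. 3. The function name is hypothetical.
def spatial_candidate_positions(xP, yP, nPSW, nPSH, MinPuSize):
    return {
        "left_first":   (xP - 1, yP + nPSH),               # block 300
        "left_second":  (xP - 1, yP + nPSH - MinPuSize),   # block 310
        "upper_first":  (xP + nPSW, yP - 1),               # block 320
        "upper_second": (xP + nPSW - MinPuSize, yP - 1),   # block 330
        "upper_third":  (xP - MinPuSize, yP - 1),          # block 340
    }
```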
  • the temporal candidate prediction unit 350 may be the prediction unit that includes the pixel at (xP+nPSW, yP+nPSH) in the collocated picture of the current prediction unit, based on the pixel position (xP, yP) in the picture including the current prediction unit, or, when the prediction unit including the pixel at (xP+nPSW, yP+nPSH) is not available, the prediction unit that includes the pixel at (xP+nPSW/2-1, yP+nPSH/2-1).
  • the positions and numbers of the spatial candidate prediction units and of the temporal candidate prediction unit disclosed in FIG. 3 are arbitrary and may change as long as they do not depart from the essence of the present invention.
  • The position and candidate prediction group of the prediction unit scanned first when the candidate prediction motion vector list is constructed may also change. That is, the positions, numbers, scan order, candidate prediction groups, etc. of the prediction units used in constructing the candidate prediction motion vector list described in the embodiments of the present invention are merely examples and may change as long as they do not depart from the essence of the present invention.
  • FIG. 4 is a flowchart illustrating a method of deriving a predictive motion vector according to another embodiment of the present invention.
  • a candidate prediction motion vector is calculated in the first spatial candidate prediction group (step S400).
  • the first spatial candidate prediction group may be a left first block and a left second block.
  • the first spatial candidate prediction group availability information may be used to calculate a prediction motion vector in the first spatial candidate prediction group.
  • the first spatial candidate prediction group availability information may express, based on predetermined bit information, whether at least one of the motion vectors of the blocks existing in the first spatial candidate prediction group is available as a candidate prediction motion vector and is to be included in the candidate prediction motion vector list of the current prediction unit. The method of setting the first spatial candidate prediction group availability information and the method of calculating the candidate prediction motion vector will be described later in the embodiments of the present invention.
  • a candidate prediction motion vector is calculated in the second spatial candidate prediction group (step S410).
  • the second spatial candidate prediction group may be an upper first block, an upper second block, and an upper third block.
  • the second spatial candidate prediction group availability information may be used to calculate a prediction motion vector in the second spatial candidate prediction group.
  • the second spatial candidate prediction group availability information may express, based on predetermined bit information, whether at least one of the motion vectors of the blocks existing in the second spatial candidate prediction group is available as a candidate prediction motion vector and is to be included in the candidate prediction motion vector list of the current prediction unit.
  • the method of setting the second spatial candidate prediction group availability information and the method of calculating the candidate prediction motion vector will be described later in the embodiments of the present invention.
  • a candidate prediction motion vector is calculated in the temporal candidate prediction unit (step S420).
  • the temporal candidate prediction unit availability information may express information on whether to include the motion vector of the temporal candidate prediction unit as a candidate prediction motion vector in the candidate prediction motion vector list of the current prediction unit based on the predetermined bit information.
  • a method of setting temporal candidate prediction unit availability information and a method of calculating candidate prediction motion vectors will be described later in the embodiments of the present invention.
  • the candidate prediction motion vector list may include the motion vectors calculated through steps S400 to S420, that is, candidate prediction motion vectors calculated from at least one of the first spatial candidate prediction group, the second spatial candidate prediction group, and the temporal candidate prediction unit.
  • among candidate prediction motion vectors belonging to a similar category, all except the one with the highest priority are removed from the candidate prediction motion vector list (step S430).
  • Equation 1 may be used to determine whether the candidate prediction motion vector included in the candidate prediction motion vector list is a motion vector of a similar category.
  • a difference value in the x and y directions of mvA and mvB, which are candidate prediction motion vectors included in the candidate prediction motion vector list, may be obtained, and it may be determined whether the sum of the absolute value of the x-direction difference and the absolute value of the y-direction difference is within a predetermined threshold range. If it is, only one of the two vectors (e.g., the vector with the higher priority) may be left, and the other may be excluded from the candidate prediction motion vector list. For example, if mvA has a higher priority than mvB, only mvA is left in the candidate prediction motion vector list and mvB may be excluded from it.
  • Equation 1 is one embodiment of determining whether two vectors are similar, and various other methods of determining similarity between vectors may be used as long as they do not depart from the essence of the present invention.
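For illustration only, the similarity test described above (the sum of the absolute x-direction and y-direction differences compared against a threshold range) can be sketched as follows; the function name and the tuple representation of the vectors are assumptions, and the threshold value would be chosen by the codec:

```python
def is_similar(mvA, mvB, threshold):
    # Sum of the absolute x-direction and y-direction differences of
    # the two candidate vectors, compared against a threshold range.
    return abs(mvA[0] - mvB[0]) + abs(mvA[1] - mvB[1]) <= threshold
```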
  • through the similarity determination process performed in step S430, only candidate prediction motion vectors that are not similar to one another, among those calculated through steps S400 to S420, may be included in the candidate prediction motion vector list.
  • a zero vector is additionally inserted into the candidate prediction motion vector list (step S440).
  • no candidate prediction motion vector may exist in the candidate prediction motion vector list after the preceding steps.
  • In this case, the zero vector may be included in the candidate prediction motion vector list.
  • This step may be performed together with step S470, the step of inserting additional candidate prediction motion vectors described later; when it is combined with step S470, this step may not be performed separately.
  • It is determined whether the number of candidate prediction motion vectors included in the current candidate prediction motion vector list is greater than or equal to the maximum number of candidate prediction motion vectors that can be included in the list (step S450).
  • the number of candidate prediction motion vectors that may be included in the candidate prediction motion vector list may be limited to a certain number. For example, when the maximum number of candidate prediction motion vectors is limited to two and the number of candidate prediction motion vectors calculated through steps S400 to S420 exceeds that maximum, only the two candidate prediction motion vectors with the higher priority may be included in the candidate prediction motion vector list, and the remaining vector may be excluded from the list.
  • If the number of candidate prediction motion vectors included in the current candidate prediction motion vector list is greater than or equal to the maximum, only as many candidate prediction motion vectors as the maximum number are included in the candidate prediction motion vector list (step S460).
  • the candidate prediction motion vectors, up to the maximum number, may be included in the candidate prediction motion vector list in order of priority, and the remaining candidate prediction motion vectors may be excluded from the list.
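For illustration only, keeping at most the maximum number of candidate prediction motion vectors in priority order can be sketched as follows, assuming the list is already ordered from highest to lowest priority (function name hypothetical):

```python
def truncate_to_max(candidates, max_num):
    # candidates is assumed to be ordered from highest to lowest
    # priority; only the first max_num entries are kept and the
    # remaining candidates are excluded from the list.
    return candidates[:max_num]
```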
  • If the number of candidate prediction motion vectors included in the current candidate prediction motion vector list is less than the maximum, additional candidate prediction motion vectors are included in the candidate prediction motion vector list (step S470).
  • the candidate prediction motion vector list may thus be completed by including additional candidate prediction motion vectors. For example, when the candidate prediction motion vectors included in the current candidate prediction motion vector list do not include a zero vector, the zero vector may be included in the candidate prediction motion vector list as an additional candidate prediction motion vector.
  • the additional candidate prediction motion vector may also be a combination, or a scaled value, of non-zero vectors already present in the candidate prediction motion vector list.
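For illustration only, padding the list with a zero vector as one example of an additional candidate prediction motion vector can be sketched as follows (function name hypothetical):

```python
def add_additional_candidates(candidates, max_num):
    # Pad the list with the zero vector when it is not already present
    # and the list has not yet reached its maximum size. Combinations
    # or scaled values of existing vectors could be added similarly.
    if len(candidates) < max_num and (0, 0) not in candidates:
        candidates.append((0, 0))
    return candidates
```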
  • the prediction motion vector of the current prediction unit is determined based on the index information of the candidate prediction motion vector (step S480).
  • the candidate prediction motion vector index information may indicate which candidate prediction motion vector, among the candidate prediction motion vectors included in the candidate prediction motion vector list calculated through steps S400 to S470, is used as the prediction motion vector of the current prediction unit.
  • the motion vector information of the current prediction unit may be calculated by adding the prediction motion vector of the current prediction unit, determined based on the candidate prediction motion vector index information, to the differential motion vector information, which is the difference between the original motion vector value of the current prediction unit and the prediction motion vector value.
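For illustration only, recovering the motion vector of the current prediction unit from the index information and the differential motion vector can be sketched as follows (function name and tuple representation are assumptions):

```python
def reconstruct_motion_vector(candidate_list, index, mvd):
    # Select the prediction motion vector by the signaled index and
    # add the differential motion vector to recover the motion vector
    # of the current prediction unit.
    mvp = candidate_list[index]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```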
  • FIG. 5 is a conceptual view illustrating a method of classifying a motion vector of a spatial candidate prediction unit by a relationship between a motion vector of a current prediction unit and a motion vector of a spatial candidate prediction unit according to another embodiment of the present invention.
  • the motion vector of the spatial candidate prediction unit calculated from the same reference frame and the same reference picture list as the current prediction unit is referred to as a first motion vector 500.
  • assuming that the reference picture of the current prediction unit 550 is the j picture and that the reference picture list including the j picture is the L0 list,
  • the reference picture indicated by the vector 500 of the spatial candidate prediction unit 570 is the j picture and the reference picture list including the j picture is the L0 list, so the motion vector of the spatial candidate prediction unit 570 and the motion vector of the current prediction unit have the same reference picture and the same reference picture list.
  • the motion vector calculated from the same reference frame and the same list as the current prediction unit is defined as the first motion vector 500.
  • the motion vector of the spatial candidate prediction unit 570 having the same reference frame as the current prediction unit 550 but calculated from different reference picture lists is called a second motion vector 510.
  • assuming that the reference picture of the current prediction unit 550 is the j picture and that the reference picture list containing the j picture is the L0 list,
  • the reference picture pointed to by the vector 510 of the spatial candidate prediction unit 570 is the j picture, but the reference picture list including the j picture is the L1 list, so the motion vector 510 of the spatial candidate prediction unit and the motion vector of the current prediction unit have the same reference picture but different reference picture lists.
  • a motion vector calculated from the same reference frame as, but a different reference picture list from, the current prediction unit is defined as the second motion vector 510.
  • the motion vector of the spatial candidate prediction unit having a reference frame different from the current prediction unit but calculated from the same reference picture list is called a third motion vector 520.
  • assuming that the reference picture of the current prediction unit 550 is the j picture and that the reference picture list containing the j picture is the L0 list,
  • the reference picture pointed to by the vector 520 of the spatial candidate prediction unit 570 is the i picture and the reference picture list including the i picture is the L0 list, so the motion vector of the spatial candidate prediction unit and the motion vector of the current prediction unit have different reference pictures but the same reference picture list.
  • a motion vector calculated from the same reference picture list as, but a different reference frame from, the current prediction unit 550 is defined as the third motion vector 520.
  • since the reference picture of the third motion vector 520 differs from that of the current prediction unit, when this motion vector of the spatial candidate prediction unit is used, the third motion vector 520 may be scaled based on the reference picture of the current prediction unit and then included in the candidate prediction motion vector list.
  • the motion vector of the spatial candidate prediction unit 570 calculated from a reference frame and a reference picture list both different from those of the current prediction unit 550 is called the fourth motion vector 530.
  • assuming that the reference picture of the current prediction unit 550 is the j picture and that the reference picture list containing the j picture is the L0 list, the reference picture pointed to by the vector 530 of the spatial candidate prediction unit 570 is the m picture and the reference picture list including the m picture is the L1 list, so the motion vector of the spatial candidate prediction unit and the motion vector of the current prediction unit have different reference pictures and different reference picture lists.
  • a motion vector calculated from a reference frame and a reference picture list both different from those of the current prediction unit is defined as the fourth motion vector 530. Since the reference picture of the fourth motion vector 530 also differs from that of the current prediction unit 550, when this motion vector of the spatial candidate prediction unit is used, it may be scaled based on the reference picture of the current prediction unit and then included in the candidate prediction motion vector list.
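The scaling operation itself is not spelled out above; a common approach, assumed in the following illustrative sketch, scales the vector by the ratio of the temporal distances (e.g., picture-order-count differences) between each picture and its reference picture. The function name, argument layout, and rounding are assumptions for illustration:

```python
def scale_motion_vector(mv, cur_dist, cand_dist):
    # cur_dist:  temporal distance from the current picture to the
    #            reference picture of the current prediction unit.
    # cand_dist: temporal distance from the candidate's picture to
    #            the candidate's reference picture.
    if cand_dist == 0 or cur_dist == cand_dist:
        return mv  # no scaling needed (or not possible)
    ratio = cur_dist / cand_dist
    return (round(mv[0] * ratio), round(mv[1] * ratio))
```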
  • the motion vector of the spatial candidate prediction unit may be classified into first to fourth motion vectors as described above according to the reference frame and the reference picture list of the current prediction unit.
  • the method of classifying the motion vectors of the spatial candidate prediction units into the first to fourth motion vectors may be used, as described below, to determine which of the motion vectors of the spatial candidate prediction units is to be used as a candidate prediction motion vector.
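For illustration only, the first-to-fourth classification of FIG. 5 can be sketched as follows (function name and argument representation are assumptions):

```python
def classify_motion_vector(cand_ref_pic, cand_ref_list,
                           cur_ref_pic, cur_ref_list):
    # Returns 1..4 following the first-to-fourth motion vector
    # classification of FIG. 5, based on whether the candidate's
    # reference picture and reference picture list match those of
    # the current prediction unit.
    same_pic = cand_ref_pic == cur_ref_pic
    same_list = cand_ref_list == cur_ref_list
    if same_pic and same_list:
        return 1  # first motion vector: no scaling needed
    if same_pic:
        return 2  # second motion vector: same picture, other list
    if same_list:
        return 3  # third motion vector: scaling needed
    return 4      # fourth motion vector: scaling needed
```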
  • FIG. 6 is a flowchart illustrating a method of calculating candidate prediction group availability information and availability information of a temporal candidate prediction unit according to another embodiment of the present invention.
  • Referring to FIG. 6, the method of calculating the spatial candidate prediction group availability information and the availability information of the temporal candidate prediction unit, and the method of calculating the candidate prediction motion vectors, described above in steps S400 to S420 of FIG. 4, will be described.
  • FIG. 6 is a simplified flowchart illustrating a method of calculating availability information and a candidate prediction motion vector.
  • In operation S600, it is determined whether a first motion vector exists in the left first block; if a first motion vector does not exist in the left first block, it is determined whether a second motion vector exists in the left first block.
  • If a motion vector satisfying the condition is found, the first spatial candidate prediction group availability information may be set to 1 to indicate that a candidate prediction motion vector exists in the first spatial candidate prediction group.
  • the value 1 is an arbitrary binary number used to indicate whether a candidate prediction motion vector is present, and the same meaning may be expressed through a different binary code.
  • the binary values 1 and 0 indicating given information are likewise arbitrary, and the corresponding information may be expressed based on another binary code or a code generated using another encoding method.
  • In step S610, it is determined whether the third motion vector or the fourth motion vector exists, in the order of the left first block and then the left second block.
  • If no vector satisfying the condition is found in step S600, that is, after sequentially determining whether the first motion vector or the second motion vector exists in the order of the left first block and then the left second block, the candidate prediction motion vector may be calculated through step S610.
  • In step S610, it is determined whether a third motion vector exists in the left first block; if the third motion vector does not exist in the left first block, it is determined whether a fourth motion vector exists in the left first block.
  • If a motion vector satisfying the condition is found, the first spatial candidate prediction group availability information is set to 1, and the subsequent motion vector presence determination procedure may not be performed.
  • since the third motion vector and the fourth motion vector indicate a reference picture that is not the same as that of the current prediction unit, they may be included in the candidate prediction motion vector list after scaling.
  • If it is determined in step S610 that the third motion vector or the fourth motion vector exists in the left first block or the left second block, information indicating whether scaling has been performed (hereinafter, scaling indication information) may be set to 1 to indicate that scaling has been performed once for the candidate prediction motion vector.
  • the number of scaling operations used to generate candidate prediction motion vectors may be limited. For example, when the number of scaling operations for generating a candidate prediction motion vector is limited to one, the fact that scaling has been performed may be recorded in the flag information indicating whether scaling has occurred, so that additional scaling is not performed. By limiting the number of scaling operations, the complexity of calculating the candidate prediction motion vectors can be greatly reduced.
  • In step S610, if a motion vector satisfying the condition exists based on the sequential determination procedure, the motion vector may be scaled and included in the candidate prediction motion vector list, and the first spatial candidate prediction group availability information may be set to 1.
  • one candidate prediction motion vector may be calculated in the first spatial candidate prediction group through step S400 of FIG. 4.
  • In operation S620, it is determined whether the first motion vector or the second motion vector exists, scanning from the upper first block toward the upper third block.
  • In operation S620, it is determined whether the first motion vector exists in the upper first block; if the first motion vector does not exist in the upper first block, it is determined whether the second motion vector exists in the upper first block.
  • If a motion vector satisfying the condition is found, the subsequent determination procedure may not be performed.
  • the calculated motion vector may be included in the candidate prediction motion vector list, and the second spatial candidate prediction group availability information is set to 1 to indicate that a candidate prediction motion vector exists in the second spatial candidate prediction group.
  • In operation S630, it is determined whether the third motion vector or the fourth motion vector exists, in the order of the upper first block, the upper second block, and the upper third block, according to whether scaling was performed for the first spatial candidate prediction group.
  • If scaling has already been performed, step S630 may not be performed. For example, when the scaling indication information was set to 1 in step S610, step S630 may be skipped. If scaling is still possible, in operation S630 it is determined whether a third motion vector exists in the upper first block; if the third motion vector does not exist in the upper first block, it is determined whether a fourth motion vector exists in the upper first block.
  • since the third motion vector and the fourth motion vector indicate a reference picture that is not the same as that of the current prediction unit, they may be included in the candidate prediction motion vector list after scaling.
  • when a motion vector meeting the condition exists based on the sequential determination procedure, that motion vector may be included in the candidate prediction motion vector list, and the second spatial candidate prediction group availability information may be set to 1.
  • one candidate prediction motion vector may be calculated in the second spatial candidate prediction group through step S410 of FIG. 4.
  • If the candidate prediction motion vectors calculated in steps S620 and S630 exist in a similar category to the candidate prediction motion vector of the first spatial candidate prediction group calculated through steps S600 and S610, the calculated candidate prediction motion vectors may be determined to be unavailable.
  • For example, if the motion vector of the upper first block exists in a similar category to the candidate prediction motion vector of the first spatial candidate prediction group calculated through operations S600 and S610, the motion vector of the upper first block may not be selected as the candidate prediction motion vector.
  • the similar category determination of the motion vector may be determined by Equation 1 described above.
  • Alternatively, the procedure for determining whether the candidate prediction motion vectors calculated in steps S620 and S630 exist in a similar category to the candidate prediction motion vector of the first spatial candidate prediction group calculated through steps S600 and S610 may not be performed in steps S620 and S630.
  • Instead, after performing step S640 of calculating the motion vector in the temporal candidate prediction unit, described below, any candidate prediction motion vectors existing in a similar category among the candidate prediction motion vector of the first spatial candidate prediction group, the candidate prediction motion vector of the second spatial candidate prediction group, and the candidate prediction motion vector of the temporal candidate prediction unit included in the calculated candidate prediction motion vector list may be removed from the list.
  • In step S640, it is determined whether a candidate prediction motion vector exists in the temporal candidate prediction unit.
  • a collocated picture including a temporal candidate prediction unit may be the first picture of reference picture list 1 of the current picture or the first picture of reference picture list 0 of the current picture according to predetermined flag information.
  • For a temporal candidate prediction unit that uses two reference picture lists, only the motion vector existing in one list may be used as the candidate prediction motion vector, based on predetermined flag information. If the distance between the current picture and the reference picture of the current picture differs from the distance between the picture including the temporal candidate prediction unit and the reference picture of the temporal candidate prediction unit, scaling may be performed on the candidate prediction motion vector calculated from the temporal candidate prediction unit.
  • If a candidate prediction motion vector is calculated from the temporal candidate prediction unit, the temporal candidate prediction unit availability information may be set to 1.
  • FIG. 7 through 9 are flowcharts illustrating a method of constructing a candidate predicted motion vector list according to another embodiment of the present invention.
  • FIG. 7 is a flowchart illustrating a method of calculating a candidate prediction motion vector in a first spatial candidate prediction group according to another embodiment of the present invention.
  • In step S700, it is determined whether a first motion vector or a second motion vector exists in the left first block.
  • If such a vector exists, it is included in the candidate prediction motion vector list as the candidate prediction motion vector, and the first spatial candidate prediction group availability information is set to 1. It is then determined whether a first motion vector or a second motion vector exists in the upper first block (step S740).
  • If the first motion vector or the second motion vector does not exist in the left first block, it is determined whether the first motion vector or the second motion vector exists in the left second block (step S710).
  • If such a vector exists, it is included in the candidate prediction motion vector list as the candidate prediction motion vector, and the first spatial candidate prediction group availability information is set to 1. It is then determined whether a first motion vector or a second motion vector exists in the upper first block (step S740).
  • If the first motion vector or the second motion vector does not exist in the left second block, it is determined whether the third motion vector or the fourth motion vector exists in the left first block (step S720).
  • If such a vector exists, it is scaled (the scaling indication information is set to 1) and included in the candidate prediction motion vector list as the candidate prediction motion vector, and the first spatial candidate prediction group availability information is set to 1 (step S725). It is then determined whether a first motion vector or a second motion vector exists in the upper first block (step S740).
  • If not, it is determined whether the third motion vector or the fourth motion vector exists in the left second block (step S730).
  • If such a vector exists, it is scaled (the scaling indication information is set to 1) and included in the candidate prediction motion vector list as the candidate prediction motion vector, and the first spatial candidate prediction group availability information is set to 1 (step S725). It is then determined whether a first motion vector or a second motion vector exists in the upper first block (step S740).
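For illustration only, the scan of the first spatial candidate prediction group shown in FIG. 7 can be sketched as follows; the dictionary layout and names are assumptions, with each available block described by its FIG. 5 vector type (1 to 4) and its motion vector:

```python
def scan_first_spatial_group(blocks):
    # blocks: dict mapping "left_first"/"left_second" to
    # {"type": 1..4, "mv": (x, y)}, or omitted if unavailable.
    order = ("left_first", "left_second")
    # Steps S700/S710: look for a first or second motion vector.
    for name in order:
        b = blocks.get(name)
        if b and b["type"] in (1, 2):
            return b["mv"], False        # no scaling performed
    # Steps S720/S730: look for a third or fourth motion vector,
    # which would be scaled before insertion (scaling flag set).
    for name in order:
        b = blocks.get(name)
        if b and b["type"] in (3, 4):
            return b["mv"], True         # scaling indication = 1
    return None, False                   # group unavailable
```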
  • FIG. 8 is a flowchart illustrating a method of calculating candidate prediction motion vectors (first motion vector and second motion vector) in a second spatial candidate prediction group according to another embodiment of the present invention.
  • In step S800, it is determined whether a first motion vector or a second motion vector exists in the upper first block.
  • If such a vector exists, it is included in the candidate prediction motion vector list, and the second spatial candidate prediction group availability information is set to 1 (step S815). It is then determined whether a candidate prediction motion vector of the temporal candidate prediction unit exists (step S1000 of FIG. 10).
  • If the first motion vector or the second motion vector does not exist in the upper first block, it is determined whether the first motion vector or the second motion vector exists in the upper second block (step S810).
  • If such a vector exists, it is included in the candidate prediction motion vector list, and the second spatial candidate prediction group availability information is set to 1 (step S815). It is then determined whether a candidate prediction motion vector of the temporal candidate prediction unit exists (step S1000 of FIG. 10).
  • If not, it is determined whether the first motion vector or the second motion vector exists in the upper third block (step S820).
  • If such a vector exists, it is included in the candidate prediction motion vector list, and the second spatial candidate prediction group availability information is set to 1 (step S815). It is then determined whether a candidate prediction motion vector of the temporal candidate prediction unit exists (step S1000 of FIG. 10).
  • FIG. 9 is a flowchart illustrating a method of calculating candidate prediction motion vectors (third motion vector, fourth motion vector) in a second spatial candidate prediction group according to another embodiment of the present invention.
  • If there is no first motion vector or second motion vector in the upper third block, it is determined whether scaling was performed in the first spatial candidate prediction group (step S900).
  • When scaling was performed in the first spatial candidate prediction group, it is determined whether a candidate prediction motion vector of the temporal candidate prediction unit exists, without calculating an additional candidate prediction motion vector in the second spatial candidate prediction group (step S1000).
  • When scaling was not performed in the first spatial candidate prediction group, it is determined whether a third motion vector or a fourth motion vector exists in the upper first block (step S905).
  • In this way, the candidate prediction motion vector may be calculated while limiting the number of scaling operations.
  • If so, the vector is scaled and included in the candidate prediction motion vector list, the second spatial candidate prediction group availability information is set to 1 (step S915), and it is determined whether a candidate prediction motion vector of the temporal candidate prediction unit exists (step S1000 of FIG. 10).
  • If the third motion vector or the fourth motion vector does not exist in the upper first block, it is determined whether the third motion vector or the fourth motion vector exists in the upper second block (step S910).
  • If so, the vector is scaled and included in the candidate prediction motion vector list, the second spatial candidate prediction group availability information is set to 1 (step S915), and it is determined whether a candidate prediction motion vector of the temporal candidate prediction unit exists (step S1000 of FIG. 10).
  • When there is no third motion vector or fourth motion vector in the upper second block and scaling was not performed in the first spatial candidate prediction group (the scaling state indication information is 0), it is determined whether the third motion vector or the fourth motion vector exists in the upper third block (step S920).
  • If so, the vector is scaled and included in the candidate prediction motion vector list, the second spatial candidate prediction group availability information is set to 1 (step S915), and it is determined whether a candidate prediction motion vector of the temporal candidate prediction unit exists (step S1000 of FIG. 10).
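The scan order described in FIGS. 8 and 9 can be sketched as follows: unscaled (first/second) motion vectors are preferred, and scaled (third/fourth) motion vectors are used only when the first spatial candidate prediction group did not already consume the scaling budget. This is an illustrative sketch only; the block representation, the vector labels, and the placeholder scaling factor are assumptions, not the normative procedure of the application.

```python
# Hypothetical sketch of the second spatial candidate prediction group scan.
# Each block is modeled as a dict that may hold an 'unscaled' motion vector
# (first/second motion vector) or a 'scaled_source' vector (third/fourth
# motion vector, which needs scaling before use).

def scan_second_spatial_group(blocks, scaling_already_used):
    """Scan the upper first, second, and third blocks in order.

    Returns (motion vector or None, availability information, scaling used).
    """
    # Pass 1: look for a first or second motion vector (no scaling needed).
    for block in blocks:
        mv = block.get('unscaled')
        if mv is not None:
            return mv, True, False          # availability information = 1
    # Pass 2: scaled candidates, only if scaling was not used in the
    # first spatial candidate prediction group (limits scaling count).
    if not scaling_already_used:
        for block in blocks:
            mv = block.get('scaled_source')
            if mv is not None:
                scaled = (mv[0] * 2, mv[1] * 2)   # placeholder scaling only
                return scaled, True, True
    return None, False, False               # availability information = 0

blocks = [{'unscaled': None}, {'unscaled': None, 'scaled_source': (3, -1)}, {}]
mv, available, used_scaling = scan_second_spatial_group(
    blocks, scaling_already_used=False)
```

With the sample blocks above, pass 1 finds nothing, and pass 2 returns the scaled third/fourth vector from the upper second block while marking the group as available.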
  • FIG. 10 is a flowchart illustrating a method of calculating a candidate prediction motion vector in a second spatial candidate prediction group according to another embodiment of the present invention.
  • Unlike FIG. 9, in FIG. 10, when a candidate prediction motion vector is not calculated in the first spatial candidate prediction group, even when the first motion vector or the second motion vector is calculated as the candidate prediction motion vector in the second spatial candidate prediction group, it may additionally be determined whether a third motion vector or a fourth motion vector exists in the second spatial candidate prediction group, and that vector may be used as a candidate prediction motion vector.
  • When the first motion vector or the second motion vector is calculated as the candidate prediction motion vector in the second spatial candidate prediction group (step S815), it is determined whether the first spatial candidate prediction group availability information is 1 (step S900-1).
  • The second spatial candidate prediction group availability information is set to 1 when the first motion vector or the second motion vector is calculated as the candidate prediction motion vector in the second spatial candidate prediction group.
  • Based on the first spatial candidate prediction group availability information, if it is determined that no candidate prediction motion vector was calculated in the first spatial candidate prediction group, the scan may be further performed to determine whether a third motion vector or a fourth motion vector exists in the second spatial candidate prediction group.
  • To perform such a scan, the second spatial candidate prediction group availability information set through step S815 may be reset to zero.
  • Step S905-1 may then be performed to determine whether such a motion vector exists.
  • FIG. 11 is a flowchart illustrating a method of calculating a candidate prediction motion vector in a temporal candidate prediction group according to another embodiment of the present invention.
  • It is determined whether a candidate prediction motion vector of the temporal candidate prediction unit exists (step S1000).
  • If so, the vector is included in the candidate prediction motion vector list and the temporal candidate prediction unit availability information is set to 1 (step S1010).
  • The scaling may vary according to the distance between the picture including the current temporal candidate prediction unit and the reference picture referenced by the temporal candidate prediction unit.
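The distance-dependent scaling just mentioned can be illustrated with a simple picture-order-count ratio. The text does not give the formula, so the POC arguments and the exact ratio below are assumptions for illustration only, in the spirit of common temporal motion vector scaling.

```python
from fractions import Fraction

def scale_temporal_mv(mv, cur_poc, cur_ref_poc, col_poc, col_ref_poc):
    """Stretch a co-located motion vector by the ratio of the current
    picture-to-reference distance over the co-located picture-to-reference
    distance (illustrative sketch, not the normative derivation)."""
    ratio = Fraction(cur_poc - cur_ref_poc, col_poc - col_ref_poc)
    # Truncate each scaled component back to an integer vector component.
    return (int(mv[0] * ratio), int(mv[1] * ratio))

# Current distance 8-4=4, co-located distance 6-4=2, so the vector doubles.
scaled = scale_temporal_mv((4, -2), cur_poc=8, cur_ref_poc=4,
                           col_poc=6, col_ref_poc=4)
```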
  • It is determined whether similar-category vectors exist among the candidate prediction motion vectors included in the calculated candidate prediction motion vector list (step S1020).
  • A zero vector is added to the candidate prediction motion vector list as a candidate prediction motion vector (step S1040).
  • The similar-category vectors are removed from the candidate prediction motion vector list, except for the candidate prediction motion vector having the highest priority (step S1030).
  • The additional candidate prediction motion vectors are included in the candidate prediction motion vector list (step S1050).
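Steps S1020 through S1050 amount to a small pruning routine: candidates judged to lie in a similar range are dropped except the highest-priority one, and a zero vector backstops an empty list. In this sketch, a component-wise threshold test stands in for the flowchart's similarity decision; the threshold value and the ordering convention are assumptions.

```python
# Minimal sketch of similar-range pruning for a candidate prediction
# motion vector list. Candidates are (x, y) vectors ordered by priority,
# index 0 being the highest priority.

def prune_candidate_list(candidates, threshold=1):
    kept = []
    for mv in candidates:
        # A candidate is "similar" if both components lie within the
        # threshold of an already-kept (higher-priority) candidate.
        similar = any(abs(mv[0] - k[0]) < threshold and
                      abs(mv[1] - k[1]) < threshold for k in kept)
        if not similar:
            kept.append(mv)          # keep only the highest-priority one
    if not kept:
        kept.append((0, 0))          # fall back to the zero vector
    return kept

pruned = prune_candidate_list([(4, 2), (4, 2), (9, -3)])
```

Here the duplicate (4, 2) is removed while (9, -3), which lies outside the similar range, survives.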
  • The procedure of determining whether a candidate prediction motion vector corresponds to a vector of a similar category to the candidate prediction motion vector calculated in the first spatial candidate prediction group may also be performed in FIGS. 8 and 9 when calculating a candidate prediction motion vector in the second spatial candidate prediction group.
  • The procedure for calculating the candidate prediction motion vector described above is one embodiment for specifying a method of removing candidate prediction motion vectors having a similar range.
  • The number and position of the first spatial candidate prediction units, the number and position of the second spatial candidate prediction units, and the position of the temporal candidate prediction unit may vary; furthermore, the addition of extra vectors and the arbitrary limit on the number of candidate prediction motion vectors may also change.
  • Table 1 below shows a method for determining a similar-range motion vector.
  • An index (listIdx) is assigned to each candidate prediction motion vector included in the candidate prediction motion vector list, the number of candidate prediction motion vectors in the current candidate prediction motion vector list is set to numMVPCand, and it is then determined whether the candidates are candidate prediction motion vectors of similar categories.
  • The diff value used to determine whether the categories are similar may be any value.
  • Such a threshold may have an optimal value for each sequence, picture, or slice, and an optimal threshold value may be set for each such unit.
  • A new syntax element mvp_removal_threshold can be defined and sent in the SPS, PPS, or slice header. Because inter prediction is not performed for an I slice, whether the syntax element mvp_removal_threshold is transmitted may vary depending on the slice type.
  • The candidate prediction motion vector list may then be reconstructed using only the candidate prediction motion vectors remaining after the similar-category vectors are excluded.
  • The similar-category vector determination method described above with reference to Table 1 and FIG. 11 is one embodiment; other similar-category vector determination methods may also be used, and these are likewise included in the scope of the present invention.
  • FIG. 12 is a flowchart illustrating a method of determining a similar-category vector according to another embodiment of the present invention.
  • Information on the candidate prediction motion vector list and the number of candidate prediction motion vectors present in the candidate prediction motion vector list is received (step S1100).
  • Information about the candidate prediction motion vectors may be provided based on a predetermined variable indicating the number of candidate prediction motion vectors.
  • An index is set for each candidate prediction motion vector (step S1110).
  • For example, index 3 may be set for the candidate prediction motion vector calculated in the first spatial candidate prediction group, index 2 for the candidate prediction motion vector calculated in the second spatial candidate prediction group, and index 1 for the candidate prediction motion vector calculated in the temporal candidate prediction unit.
  • It is determined whether the candidate prediction motion vector having a predetermined index value and a candidate prediction motion vector having an index value less than or equal to that predetermined value are similar-range motion vectors (step S1120).
  • For example, for the candidate prediction motion vector having an index value of 3, it may be determined whether it and the candidate prediction motion vector having an index value of 2, or the candidate prediction motion vector having an index value of 1, are similar-range motion vectors.
  • Likewise, for the candidate prediction motion vector having an index value of 2, it may be determined whether it and the candidate prediction motion vector having an index value of 1 are similar-range motion vectors.
  • The method of Equation 1 described above may be used to determine whether vectors are similar-range motion vectors.
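The index-ordered comparison just described, where each candidate is tested against every candidate with a smaller index using an Equation 1-style component-wise threshold, might look like the sketch below. The index assignment, the dictionary representation, and the threshold are illustrative assumptions.

```python
# Hypothetical sketch of the index-ordered similar-range comparison.
# indexed_mvs maps an index value (e.g. 3, 2, 1) to an (x, y) motion vector.

def find_similar_pairs(indexed_mvs, threshold=1):
    """Return (higher_index, lower_index) pairs judged to be
    similar-range motion vectors."""
    similar = []
    items = sorted(indexed_mvs.items(), reverse=True)   # highest index first
    for i, (idx, mv) in enumerate(items):
        for jdx, other in items[i + 1:]:                # all smaller indices
            # Component-wise threshold test in the spirit of Equation 1.
            if (abs(mv[0] - other[0]) < threshold and
                    abs(mv[1] - other[1]) < threshold):
                similar.append((idx, jdx))
    return similar

pairs = find_similar_pairs({3: (2, 2), 2: (2, 2), 1: (7, 0)})
```

With these sample vectors, only the candidates at indices 3 and 2 fall within the similar range of each other.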
  • Whether to use the method of removing similar-category candidate prediction motion vectors may be signaled as ON/OFF information by mvp_removal_flag, a newly defined syntax element, which may be sent in a PPS or slice header. When the method of removing similar-category candidate prediction motion vectors is not used, only identical candidate prediction motion vectors may be removed.
  • If the syntax element mvp_removal_flag is defined to determine whether to use the method of removing similar-category candidate prediction motion vectors, the mvp_removal_threshold value may be transmitted for slices other than I slices when mvp_removal_flag indicates that the similar-category removal method is used.
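The signaling dependency described above can be sketched as follows: the threshold is emitted only when the removal flag is on and the slice can carry inter-prediction syntax. The syntax element names mirror the text (mvp_removal_flag, mvp_removal_threshold), but the dictionary "header" representation is an assumption standing in for real bitstream writing.

```python
# Sketch of conditional signaling for the similar-range removal syntax.

def write_pruning_syntax(slice_type, use_removal, threshold):
    """Build the pruning-related syntax for one header (illustrative).

    mvp_removal_flag switches the removal method on or off; the threshold
    follows only when the flag is set and the slice is not an I slice,
    since I slices do not perform inter prediction."""
    syntax = {'mvp_removal_flag': int(use_removal)}
    if use_removal and slice_type != 'I':
        syntax['mvp_removal_threshold'] = threshold
    return syntax

example = write_pruning_syntax('P', use_removal=True, threshold=2)
```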
  • The image encoding and image decoding methods described above may be implemented in the components of the image encoder and image decoder apparatuses described above with reference to FIGS. 1 and 2.

Abstract

The present invention relates to a method for removing vectors having similar ranges from a candidate prediction mode list, and to a device using such a method. The method for removing vectors having similar ranges from a candidate prediction mode list, and the device using such a method, comprise the step of removing motion vectors having similar ranges from a candidate prediction motion vector list if it is determined, at a given step, that motion vectors having similar ranges are present among the produced candidate prediction motion vectors. Consequently, encoding/decoding efficiency can be increased by including various candidate prediction motion vectors in a candidate prediction motion vector list and by removing the candidate prediction motion vectors having similar ranges.
PCT/KR2011/008999 2011-05-19 2011-11-23 Method for removing vectors having similar ranges from a candidate prediction mode list, and device using such a method WO2012157826A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201161487714P 2011-05-19 2011-05-19
US61/487,714 2011-05-19
US201161498600P 2011-06-19 2011-06-19
US61/498,600 2011-06-19

Publications (1)

Publication Number Publication Date
WO2012157826A1 (fr)

Family

ID=47177129

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2011/008999 WO2012157826A1 (fr) 2011-05-19 2011-11-23 Method for removing vectors having similar ranges from a candidate prediction mode list, and device using such a method

Country Status (1)

Country Link
WO (1) WO2012157826A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110100440A (zh) * 2016-12-22 2019-08-06 株式会社Kt Video signal processing method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR950007541A (ko) * 1993-08-31 1995-03-21 배순훈 Method for selecting a variable number of candidates in a motion estimation algorithm using local minima
KR19990031322A (ko) * 1997-10-10 1999-05-06 전주범 Motion vector encoding method
KR100728031B1 (ko) * 2006-01-23 2007-06-14 Samsung Electronics Co., Ltd. Method and apparatus for determining an encoding mode for variable-block-size motion estimation


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110100440A (zh) * 2016-12-22 2019-08-06 株式会社Kt Video signal processing method and device
CN110100440B (zh) * 2016-12-22 2023-04-25 株式会社Kt Method for decoding and encoding video


Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 11865695; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 11865695; Country of ref document: EP; Kind code of ref document: A1)