WO2012177053A2 - Image encoding/decoding method and apparatus therefor - Google Patents

Image encoding/decoding method and apparatus therefor

Info

Publication number
WO2012177053A2
WO2012177053A2 (PCT application PCT/KR2012/004883)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
prediction
block
filtering
current block
Prior art date
Application number
PCT/KR2012/004883
Other languages
English (en)
French (fr)
Korean (ko)
Other versions
WO2012177053A3 (ko)
Inventor
이진호
김휘용
임성창
최진수
김진웅
Original Assignee
Electronics and Telecommunications Research Institute (한국전자통신연구원)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (see https://patents.darts-ip.com/?family=47906320). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Priority to IN2639CHN2014 (IN2014CN02639A)
Priority to US13/983,207 (US9332262B2)
Priority to BR112013021229-2A (BR112013021229B1)
Priority to EP17162474.5A (EP3217665B1)
Priority to CA2828462A (CA2828462C)
Priority to BR122021025309-9A (BR122021025309B1)
Priority to DK12803047.5T (DK2723078T3)
Priority to BR122021025319-6A (BR122021025319B1)
Priority to EP12803047.5A (EP2723078B1)
Priority to BR112014010333-0A (BR112014010333B1)
Priority to CN201280011184.0A (CN103404151B)
Priority to EP19197782.6A (EP3614668B1)
Priority to JP2014516915A (JP5976793B2)
Priority to EP23207377.5A (EP4307682A3)
Application filed by Electronics and Telecommunications Research Institute (한국전자통신연구원)
Publication of WO2012177053A2 (ko)
Publication of WO2012177053A3 (ko)
Priority to US14/202,943 (US9154781B2)
Priority to US14/220,724 (US9225981B2)
Priority to US14/221,794 (US9591327B2)
Priority to US15/067,764 (US9900618B2)
Priority to US15/069,314 (US10021416B2)
Priority to US15/070,155 (US10003820B2)
Priority to US15/410,388 (US10205964B2)
Priority to US16/206,696 (US10516897B2)
Priority to US16/205,945 (US10536717B2)
Priority to US16/546,786 (US10904569B2)
Priority to US16/546,795 (US10979734B2)
Priority to US16/546,835 (US10979735B2)
Priority to US16/546,930 (US10986368B2)
Priority to US17/202,935 (US11711541B2)
Priority to US17/221,229 (US11689742B2)
Priority to US18/330,440 (US20230319308A1)
Priority to US18/340,762 (US20230336779A1)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/593: using predictive coding involving spatial prediction techniques
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N 19/117: Filters, e.g. for pre-processing or post-processing
    • H04N 19/122: Selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N 19/136: Incoming video signal characteristics or properties
    • H04N 19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N 19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N 19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/176: the coding unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H04N 19/182: the coding unit being a pixel
    • H04N 19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N 19/82: Details of filtering operations involving filtering within a prediction loop
    • H04N 19/96: Tree coding, e.g. quad-tree coding

Definitions

  • The present invention relates to image processing and, more particularly, to an intra prediction method and an apparatus therefor.
  • Image compression may use an inter prediction technique, which predicts pixel values in the current picture from temporally previous and/or subsequent pictures, and an intra prediction technique, which predicts pixel values in the current picture using pixel information within the current picture.
  • An entropy encoding technique, which allocates short codes to symbols with a high frequency of appearance and long codes to symbols with a low frequency of appearance, may also be used.
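The entropy-coding idea above, short codes for frequent symbols and long codes for rare ones, can be illustrated with a toy Huffman construction. This is illustrative only: real codecs in this patent family use CABAC or fixed VLC tables, and the function name below is an invention of this sketch.

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return a code length per symbol: frequent symbols get shorter codes."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap of (total frequency, tie-break id, {symbol: depth so far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every contained symbol one level deeper.
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

lengths = huffman_code_lengths("aaaaabbbc")  # 'a' is most frequent
```

With the input `"aaaaabbbc"`, the most frequent symbol `'a'` ends up with the shortest code length, matching the principle stated above.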
  • An object of the present invention is to provide an image encoding method and apparatus capable of improving image encoding / decoding efficiency.
  • Another object of the present invention is to provide an image decoding method and apparatus capable of improving image encoding / decoding efficiency.
  • Another technical problem of the present invention is to provide a prediction block generation method and apparatus for improving image encoding / decoding efficiency.
  • Another object of the present invention is to provide an intra prediction method and apparatus capable of improving image encoding / decoding efficiency.
  • Another technical problem of the present invention is to provide a method and apparatus for performing filtering that can improve image encoding / decoding efficiency.
  • One embodiment of the present invention is a video decoding method.
  • The method may include generating a prediction block by performing intra prediction on a current block, generating a final prediction block by performing filtering on a filtering target pixel in the prediction block based on an intra prediction mode of the current block, and generating a reconstructed block based on the final prediction block and a reconstructed residual block corresponding to the current block, wherein the filtering target pixel is a prediction pixel included in a filtering target region within the prediction block.
  • The filtering target region and the filter type applied to the filtering target pixel are determined based on the intra prediction mode of the current block.
  • When the intra prediction mode of the current block is a DC mode, the filtering target region may include a left vertical prediction pixel line, which is the single vertical pixel line located leftmost in the prediction block, and a top horizontal prediction pixel line, which is the single horizontal pixel line located topmost in the prediction block.
  • filtering may be performed when the current block is a luma component block, and filtering is not performed when the current block is a chroma component block.
  • the filter type may include information about a filter shape, a filter tap, and a plurality of filter coefficients.
  • Filtering may be performed based on a predetermined fixed filter type, regardless of the size of the current block.
  • When the filtering target pixel is the top-left prediction pixel located at the top-left corner of the prediction block, filtering may be performed on the filtering target pixel by applying a 3-tap filter based on the filtering target pixel, a top reference pixel adjacent to the top of the filtering target pixel, and a left reference pixel adjacent to the left of the filtering target pixel, wherein the top reference pixel and the left reference pixel are each reconstructed reference pixels adjacent to the current block.
  • In the 3-tap filter, the filter coefficient assigned to the filter tap corresponding to the filtering target pixel is 2/4, the filter coefficient assigned to the filter tap corresponding to the top reference pixel is 1/4, and the filter coefficient assigned to the filter tap corresponding to the left reference pixel is 1/4.
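As an illustration only (the function name and the integer round-to-nearest convention below are assumptions of this sketch, not part of the claims), the 3-tap [1/4, 2/4, 1/4] filter on the top-left prediction pixel can be written with integer arithmetic as:

```python
def filter_top_left(pred_px, top_ref, left_ref):
    """3-tap [1/4, 2/4, 1/4] smoothing of the top-left prediction pixel.

    pred_px:  predicted value at the block's top-left corner (weight 2/4)
    top_ref:  reconstructed reference pixel directly above it (weight 1/4)
    left_ref: reconstructed reference pixel directly to its left (weight 1/4)
    The "+ 2" implements round-to-nearest before the divide-by-4 shift.
    """
    return (top_ref + 2 * pred_px + left_ref + 2) >> 2
```

For example, `filter_top_left(128, 100, 120)` evaluates to `(100 + 256 + 120 + 2) >> 2`, i.e. 119.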
  • When the filtering target pixel is a prediction pixel included in the left vertical prediction pixel line and is not the top-left prediction pixel, filtering may be performed on the filtering target pixel by applying a horizontal 2-tap filter based on the filtering target pixel and the left reference pixel adjacent to the left of the filtering target pixel, wherein the left reference pixel is a reconstructed reference pixel adjacent to the current block.
  • In the horizontal 2-tap filter, the filter coefficient assigned to the filter tap corresponding to the filtering target pixel may be 3/4, and the filter coefficient assigned to the filter tap corresponding to the left reference pixel may be 1/4.
  • In the final prediction block generating step, filtering may also be performed on the filtering target pixel by applying a vertical 2-tap filter based on the filtering target pixel and a top reference pixel adjacent to the top of the filtering target pixel, wherein the top reference pixel is a reconstructed reference pixel adjacent to the current block.
  • In the vertical 2-tap filter, the filter coefficient assigned to the filter tap corresponding to the filtering target pixel may be 3/4, and the filter coefficient assigned to the filter tap corresponding to the top reference pixel may be 1/4.
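Putting the three filters together, a non-normative sketch of DC-mode boundary filtering over a square prediction block might look like the following; the array layout (`pred[y][x]`) and the helper name are assumptions of this sketch:

```python
def filter_dc_prediction(pred, top_ref, left_ref):
    """Filter the boundary pixels of a DC-mode prediction block.

    pred:     square prediction block, indexed pred[y][x]
    top_ref:  reconstructed pixels above the current block, top_ref[x]
    left_ref: reconstructed pixels left of the current block, left_ref[y]
    The corner uses the 3-tap [1/4, 2/4, 1/4] filter; the rest of the top
    row and left column use the 2-tap [3/4, 1/4] filters. Interior pixels
    are left untouched.
    """
    n = len(pred)
    out = [row[:] for row in pred]
    out[0][0] = (top_ref[0] + 2 * pred[0][0] + left_ref[0] + 2) >> 2
    for x in range(1, n):  # top horizontal prediction pixel line
        out[0][x] = (top_ref[x] + 3 * pred[0][x] + 2) >> 2
    for y in range(1, n):  # left vertical prediction pixel line
        out[y][0] = (left_ref[y] + 3 * pred[y][0] + 2) >> 2
    return out
```

With a flat prediction of 100, a top reference row of 120 and a left reference column of 80, the top row is pulled up toward 120, the left column down toward 80, and the interior stays at 100.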
  • The method may include generating a prediction block by performing prediction on a prediction target pixel in the current block based on the intra prediction mode of the current block, and generating a reconstructed block based on the final prediction block and the reconstructed residual block corresponding to the current block. In the prediction block generating step, when the intra prediction mode of the current block is a vertical mode and the prediction target pixel is a pixel on a left vertical pixel line, prediction is performed on the prediction target pixel based on a first offset; when the intra prediction mode of the current block is a horizontal mode and the prediction target pixel is a pixel on a top horizontal pixel line, prediction is performed on the prediction target pixel based on a second offset. The left vertical pixel line is the single vertical pixel line located leftmost within the current block, and the top horizontal pixel line is the single horizontal pixel line located topmost within the current block.
  • When the intra prediction mode of the current block is a vertical mode and the prediction target pixel is a pixel on the left vertical pixel line, the prediction value of the prediction target pixel may be derived by adding the first offset value to the pixel value of a first reference pixel that lies on the same vertical line as the prediction target pixel among the reconstructed reference pixels adjacent to the top of the current block. The first offset value may be determined based on the difference between the pixel value of a second reference pixel adjacent to the left of the prediction target pixel and the pixel value of a third reference pixel adjacent to the left of the first reference pixel.
  • The pixel value of the first reference pixel may be determined as the prediction value of the prediction target pixel.
  • When the intra prediction mode of the current block is a horizontal mode and the prediction target pixel is a pixel on the top horizontal pixel line, the prediction value of the prediction target pixel may be derived by adding the second offset value to the pixel value of a first reference pixel that lies on the same horizontal line as the prediction target pixel among the reconstructed reference pixels adjacent to the left of the current block. The second offset value may be determined based on the difference between the pixel value of a second reference pixel adjacent to the top of the prediction target pixel and the pixel value of a third reference pixel adjacent to the top of the first reference pixel.
  • The pixel value of the first reference pixel may be determined as the prediction value of the prediction target pixel.
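A minimal sketch of the offset-based vertical-mode prediction described above (the horizontal mode is symmetric, with rows and columns swapped). The function and variable names are hypothetical, and a real codec would additionally clip the result to the valid pixel range:

```python
def predict_vertical(n, top_ref, left_ref, top_left):
    """Vertical intra prediction with the first-offset refinement.

    Every pixel copies the reconstructed reference directly above it
    (top_ref[x]); pixels on the left vertical pixel line additionally add
    the offset (left_ref[y] - top_left), i.e. the difference between the
    reference to their left and the reference to the left of top_ref[0].
    """
    pred = [[top_ref[x] for x in range(n)] for _ in range(n)]
    for y in range(n):
        pred[y][0] = top_ref[0] + (left_ref[y] - top_left)
    return pred
```

For a 2x2 block with `top_ref = [50, 60]`, `left_ref = [55, 40]` and a top-left corner of 48, the leftmost column becomes 50 + 7 = 57 and 50 - 8 = 42, while the other column simply copies 60.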
  • Another embodiment of the present invention is a video decoding apparatus. The apparatus may include a prediction block generation unit configured to generate a prediction block by performing intra prediction on a current block, and a filter unit configured to perform filtering on the filtering target pixel in the prediction block based on an intra prediction mode of the current block.
  • The filtering target pixel is a prediction pixel included in the filtering target region within the prediction block, and the filtering target region and the filter type applied to the filtering target pixel are determined based on the intra prediction mode of the current block.
  • When the intra prediction mode of the current block is a DC mode, the filtering target region may include a left vertical prediction pixel line, which is the single vertical pixel line located leftmost in the prediction block, and a top horizontal prediction pixel line, which is the single horizontal pixel line located topmost in the prediction block.
  • The filter unit may perform filtering on the filtering target pixel by applying a 3-tap filter based on the filtering target pixel, a top reference pixel adjacent to the top of the filtering target pixel, and a left reference pixel adjacent to the left of the filtering target pixel, wherein the top reference pixel and the left reference pixel are each reconstructed reference pixels adjacent to the current block.
  • In the 3-tap filter, the filter coefficient assigned to the filter tap corresponding to the filtering target pixel is 2/4, the filter coefficient assigned to the filter tap corresponding to the top reference pixel is 1/4, and the filter coefficient assigned to the filter tap corresponding to the left reference pixel is 1/4.
  • The filter unit may perform filtering on the filtering target pixel by applying a horizontal 2-tap filter based on the filtering target pixel and the left reference pixel adjacent to the left of the filtering target pixel, wherein the left reference pixel is a reconstructed reference pixel adjacent to the current block.
  • In the horizontal 2-tap filter, the filter coefficient assigned to the filter tap corresponding to the filtering target pixel may be 3/4, and the filter coefficient assigned to the filter tap corresponding to the left reference pixel may be 1/4.
  • The filter unit may perform filtering on the filtering target pixel by applying a vertical 2-tap filter based on the filtering target pixel and a top reference pixel adjacent to the top of the filtering target pixel, wherein the top reference pixel is a reconstructed reference pixel adjacent to the current block.
  • In the vertical 2-tap filter, the filter coefficient assigned to the filter tap corresponding to the filtering target pixel may be 3/4, and the filter coefficient assigned to the filter tap corresponding to the top reference pixel may be 1/4.
  • Another embodiment of the present invention is a video decoding apparatus. The apparatus may include a prediction block generator configured to generate a prediction block by performing prediction on a prediction target pixel in the current block based on an intra prediction mode of the current block, and a reconstructed block generation unit configured to generate a reconstructed block based on the reconstructed residual block corresponding to the current block and the final prediction block.
  • When the intra prediction mode of the current block is a vertical mode and the prediction target pixel is a pixel on the left vertical pixel line, the prediction block generator predicts the prediction target pixel based on a first offset; when the intra prediction mode of the current block is a horizontal mode and the prediction target pixel is a pixel on the top horizontal pixel line, it predicts the prediction target pixel based on a second offset. The left vertical pixel line is the single vertical pixel line located leftmost within the current block, and the top horizontal pixel line is the single horizontal pixel line located topmost within the current block.
  • When the intra prediction mode of the current block is a vertical mode and the prediction target pixel is a pixel on the left vertical pixel line, the prediction block generator may derive the prediction value of the prediction target pixel by adding the first offset value to the pixel value of a first reference pixel that lies on the same vertical line as the prediction target pixel among the reconstructed reference pixels adjacent to the top of the current block.
  • The first offset value may be determined based on the difference between the pixel value of the second reference pixel adjacent to the left of the prediction target pixel and the pixel value of the third reference pixel adjacent to the left of the first reference pixel.
  • When the intra prediction mode of the current block is a horizontal mode and the prediction target pixel is a pixel on the top horizontal pixel line, the prediction block generator may derive the prediction value of the prediction target pixel by adding the second offset value to the pixel value of a first reference pixel that lies on the same horizontal line as the prediction target pixel among the reconstructed reference pixels adjacent to the left of the current block. The second offset value may be determined based on the difference between the pixel value of a second reference pixel adjacent to the top of the prediction target pixel and the pixel value of a third reference pixel adjacent to the top of the first reference pixel.
  • According to the image encoding method of the present invention, image encoding/decoding efficiency can be improved.
  • According to the image decoding method of the present invention, image encoding/decoding efficiency can be improved.
  • According to the prediction block generation method, the intra prediction method, and the filtering method of the present invention, image encoding/decoding efficiency can likewise be improved.
  • FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
  • FIG. 3 is a conceptual diagram schematically illustrating an embodiment in which one unit is divided into a plurality of sub-units.
  • FIGS. 4A and 4B are diagrams for describing an embodiment of an intra prediction process.
  • FIG. 5 is a diagram schematically showing an embodiment of an intra prediction method in a planar mode.
  • FIG. 6 is a flowchart schematically showing an embodiment of an image encoding method according to the present invention.
  • FIG. 7 is a diagram schematically showing an embodiment of the above-described difference block generation process.
  • FIG. 8 is a flowchart schematically showing an embodiment of an image decoding method according to the present invention.
  • FIG. 9 is a diagram schematically showing an embodiment of the above-described difference block generation process.
  • FIG. 10 is a flowchart schematically illustrating an embodiment of a filtering performing method according to the present invention.
  • FIG. 11 is a diagram schematically illustrating an embodiment of a method of determining whether to perform filtering based on encoding parameters of a neighboring block adjacent to a current block.
  • FIG. 12 is a diagram schematically illustrating an embodiment of a method of determining whether to perform filtering based on information about the presence or absence of a neighboring block adjacent to a current block (and/or whether the neighboring block is an available block).
  • FIG. 13 is a diagram schematically illustrating an embodiment of a method of determining a filtering performing region based on an intra prediction mode of a current block.
  • FIG. 14 is a diagram schematically showing an embodiment of a method of determining a filtering performing region based on a size and/or a depth of a current block.
  • FIG. 15 is a diagram schematically illustrating an embodiment of a method of determining a filtering performing region based on an encoding mode of a neighboring block adjacent to a current block.
  • FIGS. 16A and 16B illustrate an embodiment of a filter type determination method according to an intra prediction mode of a current block.
  • FIG. 17 is a diagram schematically illustrating a filter type determination method according to the embodiments of FIGS. 16A and 16B.
  • FIG. 18 is a diagram schematically showing an embodiment of a filter type applied when the prediction mode of the current block is a vertical mode and/or a horizontal mode.
  • FIG. 19 is a view schematically showing another embodiment of a filter type according to the present invention.
  • FIG. 20 is a diagram for describing an intra prediction mode and a filter type applied to Table 9.
  • first and second may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • Each component shown in the embodiments of the present invention is shown independently to represent a different characteristic function; this does not mean that each component consists of separate hardware or a single software unit.
  • That is, the components are listed separately for convenience of description, and at least two of the components may be combined into one component, or one component may be divided into a plurality of components, each performing part of the function.
  • Integrated and separate embodiments of the components are also included within the scope of the present invention without departing from the spirit of the invention.
  • Some components may not be essential for performing the essential functions of the present invention, but may be optional components merely for improving performance.
  • The present invention may be implemented with only the components essential for realizing its essence, excluding the components used merely for improving performance, and a structure including only these essential components, excluding the optional performance-improving components, is also included within the scope of the present invention.
  • FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.
  • the image encoding apparatus 100 may include a motion predictor 111, a motion compensator 112, an intra predictor 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
  • the image encoding apparatus 100 may encode an input image in an intra mode or an inter mode and output a bitstream.
  • Intra prediction means intra-picture prediction, and inter prediction means inter-picture prediction.
  • In the intra mode, the switch 115 may be switched to intra, and in the inter mode, the switch 115 may be switched to inter.
  • the image encoding apparatus 100 may generate a prediction block for an input block of an input image and then encode a residual between the input block and the prediction block.
  • the intra predictor 120 may generate a prediction block by performing spatial prediction using pixel values of blocks that are already encoded around the current block.
  • the motion predictor 111 may obtain a motion vector by searching for a region that best matches an input block in the reference image stored in the reference picture buffer 190 during the motion prediction process.
  • the motion compensator 112 may generate a prediction block by performing motion compensation using the motion vector.
  • the motion vector is a two-dimensional vector used for inter prediction, and may indicate an offset between the current encoding / decoding target image and the reference image.
  • the subtractor 125 may generate a residual block by the difference between the input block and the generated prediction block.
  • the transform unit 130 may output a transform coefficient by performing a transform on the residual block.
  • the quantization unit 140 may output the quantized coefficient by quantizing the input transform coefficient according to the quantization parameter.
  • the entropy encoder 150 may output a bit stream by performing entropy encoding based on the values calculated by the quantizer 140 or the encoding parameter values calculated in the encoding process.
  • the entropy encoder 150 may use an encoding method such as exponential golomb, context-adaptive variable length coding (CAVLC), or context-adaptive binary arithmetic coding (CABAC) for entropy encoding.
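  • As an illustration of the variable-length principle behind the coding methods named above, the following sketch implements order-0 exponential-Golomb coding (the function names are ours; CAVLC and CABAC are considerably more elaborate than this):

```python
def exp_golomb_encode(n):
    """Encode a non-negative integer as an order-0 exponential-Golomb code.

    The code word is (leading zeros) + '1' + (remainder bits), so smaller,
    more probable values receive shorter codes.
    """
    value = n + 1
    bits = value.bit_length()
    return "0" * (bits - 1) + format(value, "b")


def exp_golomb_decode(bitstring):
    """Decode a single order-0 exponential-Golomb code word."""
    zeros = 0
    while bitstring[zeros] == "0":
        zeros += 1
    return int(bitstring[zeros:2 * zeros + 1], 2) - 1
```

For example, 0 encodes to the single bit "1" while larger values grow roughly logarithmically, which is how frequent symbols end up with short bit strings.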
  • Since the image encoding apparatus according to the embodiment of FIG. 1 performs inter-picture prediction encoding, that is, inter prediction encoding, the currently encoded image needs to be decoded and stored to be used as a reference image. Accordingly, the quantized coefficients are inversely quantized by the inverse quantizer 160 and inversely transformed by the inverse transformer 170. The inverse quantized and inverse transformed coefficients are added to the prediction block by the adder 175, and a reconstruction block is generated.
  • the reconstruction block passes through the filter unit 180, and the filter unit 180 may apply at least one of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the reconstruction block or the reconstruction picture.
  • the filter unit 180 may be referred to as an adaptive in-loop filter.
  • the deblocking filter can remove block distortion generated at the boundary between blocks.
  • SAO can add an appropriate offset to the pixel value to compensate for coding errors.
  • the ALF may perform filtering based on a value obtained by comparing the reconstructed image with the original image.
  • the reconstructed block that has passed through the filter unit 180 may be stored in the reference picture buffer 190.
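  • The offset idea behind SAO can be sketched with a simplified band-offset pass (the 32-band layout and parameter names here are illustrative assumptions, not the normative SAO design):

```python
def sao_band_offset(pixels, start_band, offsets, bit_depth=8):
    """Add a signalled offset to pixels whose intensity falls into one of
    four consecutive bands (32 equal-width bands over the sample range)."""
    shift = bit_depth - 5                 # band width = 8 for 8-bit samples
    max_val = (1 << bit_depth) - 1
    out = []
    for p in pixels:
        band = p >> shift
        if start_band <= band < start_band + 4:
            p = min(max(p + offsets[band - start_band], 0), max_val)
        out.append(p)                     # pixels outside the bands pass through
    return out
```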
  • FIG. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
  • the image decoding apparatus 200 may include an entropy decoder 210, an inverse quantizer 220, an inverse transformer 230, an intra predictor 240, a motion compensator 250, an adder 255, a filter unit 260, and a reference picture buffer 270.
  • the image decoding apparatus 200 may receive a bitstream output from the encoder, perform decoding in an intra mode or an inter mode, and output a reconstructed image, that is, a restored image.
  • In the intra mode, the switch may be switched to intra, and in the inter mode, the switch may be switched to inter.
  • the image decoding apparatus 200 may obtain a residual block from the input bitstream, generate a prediction block, and then add the residual block and the prediction block to generate a reconstructed block, that is, a restored block.
  • the entropy decoder 210 may entropy decode the input bitstream according to a probability distribution to generate symbols, including symbols in the form of quantized coefficients.
  • the entropy decoding method is similar to the entropy coding method described above.
  • When the entropy decoding method is applied, a small number of bits is allocated to a symbol having a high probability of occurrence and a large number of bits to a symbol having a low probability of occurrence, so that the size of the bit string for each symbol can be reduced. Therefore, the compression performance of image decoding can be improved through the entropy decoding method.
  • the quantized coefficient is inversely quantized by the inverse quantizer 220 and inversely transformed by the inverse transformer 230, and as a result of the inverse quantization / inverse transformation of the quantized coefficient, a residual block may be generated.
  • the intra predictor 240 may generate a prediction block by performing spatial prediction using pixel values of already decoded blocks around the current block.
  • the motion compensator 250 may generate a prediction block by performing motion compensation using the motion vector and the reference image stored in the reference picture buffer 270.
  • the residual block and the prediction block may be added through the adder 255, and the added block may pass through the filter unit 260.
  • the filter unit 260 may apply at least one or more of the deblocking filter, SAO, and ALF to the reconstructed block or the reconstructed picture.
  • the filter unit 260 may output a reconstructed image, that is, a restored image.
  • the reconstructed picture may be stored in the reference picture buffer 270 and used for inter prediction.
  • a unit means a unit of image encoding and decoding.
  • a coding or decoding unit refers to a unit into which an image is divided when the image is divided and encoded or decoded, and may therefore be called a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • a unit may also be referred to as a block.
  • One unit may be further divided into smaller sub-units.
  • FIG. 3 is a conceptual diagram schematically illustrating an embodiment in which one unit is divided into a plurality of sub-units.
  • One unit may be hierarchically divided with depth information based on a tree structure.
  • Each divided sub-unit may have depth information. Since the depth information indicates the number and/or degree of divisions of the unit, it may include information about the size of the sub-unit.
  • the highest node may be called a root node and may have the smallest depth value. At this time, the highest node may have a depth of level 0 and may represent the first unit that is not divided.
  • a lower node having a depth of level 1 may indicate a unit in which the first unit is divided once, and a lower node having a depth of level 2 may indicate a unit in which the first unit is divided twice.
  • the unit a corresponding to node a in 320 of FIG. 3 may be a unit divided once from the initial unit and may have a depth of level 1.
  • a leaf node of level 3 may indicate a unit in which the first unit is divided three times.
  • the unit d corresponding to node d in 320 of FIG. 3 may be a unit divided three times from the initial unit and may have a depth of level 3.
  • As the lowest node, the leaf node at level 3 may have the deepest depth.
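  • The depth/size relation described above can be sketched as a recursive quadtree split, where a unit split d times has its side length halved d times (the split-decision callback below is an illustrative stand-in for the encoder's actual decision):

```python
def subunit_size(root_size, depth):
    """Side length of a sub-unit at the given quadtree depth."""
    return root_size >> depth


def quadtree_leaves(size, split_fn, depth=0):
    """Recursively divide a square unit into four sub-units while
    split_fn(size, depth) says to keep splitting; return (size, depth)
    pairs for the resulting leaf units."""
    if not split_fn(size, depth):
        return [(size, depth)]
    leaves = []
    for _ in range(4):
        leaves.extend(quadtree_leaves(size // 2, split_fn, depth + 1))
    return leaves
```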
  • an encoding / decoding target block may be referred to as a current block in some cases.
  • the encoding / decoding object block may be called a prediction object block.
  • the image signal may generally include three color signals representing three primary color components of light.
  • Three color signals representing three primary colors of light can be represented by R (Red), G (Green), and B (Blue).
  • the R, G, and B signals may be converted into one luma signal and two chroma signals to reduce a frequency band used for image processing.
  • one image signal may include one luma signal and two chroma signals.
  • the luma signal may correspond to Y as a component representing the brightness of the screen and the chroma signal may correspond to U, V, Cb, or Cr as a component representing the color of the screen.
  • Hereinafter, a block having a luma component is called a luma block, and a block having a chroma component is called a chroma block.
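  • The conversion from R, G, and B to one luma and two chroma signals can be illustrated with the full-range BT.601 equations (one common choice; the text above does not fix a particular conversion matrix):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB sample to Y (luma) and Cb/Cr (chroma),
    using full-range BT.601 weights."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)
```

A neutral gray maps to Cb = Cr = 128, which is why subsampling the chroma planes loses little for near-monochrome content.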
  • FIGS. 4A and 4B are diagrams for describing an embodiment of an intra prediction process.
  • 410 and 420 of FIG. 4A illustrate embodiments of prediction directions of intra prediction modes and mode values assigned to each prediction direction.
  • reference numeral 430 of FIG. 4B illustrates a position of a reference pixel used for intra prediction of an encoding / decoding target block.
  • a pixel may have the same meaning as a sample, and in embodiments described below, a pixel may be referred to as a sample in some cases.
  • the encoder and the decoder may generate a prediction block by performing intra prediction based on pixel information in the current picture. That is, when performing intra prediction, the encoder and the decoder may perform directional prediction and / or non-directional prediction based on at least one reconstructed reference pixel.
  • the prediction block may mean a block generated as a result of performing intra prediction.
  • the prediction block may correspond to at least one of a coding unit (CU), a prediction unit (PU), and a transform unit (TU).
  • the prediction block may be a square block having a size of 2x2, 4x4, 8x8, 16x16, 32x32, or 64x64, or a rectangular block having a size of 2x8, 4x8, 2x16, 4x16, 8x16, or the like.
  • intra prediction may be performed according to the intra prediction mode of the current block.
  • the number of intra prediction modes that the current block may have may be a predetermined fixed value or may be a value determined differently according to the size of the prediction block.
  • the number of intra prediction modes that the current block may have may be 3, 5, 9, 17, 34, 35, 36, or the like.
  • 410 of FIG. 4A illustrates an embodiment of a prediction direction of the intra prediction mode and a mode value assigned to each prediction direction.
  • the number assigned to each intra prediction mode may indicate a mode value.
  • In the vertical mode in which the mode value is 0, prediction may be performed in the vertical direction based on the pixel values of the reference pixels, and in the horizontal mode in which the mode value is 1, prediction may be performed in the horizontal direction based on the pixel values of the reference pixels.
  • In the remaining directional modes, the encoder and the decoder may perform intra prediction using the reference pixels according to the corresponding angles.
  • an intra prediction mode having a mode value of 2 may be called a DC mode
  • an intra prediction mode having a mode value of 34 may be called a planar mode.
  • the DC mode and the planar mode may correspond to a non-directional mode.
  • In the DC mode, a prediction block may be generated as the average of the pixel values of a plurality of reference pixels. An embodiment of a method of generating each prediction pixel of the prediction block in the planar mode will be described later with reference to FIG. 5.
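  • A minimal sketch of DC-mode prediction as just described, assuming an nS x nS block with nS reconstructed reference pixels on each of the top and left sides:

```python
def dc_prediction(top_refs, left_refs, ns):
    """DC mode: every prediction pixel is the rounded integer average of
    the top and left reference pixels, giving a flat prediction block."""
    total = sum(top_refs[:ns]) + sum(left_refs[:ns])
    dc = (total + ns) // (2 * ns)
    return [[dc] * ns for _ in range(ns)]
```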
  • the number of intra prediction modes and / or mode values assigned to each intra prediction mode are not limited to the above-described embodiments, and may be determined differently according to implementation and / or needs.
  • the prediction direction of the intra prediction mode and the mode value assigned to each prediction mode may be determined differently from 410 of FIG. 4A as in 420 of FIG. 4A.
  • Hereinafter, for convenience of description, it is assumed that intra prediction is performed based on the intra prediction modes shown at 410 of FIG. 4A unless otherwise stated.
  • Hereinafter, an intra prediction mode whose prediction direction lies to the right of the vertical mode is called a vertical-right mode, and an intra prediction mode whose prediction direction lies below the horizontal mode is called a horizontal-below mode.
  • In 410 of FIG. 4A, the intra prediction modes having mode values of 5, 6, 12, 13, 22, 23, 24, and 25 may correspond to the vertical-right mode 413, and the intra prediction modes having mode values of 8, 9, 16, 17, 30, 31, 32, and 33 may correspond to the horizontal-below mode 416.
  • Reconstructed reference pixels used for intra prediction of the current block include, for example, a lower-left reference pixel 431, a left reference pixel 433, an upper-left corner reference pixel 435, an upper reference pixel 437, and an upper-right reference pixel 439.
  • the left reference pixel 433 may mean a reconstructed reference pixel adjacent to the outer left side of the current block
  • the upper reference pixel 437 may mean a reconstructed reference pixel adjacent to the upper outer side of the current block
  • the corner reference pixel 435 may mean a reconstructed reference pixel located at the upper left corner outside the current block.
  • the lower-left reference pixel 431 may refer to a reference pixel located below the left pixel line among pixels on the same line as the left pixel line formed of the left reference pixels 433, and the upper-right reference pixel 439 may refer to a reference pixel located to the right of the upper pixel line among pixels on the same line as the upper pixel line formed of the upper reference pixels 437.
  • the name of the above-described reference pixel may be equally applied to other embodiments described below.
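  • The five reference pixel groups can be gathered as in the sketch below (the picture-array layout recon[y][x] and the function name are assumptions made for illustration):

```python
def gather_reference_pixels(recon, x0, y0, ns):
    """Collect the reconstructed reference pixels around an nS x nS block
    whose top-left sample sits at (x0, y0) in the picture array."""
    return {
        "corner":     recon[y0 - 1][x0 - 1],                           # 435
        "top":        [recon[y0 - 1][x0 + i] for i in range(ns)],      # 437
        "top_right":  [recon[y0 - 1][x0 + ns + i] for i in range(ns)], # 439
        "left":       [recon[y0 + i][x0 - 1] for i in range(ns)],      # 433
        "lower_left": [recon[y0 + ns + i][x0 - 1] for i in range(ns)], # 431
    }
```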
  • the reference pixel used for intra prediction of the current block may vary depending on the intra prediction mode of the current block.
  • When the intra prediction mode of the current block is the vertical mode (the intra prediction mode whose mode value is 0 in 410 of FIG. 4A), the upper reference pixels 437 may be used for intra prediction, and when the intra prediction mode of the current block is the horizontal mode (the intra prediction mode whose mode value is 1 in 410 of FIG. 4A), the left reference pixels 433 may be used for intra prediction.
  • When the intra prediction mode of the current block is a vertical-right mode, the upper-right reference pixels 439 may also be used for intra prediction, and when the intra prediction mode of the current block is a horizontal-below mode, the lower-left reference pixels 431 may also be used for intra prediction.
  • When the position of the reference pixel determined based on the prediction direction of the intra prediction mode and the prediction target pixel is an integer position, the encoder and the decoder may determine the reference pixel value at that position as the prediction pixel value for the prediction target pixel. If the position of the reference pixel determined based on the prediction direction of the intra prediction mode and the prediction target pixel is not an integer position, the encoder and the decoder may generate an interpolated reference pixel based on the reference pixels at integer positions and determine the pixel value of the interpolated reference pixel as the prediction pixel value.
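  • The integer/fractional distinction above can be sketched as a linear interpolation between the two neighbouring integer-position reference pixels (the 1/32-pel precision is an assumed illustrative choice):

```python
def interpolated_reference(refs, pos, frac_bits=5):
    """Return the (possibly interpolated) reference pixel at `pos`,
    expressed in 1/32-pel units when frac_bits = 5."""
    idx = pos >> frac_bits
    frac = pos & ((1 << frac_bits) - 1)
    if frac == 0:
        return refs[idx]                  # integer position: copy the pixel
    a, b = refs[idx], refs[idx + 1]       # neighbouring integer positions
    scale = 1 << frac_bits
    return (a * (scale - frac) + b * frac + scale // 2) >> frac_bits
```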
  • the encoder and the decoder may perform intra prediction on the block to be encoded / decoded based on the reconstructed or generated reference pixel.
  • the reference pixel used for intra prediction may vary according to the intra prediction mode of the current block, and discontinuity may occur between the generated prediction block and the neighboring block. For example, in the case of directional intra prediction, a pixel farther from a reference pixel among prediction pixels in the prediction block may have a larger prediction error. In this case, discontinuity may occur due to the prediction error, and there may be a limit in improving coding efficiency.
  • an encoding / decoding method for performing filtering on a prediction block generated by intra prediction may be provided.
  • a filter may be adaptively applied to a region having a large prediction error in a prediction block generated based on a reference pixel.
  • the prediction error is reduced and the discontinuity between blocks is minimized, so that the encoding / decoding efficiency can be improved.
  • FIG. 5 is a diagram schematically showing embodiments of an intra prediction method in the planar mode.
  • FIG. 5 shows one embodiment of the intra prediction method in the planar mode, and 530 of FIG. 5 shows another embodiment of the intra prediction method in the planar mode.
  • 515 and 535 of FIG. 5 indicate blocks to be encoded/decoded (hereinafter having the same meaning as the current block), and the sizes of blocks 515 and 535 are each nS x nS.
  • the position of a pixel in the current block may be represented by predetermined coordinates.
  • the top left coordinate in the current block is referred to as (0,0).
  • the y value can be increased toward the bottom on the coordinate axis, and the x value can be increased toward the right.
  • the coordinates of the pixel may be represented by the same coordinate axis as that used in FIG. 5.
  • the encoder and the decoder may first derive the pixel value of the prediction pixel for the pixel (nS-1, nS-1) located at the bottom right of the current block, that is, the bottom-right prediction pixel 520.
  • the encoder and the decoder may derive the pixel values of the prediction pixels for the pixels on the rightmost vertical line of the current block, that is, the right vertical line prediction pixels, based on the reference pixel 523 located at the far right (nS-1, -1) among the upper reference pixels and the bottom-right prediction pixel 520.
  • the encoder and the decoder may derive the pixel values of the prediction pixels for the pixels on the lowest horizontal line of the current block, that is, the lower horizontal line prediction pixels, based on the reference pixel 526 located at the bottom (-1, nS-1) among the left reference pixels and the bottom-right prediction pixel 520.
  • the prediction values for the remaining pixels of the current block, other than the pixels on the right vertical line and the pixels on the lower horizontal line, may be obtained by applying weights based on the upper reference pixels, the left reference pixels, the right vertical line prediction pixels, and the lower horizontal line prediction pixels.
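  • The steps above can be sketched as follows; the text does not specify here how the bottom-right prediction pixel 520 itself is derived, so the sketch assumes it is the rounded average of the reference pixels 523 and 526:

```python
def planar_prediction(top_refs, left_refs, ns):
    """Planar-mode sketch: derive the bottom-right pixel, interpolate the
    right column and bottom row toward it, then fill the interior with a
    weighted (bilinear) mix of the reference pixels and those two lines."""
    pred = [[0] * ns for _ in range(ns)]
    br = (top_refs[ns - 1] + left_refs[ns - 1] + 1) // 2   # assumed derivation
    for i in range(ns):
        w = (i + 1) / ns
        pred[i][ns - 1] = round((1 - w) * top_refs[ns - 1] + w * br)  # right col
        pred[ns - 1][i] = round((1 - w) * left_refs[ns - 1] + w * br) # bottom row
    for y in range(ns - 1):
        for x in range(ns - 1):
            horiz = ((ns - 1 - x) * left_refs[y] + (x + 1) * pred[y][ns - 1]) / ns
            vert = ((ns - 1 - y) * top_refs[x] + (y + 1) * pred[ns - 1][x]) / ns
            pred[y][x] = round((horiz + vert) / 2)
    return pred
```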
  • the encoder and the decoder may derive the prediction value for the prediction target pixel 540 in the current block 535 in the same manner as 530 of FIG. 5.
  • the coordinate of the prediction target pixel 540 is (x, y).
  • the encoder and the decoder may derive the prediction value of the prediction target pixel 540 by performing an average and/or a weighted average based on the reference pixel (-1, nS) 541 located at the top among the lower-left reference pixels, the left reference pixel located on the same horizontal line as the prediction target pixel 540, the reference pixel (nS, -1) located at the far left among the upper-right reference pixels, and the upper reference pixel located on the same vertical line as the prediction target pixel 540.
  • FIG. 6 is a flowchart schematically showing an embodiment of an image encoding method according to the present invention.
  • the encoder may generate a prediction block by performing intra prediction on an encoding target block (S610). Since a specific embodiment of the predictive block generation method has been described above with reference to FIGS. 4A and 4B, a description thereof will be omitted.
  • the encoder may perform filtering on the prediction block based on the encoding target block and / or the encoding parameter of the neighboring block adjacent to the encoding target block (S620).
  • the encoding parameter may include not only information encoded by the encoder and transmitted to the decoder, such as a syntax element, but also information that may be inferred in the encoding or decoding process, and may mean information required for encoding or decoding an image.
  • Coding parameters may include, for example, intra/inter prediction modes, motion vectors, reference picture indices, coded block patterns (CBPs), residual signals, quantization parameters, block sizes, block partition information, and the like.
  • For example, the encoder may perform filtering on the prediction block based on the intra prediction mode of the encoding target block, whether the encoding target block is a luma block or a chroma block, the size (and/or depth) of the encoding target block, the encoding parameters of neighboring blocks adjacent to the encoding target block (for example, the encoding modes of the neighboring blocks), and/or the presence or absence of the neighboring blocks (and/or whether the neighboring blocks are available blocks).
  • the encoder may not perform filtering on the prediction block. For example, the encoder may determine whether to perform filtering based on the encoding parameters of the encoding target block and/or neighboring blocks adjacent to the encoding target block, and if it is determined that filtering is not to be performed, the encoder may not perform filtering on the prediction block.
  • the above-described filtering process may be a separate process independent of the prediction block generation process, or may be performed as one process in combination with the prediction block generation process. That is, the encoder may generate the prediction block by applying a process corresponding to the filtering process based on the encoding parameters of the encoding target block and / or the neighboring block in the prediction block generation process. A specific embodiment of the filtering method will be described later.
  • the encoder may generate a difference block based on the original block and the prediction block corresponding to the position of the encoding target block (S630).
  • the prediction block may be a prediction block on which filtering is performed or a prediction block on which filtering is not performed.
  • FIG. 7 is a diagram schematically showing an embodiment of the above-described difference block generation process.
  • 710 of FIG. 7 illustrates an embodiment of a process of generating a difference block based on an original block and a prediction block on which filtering is performed.
  • In 710 of FIG. 7, block 713 represents an original block, block 716 represents a prediction block on which filtering has been performed, and block 719 represents a difference block.
  • the encoder and the decoder may generate a difference block by subtracting the prediction block from which the filtering is performed from the original block.
  • 720 of FIG. 7 illustrates an embodiment of a process of generating a difference block based on an original block and a prediction block on which filtering is not performed.
  • In 720 of FIG. 7, block 723 represents an original block, block 726 represents a prediction block on which filtering has not been performed, and block 729 represents a difference block.
  • the encoder and the decoder may generate a difference block by subtracting a prediction block that is not filtered from the original block.
  • the generated difference block may be transmitted to the decoder through a process of transform, quantization, entropy encoding, and the like.
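  • The difference block itself is an element-wise subtraction, as in this minimal sketch:

```python
def difference_block(original, prediction):
    """Difference (residual) block: original minus prediction, element-wise."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, prediction)]
```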
  • FIG. 8 is a flowchart schematically showing an embodiment of an image decoding method according to the present invention.
  • the decoder may generate a prediction block by performing intra prediction on a decoding target block (S810). Since a specific embodiment of the predictive block generation method has been described above with reference to FIGS. 4A and 4B, a description thereof will be omitted.
  • the decoder may perform filtering on the prediction block based on the encoding parameter of the decoding target block and / or the neighboring block adjacent to the decoding target block (S820).
  • the encoding parameter may include not only information encoded by the encoder and transmitted to the decoder, such as a syntax element, but also information that may be inferred in the encoding or decoding process, and may mean information required for encoding or decoding an image.
  • Coding parameters may include, for example, intra/inter prediction modes, motion vectors, reference picture indices, coded block patterns (CBPs), residual signals, quantization parameters, block sizes, block partition information, and the like.
  • For example, the decoder may perform filtering on the prediction block based on the intra prediction mode of the decoding target block, whether the decoding target block is a luma block or a chroma block, the size (and/or depth) of the decoding target block, the encoding parameters of neighboring blocks adjacent to the decoding target block (for example, the encoding modes of the neighboring blocks), and/or the presence or absence of the neighboring blocks (and/or whether the neighboring blocks are available blocks).
  • the decoder may not perform filtering on the prediction block.
  • For example, the decoder may determine whether to perform filtering based on the encoding parameters of the decoding target block and/or neighboring blocks adjacent to the decoding target block, and if it is determined that filtering is not to be performed, the decoder may not perform filtering on the prediction block.
  • the above-described filtering process may be a separate process independent of the prediction block generation process, or may be performed as one process in combination with the prediction block generation process. That is, the decoder may generate the prediction block by simultaneously applying a process corresponding to the filtering process based on encoding parameters of the decoding object block and / or the neighboring block in the prediction block generation process. In this case, the decoder may not perform a separate filtering process on the prediction block.
  • the filtering method in the decoder may be the same as in the encoder. A specific embodiment of the filtering method will be described later.
  • the decoder may generate a reconstructed block based on the reconstructed differential block and the predicted block corresponding to the position of the decoding target block (S830).
  • the prediction block may be a prediction block on which filtering is performed or a prediction block on which filtering is not performed.
  • FIG. 9 is a diagram schematically showing embodiments of the above-described reconstruction block generation process.
  • 910 of FIG. 9 illustrates an embodiment of a process of generating a reconstruction block based on the reconstructed difference block and a prediction block on which filtering has been performed.
  • In 910 of FIG. 9, block 913 represents a reconstructed difference block, block 916 represents a prediction block on which filtering has been performed, and block 919 represents a reconstruction block.
  • the encoder and the decoder may generate a reconstructed block by adding the reconstructed differential block and the prediction block on which the filtering is performed.
  • 920 of FIG. 9 illustrates an embodiment of a process of generating a reconstruction block based on a reconstructed difference block and a prediction block on which filtering has not been performed.
  • In 920 of FIG. 9, block 923 represents a reconstructed difference block, block 926 represents a prediction block on which filtering has not been performed, and block 929 represents a reconstruction block.
  • the encoder and the decoder may generate a reconstructed block by adding a reconstructed differential block and a prediction block on which filtering is not performed.
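  • The reconstruction step mirrors the subtraction at the encoder: the reconstructed difference block and the prediction block are added element-wise and clipped to the valid sample range (8-bit samples are an assumption of this sketch):

```python
def reconstruct_block(residual, prediction, bit_depth=8):
    """Reconstruction block: residual plus prediction, clipped to
    [0, 2^bit_depth - 1]."""
    max_val = (1 << bit_depth) - 1
    return [[min(max(r + p, 0), max_val) for r, p in zip(rrow, prow)]
            for rrow, prow in zip(residual, prediction)]
```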
  • FIG. 10 is a flowchart schematically illustrating an embodiment of a filtering performing method according to the present invention.
  • the encoder and the decoder may determine whether to perform filtering on the prediction block (and / or the prediction pixel) (S1010).
  • the encoder and the decoder may perform intra prediction on a block to be encoded / decoded based on a previously reconstructed reference pixel.
  • the reference pixel used for intra prediction and / or the prediction pixel value in the prediction block generated by the intra prediction may vary according to the intra prediction mode of the current block. Therefore, in this case, the encoder and the decoder may reduce the prediction error by performing filtering on the prediction pixel having low association with the reference pixel used for intra prediction.
  • the encoder and the decoder may determine whether to perform filtering on the prediction block (and/or the prediction pixel) based on the intra prediction mode of the encoding/decoding target block, whether the encoding/decoding target block is a luma block or a chroma block, the size (and/or depth) of the encoding/decoding target block, and/or the encoding parameters of neighboring blocks adjacent to the encoding/decoding target block (for example, the sizes of the neighboring blocks and/or the encoding modes of the neighboring blocks).
  • whether to perform filtering may be determined in the encoding/decoding process, or may be predetermined according to each condition.
  • Hereinafter, specific embodiments of the method of determining whether to perform filtering are described.
  • the encoder and the decoder may determine whether to perform filtering on the prediction block based on the intra prediction mode of the encoding / decoding target block.
  • the reference pixel and the prediction direction used for intra prediction may be determined differently according to the intra prediction mode of the encoding / decoding target block. Therefore, it may be efficient to determine whether to perform filtering based on the intra prediction mode of the encoding / decoding target block.
  • Table 1 below shows an embodiment of a method of determining whether to perform filtering according to an intra prediction mode.
  • In Table 1, it is assumed that the prediction directions of the intra prediction modes and the mode values assigned to each prediction mode are determined as in 410 of FIG. 4A described above.
  • In Table 1, among the values allocated to each intra prediction mode, 0 may indicate that filtering is not performed, and 1 may indicate that filtering is performed.
  • When the prediction mode of the current block is the DC mode (for example, the prediction mode having a mode value of 2), the prediction block is generated as the average of the pixel values of a plurality of reference pixels, so the correlation between the prediction pixels and the reference pixels becomes small. Therefore, in this case, the encoder and the decoder may perform filtering on the prediction pixels in the prediction block.
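  • One possible shape of such DC-mode filtering is to smooth the top row and left column of the prediction block with the adjacent reference pixels, as sketched below (the 3-tap/2-tap weights are an illustrative choice, not the specific filters of the embodiments described later):

```python
def filter_dc_prediction(pred, top_refs, left_refs):
    """Filter the boundary prediction pixels of a DC prediction block with
    the neighbouring reconstructed reference pixels."""
    ns = len(pred)
    out = [row[:] for row in pred]
    out[0][0] = (left_refs[0] + 2 * pred[0][0] + top_refs[0] + 2) >> 2
    for x in range(1, ns):                 # top row
        out[0][x] = (top_refs[x] + 3 * pred[0][x] + 2) >> 2
    for y in range(1, ns):                 # left column
        out[y][0] = (left_refs[y] + 3 * pred[y][0] + 2) >> 2
    return out
```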
  • When the prediction mode of the current block is the planar mode (for example, the prediction mode having a mode value of 34), the encoder and the decoder may derive the right vertical line prediction pixels and the lower horizontal line prediction pixels as described above with reference to FIG. 5, and may then apply weights based on the derived prediction pixels and the reference pixels to derive a prediction value for each remaining pixel in the current block. Therefore, in this case, since the correlation between the prediction pixels and the reference pixels is small, the encoder and the decoder may perform filtering on the prediction pixels in the prediction block.
  • When the intra prediction mode of the current block is a vertical right mode (for example, a prediction mode having mode values of 5, 6, 12, 13, 22, 23, 24, and 25), the encoder and the decoder perform intra prediction on the current block using the top reference pixels and/or the top-right reference pixels, so the correlation between the prediction pixels located in the left region of the prediction block and the left reference pixels may become small. Therefore, in this case, filtering may be performed on the pixels located in the left region of the prediction block.
  • Similarly, when the intra prediction mode of the current block is a horizontal below mode (for example, a prediction mode having mode values of 8, 9, 16, 17, 30, 31, 32, and 33), the encoder and the decoder perform intra prediction on the current block using the left reference pixels and/or the bottom-left reference pixels, so the correlation between the prediction pixels located in the top region of the prediction block and the top reference pixels may become small. Therefore, in this case, filtering may be performed on the pixels located in the top region of the prediction block.
  • The encoder and the decoder may also perform filtering for the vertical mode (for example, a prediction mode having a mode value of 0) and the horizontal mode (for example, a prediction mode having a mode value of 1).
  • When the intra prediction mode of the current block is the vertical mode, the encoder and the decoder perform intra prediction on the current block using the top reference pixels, so the correlation between the prediction pixels located in the left region of the prediction block and the left reference pixels may become small. Therefore, in this case, filtering may be performed on the pixels located in the left region of the prediction block.
  • When the intra prediction mode of the current block is the horizontal mode, the encoder and the decoder perform intra prediction on the current block using the left reference pixels, so the correlation between the prediction pixels located in the top region of the prediction block and the top reference pixels may become small. Therefore, in this case, filtering may be performed on the pixels located in the top region of the prediction block.
  • For the remaining prediction modes, the encoder and the decoder may use at least one of the top reference pixels and the top-right reference pixels, and at least one of the left reference pixels and the bottom-left reference pixels, for intra prediction. Therefore, in this case, since the prediction pixels located in the left region and the top region of the prediction block can maintain correlation with the reference pixels, the encoder and the decoder may not perform filtering on the prediction block.
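The mode-dependent decision described in the bullets above can be sketched as a simple lookup. This is a hedged illustration in Python, not the normative procedure: the mode numbering (0 = vertical, 1 = horizontal, 2 = DC, 34 = planar, plus the vertical-right and horizontal-below groups) follows the 410-of-FIG.-4A assumption quoted in the text, and the returned region labels are hypothetical names.

```python
# Hedged sketch of the mode-dependent filtering decision (cf. Table 1).
VERTICAL_RIGHT = {5, 6, 12, 13, 22, 23, 24, 25}
HORIZONTAL_BELOW = {8, 9, 16, 17, 30, 31, 32, 33}

def filtering_decision(mode):
    """Return (perform_filtering, region) for one intra prediction mode."""
    if mode in (2, 34):
        # DC / planar: prediction pixels correlate weakly with both edges.
        return True, "left_and_top"
    if mode == 0 or mode in VERTICAL_RIGHT:
        # Prediction comes from the top (and top-right) reference pixels,
        # so the left edge of the prediction block is filtered.
        return True, "left"
    if mode == 1 or mode in HORIZONTAL_BELOW:
        # Prediction comes from the left (and bottom-left) reference pixels,
        # so the top edge of the prediction block is filtered.
        return True, "top"
    # Remaining modes draw on both edges for prediction: no filtering.
    return False, None
```

A real codec would consult a normative table rather than branch in code, but the grouping above mirrors the structure of Table 1.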
  • the encoder and the decoder may determine whether to perform filtering on the prediction block based on the size and / or depth of the current block (and / or the block to be predicted).
  • the current block may correspond to at least one of a CU, a PU, or a TU.
  • Table 2 below shows one embodiment of a method of determining whether to perform filtering according to a block size
  • Table 3 below shows an embodiment of a method of determining whether to perform filtering according to a depth value of a current block.
  • the current block may correspond to a TU
  • the size of the TU may be, for example, 2x2, 4x4, 8x8, 16x16, 32x32, 64x64, and the like.
  • the present invention is not limited thereto, and the current block may correspond to a CU and / or a PU, not a TU.
  • In Tables 2 and 3, among the values allocated to each block size and/or depth, 0 indicates that filtering is not performed and 1 indicates that filtering is performed.
  • the encoder and the decoder may determine whether to perform filtering on the current block and / or the prediction block in consideration of the intra prediction mode of the current block and the size of the current block. That is, the encoder and the decoder may determine whether to perform filtering for each of the intra prediction modes based on the size of the current block. In this case, whether to perform filtering may be differently determined according to the size of the current block for each intra prediction mode. Table 4 below shows an embodiment of a method of determining whether to perform filtering according to the intra prediction mode of the current block and the size of the current block.
  • 0 may indicate that filtering is not performed among the values allocated to each intra prediction mode, and 1 may indicate that filtering is performed.
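Table 4's joint dependence on intra prediction mode and block size amounts to a two-dimensional lookup. The sketch below is illustrative only: the 0/1 entries are invented placeholders, not values from the actual table.

```python
# Illustrative Table-4-style lookup: filter on/off per intra mode and TU size.
SIZES = (2, 4, 8, 16, 32, 64)

FILTER_TABLE = {
    # mode: flags for 2x2, 4x4, 8x8, 16x16, 32x32, 64x64 (placeholder values)
    2:  (0, 1, 1, 1, 1, 0),   # DC
    34: (0, 1, 1, 1, 1, 0),   # planar
    0:  (0, 0, 1, 1, 1, 0),   # vertical
    1:  (0, 0, 1, 1, 1, 0),   # horizontal
}

def filter_enabled(mode, size):
    """True when filtering is performed for this (mode, size) pair."""
    flags = FILTER_TABLE.get(mode)
    return bool(flags and flags[SIZES.index(size)])
```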
  • The encoder and the decoder may determine whether to perform filtering on the prediction block based on information indicating whether the current block corresponds to a luma block or a chroma block, that is, the color component information of the current block.
  • the encoder and the decoder may perform filtering on the prediction block only when the current block corresponds to the luma block, and may not perform filtering when the current block corresponds to the chroma block.
  • In addition, the encoder and the decoder may determine whether to perform filtering based on the encoding parameters of neighboring blocks adjacent to the current block, whether constrained intra prediction (CIP) is applied to the current block, and/or information on the presence (and/or availability) of neighboring blocks. Specific embodiments of each of these determination methods will be described later.
  • the encoder and the decoder may determine a region in which filtering is performed in the current block and / or the prediction block (S1020).
  • the region in which the filtering is performed may correspond to one or more samples in the current block and / or the prediction block.
  • the encoder and the decoder may reduce the prediction error by performing filtering on prediction pixels having low correlation with the reference pixels used for intra prediction. That is, the encoder and the decoder may determine a region where the prediction error is relatively large in the current block and / or the prediction block as the filtering performing region. In this case, the encoder and the decoder may determine the filtering performing region based on at least one of the intra prediction mode of the current block, the size (and / or depth) of the current block, and the encoding mode of the neighboring block adjacent to the current block.
  • the encoding mode of the neighboring block may indicate whether the neighboring block is encoded / decoded in the inter mode or in the intra mode. Specific embodiments of the method of determining the filtering performing region will be described later.
  • the encoder and the decoder may determine a filter type applied to each prediction pixel in the filtering performing region (S1030).
  • the filter type may include information about a filter shape, a filter tap, a filter coefficient, and the like.
  • The plurality of intra prediction modes may have different prediction directions, and the way the reconstructed reference pixels are used may vary according to the position of the pixel to be filtered. Therefore, the encoder and the decoder can improve the filtering efficiency by adaptively determining the filter type. For example, the encoder and the decoder may determine the filter type applied to each filtering target pixel based on the intra prediction mode of the current block, the size (and/or depth) of the current block, and/or the position of the filtering target pixel.
  • the filter shape may include a horizontal shape, a vertical shape, a diagonal shape, and the like, and the filter tab may include a 2-tap, 3-tap, 4-tap, and the like.
  • the encoder and the decoder may determine the filter coefficients based on the size of the prediction block and / or the position of the filtering target pixel. That is, the encoder and the decoder may vary filter coefficients applied to the filtering target pixel according to the size of the prediction block and / or the position of the filtering target pixel. Therefore, the filter strength of the filtering target pixel can be adaptively determined. For example, when a 2-tap filter is used, the filter coefficients may be [1: 3], [1: 7], [3: 5], and the like. As another example, when a 3-tap filter is used, the filter coefficients may be [1: 2: 1], [1: 4: 1], [1: 6: 1], or the like.
  • Meanwhile, the filter determined by the filter type need not be a filter defined by a filter shape, filter tap, and filter coefficient; for example, the encoder and the decoder may instead perform the filtering process by adding an offset value derived by a predetermined process to the pixel value of the reference pixel.
  • In this case, the filtering process may be performed in combination with the prediction block generation process. That is, the filtered prediction pixel value of each pixel in the current block may be derived by a single prediction process that includes both the prediction pixel generation and the filtering of the generated prediction pixels.
  • the encoder and the decoder may perform filtering on each prediction pixel in the prediction block based on the determined filter application region and the filter type (S1040). If it is determined that filtering is not performed on the prediction block, the encoder and the decoder may not perform filtering on the prediction block (and / or each prediction pixel in the prediction block) (S1050).
  • FIG. 11 is a diagram schematically illustrating an embodiment of a method of determining whether to perform filtering based on encoding parameters of a neighboring block adjacent to a current block.
  • an encoding parameter of a neighboring block may include an intra prediction mode, an inter prediction mode, a coding mode, and the like.
  • the encoding mode of the neighboring block may indicate whether the neighboring block is encoded / decoded in the inter mode or in the intra mode.
  • 1110 of FIG. 11 illustrates an embodiment of a method of determining whether to perform filtering based on an intra prediction mode of a neighboring block adjacent to a current block.
  • 1113 of FIG. 11 represents the current block C
  • 1116 of FIG. 11 represents the left neighboring block A adjacent to the left side of the current block.
  • In 1110 of FIG. 11, it is assumed that the intra prediction mode of the current block corresponds to a vertical right mode. In this case, filtering may be performed on the pixels located in the left region 1119 of the prediction block.
  • However, when the intra prediction mode of the neighboring block 1116 adjacent to the filtering target region 1119 differs from that of the current block 1113, it may be more efficient not to perform filtering on the filtering target region 1119. Therefore, when the prediction direction of the neighboring block 1116 adjacent to the filtering target region 1119 and the prediction direction of the current block 1113 are different from each other, the encoder and the decoder may not perform filtering on the filtering target region 1119.
  • Conversely, when the prediction direction of the neighboring block 1116 adjacent to the filtering target region 1119 and the prediction direction of the current block 1113 are the same or similar (for example, when the difference between the prediction angles is less than or equal to a predetermined threshold), the prediction error may be reduced by filtering the filtering target region 1119.
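The same-or-similar-direction test can be expressed as a threshold on the difference of the two prediction angles. The angle representation and the threshold value below are assumptions for illustration, not values from the specification.

```python
def filter_adjacent_region(angle_cur_deg, angle_neigh_deg, threshold_deg=15.0):
    """Filter the region adjacent to the neighbouring block only when the two
    prediction directions are the same or similar, i.e. the difference of the
    prediction angles is at most a threshold (15.0 degrees is an assumption)."""
    return abs(angle_cur_deg - angle_neigh_deg) <= threshold_deg
```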
  • 1120 of FIG. 11 illustrates an embodiment of a method of determining whether to perform filtering based on an encoding mode of a neighboring block adjacent to the current block when CIP (Constrained Intra Prediction) is applied to the current block.
  • 1123 of FIG. 11 represents the current block C
  • 1126 of FIG. 11 represents the left neighboring block A adjacent to the left side of the current block.
  • In 1120 of FIG. 11, it is assumed that the intra prediction mode of the current block corresponds to the vertical right mode. In this case, filtering may be performed on the pixels located in the left region 1129 of the prediction block.
  • the encoder and the decoder may not use a pixel in a neighboring block encoded in the inter mode as a reference pixel when performing intra prediction on the current block 1123.
  • When the encoding mode of the left neighboring block 1126 is the inter mode, the reference pixels in the left neighboring block 1126, that is, the left reference pixels, may not be used for intra prediction of the current block 1123.
  • In this case, the encoder and the decoder may perform intra prediction after filling the positions of the unavailable left reference pixels with the pixel values of reference pixels in intra-coded blocks. That is, the encoder and the decoder may enhance resilience against errors by not using pixels of inter-coded blocks for intra prediction.
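Under CIP, the substitution of unusable (inter-coded) reference pixels can be sketched as a padding pass over the reference array. The scan order and fill rule below are assumptions; the specification's exact padding procedure may differ.

```python
def pad_unavailable_refs(ref_pixels, from_intra):
    """CIP-style sketch: replace reference pixels that come from inter-coded
    (hence unusable) neighbours with the value of the nearest previously seen
    intra-coded reference pixel; a leading unusable run is filled from the
    first usable pixel."""
    out = list(ref_pixels)
    last = None
    for i, usable in enumerate(from_intra):
        if usable:
            last = out[i]
        elif last is not None:
            out[i] = last
    # Fill a leading unusable run from the first usable pixel, if any.
    for i, usable in enumerate(from_intra):
        if usable:
            for j in range(i):
                out[j] = out[i]
            break
    return out
```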
  • Therefore, in this case, the encoder and the decoder may not perform filtering on the filtering target region 1129.
  • FIG. 12 is a diagram schematically illustrating an embodiment of a method of determining whether to perform filtering based on information on the presence or absence of a neighboring block adjacent to the current block (and/or whether the neighboring block is an available block).
  • 1210 of FIG. 12 represents the current block C, and 1220 of FIG. 12 represents the neighboring block A adjacent to the left side of the current block.
  • In FIG. 12, it is assumed that the intra prediction mode of the current block 1210 corresponds to the vertical right mode. In this case, filtering may be performed on the pixels located in the left region 1230 of the prediction block.
  • When the neighboring block adjacent to the filtering target region does not exist or is not available, the encoder and the decoder may not perform filtering on the filtering target region. Such a case may arise, for example, when the current block lies at the boundary of the current picture, or when the neighboring block adjacent to the current block lies outside the boundary of the slice to which the current block belongs.
  • the encoder and the decoder may perform intra prediction after generating a reference pixel value of a position adjacent to the filtering target region using the available reference pixels.
  • However, the generated reference pixels may have similar values to each other, and the values of the generated reference pixels may not be similar to the actual pixel values in the current block, so performing filtering on the current block based on the generated reference pixels may reduce coding efficiency. Therefore, the encoder and the decoder may not perform filtering on the filtering target region.
  • In the embodiment of FIG. 12, reconstructed blocks B and D exist around the current block C 1210. However, the left neighboring block A 1220 adjacent to the filtering target region 1230 in the current block 1210 lies outside the boundary 1240 of the slice to which the current block 1210 belongs, and is therefore not available. In this case, the encoder and the decoder may not perform filtering on the filtering target region 1230.
  • FIG. 13 is a diagram schematically illustrating an embodiment of a method of determining a filtering performing region based on an intra prediction mode of a current block.
  • the encoder and the decoder may perform intra prediction on a block to be encoded / decoded based on a previously reconstructed reference pixel.
  • Since the reference pixels and/or the prediction direction used for intra prediction vary according to the intra prediction mode of the current block, it may be efficient to determine the region having a relatively large prediction error as the filtering performing region in consideration of the intra prediction mode of the current block. More specifically, prediction pixels located in regions adjacent to reference pixels that are not used for intra prediction may have low correlation with those reference pixels. Accordingly, the encoder and the decoder may reduce the prediction error and improve prediction efficiency by performing filtering on the prediction pixels in the regions adjacent to the reference pixels that are not used for intra prediction.
  • 1310 of FIG. 13 illustrates an embodiment of the filtering performing region when the prediction mode of the current block is the DC mode and/or the planar mode.
  • 1313 may indicate a prediction block
  • 1316 may indicate a filtering performing region.
  • When the prediction mode of the current block is the DC mode, the encoder and the decoder may determine, as the filtering performing region 1316, one or more horizontal pixel lines located at the top of the prediction block 1313 (hereinafter, the top horizontal prediction pixel line) and one or more vertical pixel lines located at the leftmost side of the prediction block 1313 (hereinafter, the left vertical prediction pixel line). The number of horizontal pixel lines included in the top horizontal prediction pixel line and the number of vertical pixel lines included in the left vertical prediction pixel line may be a predetermined fixed number; for example, the top horizontal prediction pixel line and the left vertical prediction pixel line may each include one pixel line.
  • Alternatively, the number of pixel lines included in the top horizontal prediction pixel line and the number of pixel lines included in the left vertical prediction pixel line may be determined based on the size of the current block and/or the prediction block 1313. That is, these numbers may have variable values according to the size of the current block and/or the prediction block 1313, and may be, for example, one, two, or four each.
  • When the prediction mode of the current block is the planar mode, the encoder and the decoder may likewise determine the top horizontal prediction pixel line and the left vertical prediction pixel line as the filtering performing region 1316, as in the DC mode.
  • 1320 of FIG. 13 illustrates an embodiment of the filtering performing region when the intra prediction mode of the current block is a vertical right mode (eg, a prediction mode having mode values of 5, 6, 12, 13, 22, 23, 24, and 25).
  • 1323 may indicate a prediction block and 1326 may indicate a filtering performing region.
  • When the prediction mode of the current block is the vertical right mode, the encoder and the decoder perform intra prediction on the current block based on the top reference pixels and/or the top-right reference pixels, and thus the correlation between the prediction pixels located in the left region of the prediction block 1323 and the left reference pixels may become small. Accordingly, in this case, the encoder and the decoder may determine one or more leftmost vertical pixel lines in the prediction block 1323, that is, the left vertical prediction pixel line, as the filtering performing region 1326 and perform filtering to improve prediction efficiency.
  • the number of vertical pixel lines included in the left vertical prediction pixel line may be a predetermined fixed number, for example, the left vertical prediction pixel line may include one vertical pixel line.
  • Alternatively, the number of vertical pixel lines included in the left vertical prediction pixel line may be determined based on the size of the current block and/or the prediction block 1323. That is, the number of vertical pixel lines may have a variable value according to the size of the current block and/or the prediction block 1323, and may be, for example, one, two, or four.
  • Also when the intra prediction mode of the current block is the vertical mode, the encoder and the decoder perform intra prediction on the current block using the top reference pixels, so the correlation between the prediction pixels located in the left region of the prediction block and the left reference pixels may become small. Therefore, in this case as well, the encoder and the decoder may determine the left vertical prediction pixel line as the filtering performing region and perform filtering.
  • 1330 of FIG. 13 illustrates an embodiment of the filtering performing region when the intra prediction mode of the current block is a horizontal below mode (eg, a prediction mode having mode values of 8, 9, 16, 17, 30, 31, 32, and 33).
  • 1333 may indicate a prediction block and 1336 may indicate a filtering performing region.
  • When the prediction mode of the current block is the horizontal below mode, the encoder and the decoder perform intra prediction on the current block using the left reference pixels and/or the bottom-left reference pixels, and thus the correlation between the prediction pixels located in the top region of the prediction block 1333 and the top reference pixels may become small. Therefore, in this case, the encoder and the decoder may determine one or more horizontal pixel lines located at the top of the prediction block 1333, that is, the top horizontal prediction pixel line, as the filtering performing region 1336 and perform filtering to improve prediction efficiency.
  • the number of horizontal pixel lines included in the top horizontal prediction pixel line may be a predetermined fixed number, for example, the top horizontal prediction pixel line may include one pixel line.
  • Alternatively, the number of horizontal pixel lines included in the top horizontal prediction pixel line may be determined based on the size of the current block and/or the prediction block 1333. That is, the number of horizontal pixel lines may have a variable value according to the size of the current block and/or the prediction block 1333, and may be, for example, one, two, or four.
  • Also when the intra prediction mode of the current block is the horizontal mode, the encoder and the decoder perform intra prediction on the current block using the left reference pixels, so the correlation between the prediction pixels located in the top region of the prediction block and the top reference pixels may become small. Therefore, in this case as well, the encoder and the decoder may determine the top horizontal prediction pixel line as the filtering performing region and perform filtering.
  • FIG. 14 is a diagram schematically showing an embodiment of a method of determining a filtering performing region based on a size and / or a depth of a current block.
  • the encoder and the decoder may improve encoding efficiency by determining a filtering performing region based on the size (and / or depth) of the current block (and / or the block to be predicted). In this case, the encoder and the decoder may determine a region having a relatively large prediction error as the filtering performing region.
  • 1410 of FIG. 14 illustrates an embodiment of a filtering performing area when the size of the current block is 8x8.
  • In 1410 of FIG. 14, 1413 indicates the current block and 1416 indicates the filtering target region.
  • an intra prediction mode of the current block 1413 corresponds to a vertical right mode (eg, a prediction mode having a mode value of 6).
  • the encoder and the decoder may determine one or more leftmost vertical pixel lines, that is, the left vertical prediction pixel lines, in the prediction block as the filtering performing region 1416.
  • 1420 of FIG. 14 illustrates an embodiment of a filtering performing area when the size of the current block is 32x32.
  • 1423 represents a current block
  • 1426 represents a filtering target region.
  • an intra prediction mode of the current block 1423 corresponds to a vertical right mode (eg, a prediction mode having a mode value of 6).
  • the encoder and the decoder may determine one or more leftmost vertical pixel lines, that is, the left vertical prediction pixel lines, in the prediction block as the filtering performing region 1426.
  • the number of vertical pixel lines constituting the left vertical prediction pixel line may be determined based on the sizes of the current blocks 1413 and 1423 and / or the prediction block.
  • In 1410, the size of the current block 1413 is 8x8, which is relatively small. Since the region having a large prediction error may therefore also be relatively small, the encoder and the decoder may determine two vertical pixel lines, in order from the leftmost position in the prediction block, as the filtering performing region. In 1420, the size of the current block 1423 is 32x32, which is relatively large. Since the region having a large prediction error may therefore also be relatively large, the encoder and the decoder may determine four vertical pixel lines, in order from the leftmost position in the prediction block, as the filtering performing region.
  • The encoder and the decoder may determine the filtering performing region based on the size and/or depth of the current block, as shown in Tables 5 and 6 below. Table 5 shows an embodiment of the filtering performing region according to the block size, and Table 6 shows an embodiment of the filtering performing region according to the depth value of the current block.
  • the current block may correspond to a TU, and the size of the TU may be 2x2, 4x4, 8x8, 16x16, 32x32, 64x64, and the like.
  • the present invention is not limited thereto, and the current block may correspond to a CU and / or a PU, not a TU.
  • the size and / or position of the filtering performing region determined according to the size and / or depth of the current block is not limited to the above-described embodiment, and may be determined to be different from the above-described embodiment.
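A Table-5/6-style mapping from block size to the width of the filtering performing region reduces to a small lookup. The 8x8 → 2 lines and 32x32 → 4 lines entries follow the FIG. 14 example above; the remaining entries are placeholders.

```python
# Illustrative mapping from TU size to the number of pixel lines in the
# filtering performing region (cf. Tables 5 and 6).
LINES_BY_SIZE = {2: 1, 4: 1, 8: 2, 16: 2, 32: 4, 64: 4}

def filtered_line_count(block_size):
    """Number of leftmost vertical (or topmost horizontal) prediction pixel
    lines to filter for a block of the given size; 0 if the size is unknown."""
    return LINES_BY_SIZE.get(block_size, 0)
```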
  • In the embodiments described above, the method of determining the filtering performing region is described based on the vertical right mode, but this is for convenience of description; when the prediction mode of the current block corresponds to a mode other than the vertical right mode, the filtering performing region may be determined in the same or a similar manner.
  • FIG. 15 is a diagram schematically illustrating an embodiment of a method of determining a filtering performing region based on an encoding mode of a neighboring block adjacent to a current block.
  • In FIG. 15, it is assumed that the intra prediction mode of the current block C 1510 corresponds to the vertical right mode. In this case, since the encoder and the decoder perform intra prediction on the current block 1510 using the top reference pixels and/or the top-right reference pixels, the left region of the prediction block may be determined as the filtering target region.
  • the encoder and the decoder may not perform filtering on a region adjacent to the neighboring block in which the encoding mode is the inter mode. That is, the encoder and the decoder may determine the filtering performing region based on the encoding mode of the neighboring block adjacent to the current block.
  • In FIG. 15, the neighboring blocks adjacent to the left of the current block 1510 include a reconstructed neighboring block A 1520 and a reconstructed neighboring block B 1530, where the encoding mode of the neighboring block A 1520 is the intra mode and the encoding mode of the neighboring block B 1530 is the inter mode. In this case, the encoder and the decoder may determine, among the left regions of the prediction block, only the region 1540 adjacent to the intra-coded neighboring block as the filtering target region.
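Restricting the filtering region by the neighbour's coding mode, as in FIG. 15, can be sketched as a per-row test. The row-wise granularity and the "intra"/"inter" string labels are illustrative assumptions.

```python
def rows_to_filter(neighbour_mode_per_row):
    """Per-row sketch of the FIG. 15 rule: within the left filtering region,
    keep only the rows whose left neighbour is intra-coded; rows adjacent to
    inter-coded neighbours are excluded from filtering."""
    return [y for y, mode in enumerate(neighbour_mode_per_row) if mode == "intra"]
```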
  • FIGS. 16A and 16B illustrate embodiments of a filter type determination method according to the intra prediction mode of the current block. 1610 of FIG. 16A illustrates an embodiment of the filter type determination method when the prediction mode of the current block is the DC mode and/or the planar mode.
  • 1615 indicates a prediction block
  • 1620 indicates a filter tap applied to a pixel to be filtered.
  • When the prediction mode of the current block is the DC mode, the encoder and the decoder may determine, as the filtering performing region, the prediction pixels included in the top horizontal prediction pixel line (eg, one horizontal pixel line located at the top of the prediction block 1615) and the left vertical prediction pixel line (eg, the leftmost vertical pixel line in the prediction block 1615), that is, the pixels (0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), (7,0), (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), and (0,7).
  • When the prediction mode of the current block is the planar mode, the encoder and the decoder may, as in the DC mode, determine the prediction pixels included in the top horizontal prediction pixel line and the left vertical prediction pixel line as the filtering performing region.
  • In this case, the encoder and the decoder may apply a 3-tap filter 1629 of [1/4, 2/4, 1/4] to the top-left prediction pixel (0,0) located at the top left of the prediction block. That is, the encoder and the decoder may filter the filtering target pixel based on the filtering target pixel (0,0), the reference pixel (0,-1) adjacent to the top of the filtering target pixel, and the reference pixel (-1,0) adjacent to the left of the filtering target pixel. The filter coefficient applied to the filtering target pixel may be 2/4, and the filter coefficient applied to each of the reference pixels adjacent to the top and to the left of the filtering target pixel may be 1/4.
  • In addition, the encoder and the decoder may apply a horizontal 2-tap filter 1623 of [1/4, 3/4] to each of the prediction pixels included in the left vertical prediction pixel line other than the top-left prediction pixel (eg, (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), and (0,7)). That is, the encoder and the decoder may filter each filtering target pixel (0,y) based on the filtering target pixel (0,y) and the reference pixel (-1,y) adjacent to its left. The filter coefficient applied to the filtering target pixel may be 3/4, and the filter coefficient applied to the reference pixel adjacent to the left of the filtering target pixel may be 1/4.
  • Similarly, the encoder and the decoder may apply a vertical 2-tap filter 1625 of [1/4, 3/4] to each of the prediction pixels included in the top horizontal prediction pixel line other than the top-left prediction pixel (eg, (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), and (7,0)). That is, the encoder and the decoder may filter each filtering target pixel (x,0) based on the filtering target pixel (x,0) and the reference pixel (x,-1) adjacent to its top. The filter coefficient applied to the filtering target pixel may be 3/4, and the filter coefficient applied to the reference pixel adjacent to the top of the filtering target pixel may be 1/4.
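The DC-mode filtering of 1610 in FIG. 16A (a [1/4, 2/4, 1/4] 3-tap filter at the top-left prediction pixel and [1/4, 3/4] 2-tap filters along the rest of the left column and top row) can be sketched as follows. The +2 rounding offset before the integer divide is an assumption; `pred` is indexed as `pred[y][x]`.

```python
def filter_dc_block(pred, top, left):
    """Hedged sketch of the DC-mode edge filtering described above.
    pred is an NxN prediction block indexed pred[y][x]; top[x] and left[y]
    are the reconstructed reference pixels above and to the left."""
    n = len(pred)
    out = [row[:] for row in pred]
    # Top-left pixel (0,0): 3-tap [1/4, 2/4, 1/4] with its top and left
    # reference neighbours.
    out[0][0] = (left[0] + 2 * pred[0][0] + top[0] + 2) // 4
    # Rest of the leftmost column: horizontal 2-tap [1/4, 3/4] with the
    # left reference pixel (-1, y).
    for y in range(1, n):
        out[y][0] = (left[y] + 3 * pred[y][0] + 2) // 4
    # Rest of the top row: vertical 2-tap [1/4, 3/4] with the top
    # reference pixel (x, -1).
    for x in range(1, n):
        out[0][x] = (top[x] + 3 * pred[0][x] + 2) // 4
    return out
```

Interior pixels are left untouched; only the filtering performing region (left column and top row) is modified.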
  • Meanwhile, the encoder and the decoder may use different filter types (eg, filter shapes, filter taps, and/or filter coefficients) according to the size of the current block; that is, the encoder and the decoder may adaptively determine the filter type based on the size of the current block. Alternatively, as in the embodiments described above, the encoder and the decoder may always use a fixed filter type (eg, filter shape, filter tap, and/or filter coefficient) regardless of the size of the current block and/or the prediction block.
  • 1630 of FIG. 16A illustrates an embodiment of the filter type determination method when the prediction mode of the current block is a vertical right mode (eg, a prediction mode having mode values of 5, 6, 12, 13, 22, 23, 24, and 25).
  • 1635 represents a prediction block
  • 1640 represents a filter tap applied to a filtering target pixel.
  • When the prediction mode of the current block is the vertical right mode, the encoder and the decoder perform intra prediction on the current block based on the top reference pixels and/or the top-right reference pixels, and thus the correlation between the prediction pixels located in the left region of the prediction block 1635 and the left reference pixels may become small. In this case, the encoder and the decoder may determine, as the filtering performing region, the prediction pixels included in the left vertical prediction pixel line (eg, the leftmost vertical pixel line in the prediction block 1635), that is, (0,0), (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), and (0,7).
• likewise, when the encoder and the decoder perform intra prediction on the current block by using only the upper reference pixels, the correlation between the prediction pixels located in the left region and the left reference pixels may be small. Therefore, even in this case, the encoder and the decoder may determine the prediction pixels included in the left vertical prediction pixel line as the filtering performing region.
  • the filter type applied to the vertical mode may be different from the filter type applied to the vertical right mode.
• the encoder and the decoder may apply a [1/4, 3/4] diagonal 2-tap filter (1640) to each of the prediction pixels included in the left vertical prediction pixel line (e.g., (0,0), (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), (0,7)).
• the encoder and the decoder may perform filtering on the filtering target pixel (0, y) based on the filtering target pixel and the reference pixel (-1, y+1) adjacent to the bottom of the reference pixel on the left side of the filtering target pixel.
• the filter coefficient applied to the filtering target pixel may be 3/4, and the filter coefficient applied to the reference pixel immediately adjacent to the bottom of the reference pixel on the left of the filtering target pixel may be 1/4.
• 1650 of FIG. 16B illustrates an example of a filter type determination method when the prediction mode of the current block is a horizontal-down mode (e.g., a prediction mode having a mode value of 8, 9, 16, 17, 30, 31, 32, or 33).
• in 1650 of FIG. 16B, 1655 indicates a prediction block and 1660 indicates a filter tap applied to a filtering target pixel.
• the encoder and the decoder perform intra prediction on the current block by using the left reference pixel and/or the lower-left reference pixel, and thus the correlation between the prediction pixels located in the upper region of the prediction block 1655 and the upper reference pixels may be reduced.
• the encoder and the decoder may determine the prediction pixels included in the top horizontal prediction pixel line (e.g., the one horizontal pixel line located at the top of the prediction block 1655), that is, (0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), (7,0), as the filtering performing region.
• likewise, when the encoder and the decoder perform intra prediction on the current block using only the left reference pixels, the correlation between the prediction pixels located in the upper region of the prediction block 1655 and the upper reference pixels may be small. Therefore, even in this case, the encoder and the decoder may determine the prediction pixels included in the upper horizontal prediction pixel line as the filtering performing region.
  • the filter type applied to the horizontal mode may be different from the filter type applied to the horizontal bottom mode.
• the encoder and the decoder may apply a [1/4, 3/4] diagonal 2-tap filter (1660) to each of the prediction pixels included in the top horizontal prediction pixel line (e.g., (0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), (7,0)).
• the encoder and the decoder may perform filtering on the filtering target pixel (x, 0) based on the filtering target pixel and the reference pixel (x+1, -1) adjacent to the right side of the reference pixel above the filtering target pixel.
• the filter coefficient applied to the filtering target pixel may be 3/4, and the filter coefficient applied to the reference pixel (x+1, -1) immediately adjacent to the right side of the reference pixel above the filtering target pixel may be 1/4.
• 1670 of FIG. 16B illustrates an embodiment of a method of adaptively determining a filter type (e.g., filter shape, filter coefficients, and/or filter tap) according to the intra prediction mode (particularly, a directional prediction mode) of the current block.
• in 1670 of FIG. 16B, 1675 represents a prediction block and 1680 represents a filter tap applied to a filtering target pixel.
  • the encoder and the decoder may apply a predetermined fixed filter type to each of the vertical right mode and / or the horizontal bottom mode.
  • the encoder and the decoder may apply various filter types according to the intra prediction mode in addition to the above filter types.
  • the encoder and the decoder may adaptively determine the filter type based on the intra prediction mode of the current block.
• the encoder and the decoder may use a 3-tap filter (1681) that performs filtering based on the filtering target pixel (x, y), the reference pixel (x+2, y-1), and the reference pixel (x+3, y-1).
• in this case, the filter coefficient applied to the filtering target pixel (x, y) may be 12, the filter coefficient applied to the reference pixel (x+2, y-1) may be 3, and the filter coefficient applied to the reference pixel (x+3, y-1) may be 1.
• the encoder and the decoder may use a 3-tap filter (1683, 1685, or 1687) that performs filtering based on the filtering target pixel (x, y), the reference pixel (x+1, y-1), and the reference pixel (x+2, y-1).
• as an example, the filter coefficient applied to the filtering target pixel (x, y) may be 12, the filter coefficient applied to the reference pixel (x+1, y-1) may be 1, and the filter coefficient applied to the reference pixel (x+2, y-1) may be 3 (1683).
• as another example, the filter coefficient applied to the filtering target pixel (x, y) may be 12, the filter coefficient applied to the reference pixel (x+1, y-1) may be 2, and the filter coefficient applied to the reference pixel (x+2, y-1) may be 2 (1685).
• as still another example, the filter coefficient applied to the filtering target pixel (x, y) may be 8, the filter coefficient applied to the reference pixel (x+1, y-1) may be 6, and the filter coefficient applied to the reference pixel (x+2, y-1) may be 2 (1687).
• the encoder and the decoder may also use a 2-tap filter (1689) that performs filtering based on the filtering target pixel (x, y) and the reference pixel (x+1, y-1). In this case, the filter coefficient applied to the filtering target pixel (x, y) may be 8, and the filter coefficient applied to the reference pixel (x+1, y-1) may be 8.
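All of the coefficient sets above ((12, 3, 1), (12, 1, 3), (12, 2, 2), (8, 6, 2), and (8, 8)) sum to 16, so the filtering can be done in integer arithmetic with a rounding offset of 8 and a right shift by 4. The helper below is an illustrative sketch under that assumption, not the normative procedure.

```python
# Illustrative sketch: apply an n-tap filter whose coefficients sum to 16,
# normalizing with a rounding offset of 8 and a right shift by 4.

def apply_ntap(coeffs_and_pixels):
    """coeffs_and_pixels: list of (coefficient, pixel_value) pairs summing to 16."""
    total = sum(c * p for c, p in coeffs_and_pixels)
    return (total + 8) >> 4

# 3-tap filter 1681: 12 * target + 3 * ref(x+2, y-1) + 1 * ref(x+3, y-1)
target, ref_a, ref_b = 100, 60, 60
print(apply_ntap([(12, target), (3, ref_a), (1, ref_b)]))  # 90

# 2-tap filter 1689: 8 * target + 8 * ref(x+1, y-1)
print(apply_ntap([(8, target), (8, ref_a)]))  # 80
```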
• the encoder and the decoder may use at least one of the upper reference pixel and the upper-right reference pixel for intra prediction, and may also use at least one of the left reference pixel and the lower-left reference pixel for intra prediction. In this case, since the prediction pixels located in the left region and the upper region of the prediction block can maintain their correlation with the reference pixels, the encoder and the decoder may not perform filtering on the prediction block.
• the encoder and the decoder may determine whether to perform filtering on the prediction block based on the color component information of the current block. For example, the encoder and the decoder may perform the filtering process described above with reference to FIGS. 16A and 16B only when the current block corresponds to a luma block. That is, the filtering processes according to the above-described embodiments may be applied only when the current block corresponds to a luma block, and may not be applied when the current block corresponds to a chroma block.
  • FIG. 17 is a diagram schematically illustrating a filter type determination method according to the embodiments of FIGS. 16A and 16B.
• 1710 of FIG. 17 illustrates an embodiment of a filter type when the prediction mode of the current block is the DC mode and/or the planar mode.
  • 1710 of FIG. 17 shows the same filter type as the filter type shown in 1610 of FIG. 16A.
• when the prediction mode of the current block is a DC mode (e.g., a prediction mode with a mode value of 2) and/or a planar mode (e.g., a prediction mode with a mode value of 34), the encoder and the decoder may apply a 3-tap filter to the upper-left prediction pixel (e.g., the c pixel in 1710 of FIG. 17) located at the top left of the prediction block.
• the encoder and the decoder may apply a horizontal 2-tap filter to each pixel except for the upper-left prediction pixel among the prediction pixels included in the left vertical prediction pixel line (e.g., the g pixel in 1710 of FIG. 17).
• the encoder and the decoder may apply a vertical 2-tap filter to each pixel except for the upper-left prediction pixel among the prediction pixels included in the upper horizontal prediction pixel line (e.g., the e pixel in 1710 of FIG. 17). In one embodiment, this may be represented by Equation 1 below.
  • F_x represents the filtered prediction pixel value generated by performing filtering on the prediction pixel value at the x position.
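Equation 1 itself is not reproduced in this excerpt. From the taps and coefficients described above (a [1, 2, 1]/4 3-tap filter on the corner pixel and [1/4, 3/4] 2-tap filters on the edge pixels; compare the integer corner form given later in Equation 11), it plausibly has the following form, which is a hedged reconstruction rather than the patent's literal equation:

```latex
\begin{aligned}
F_{0,0} &= \bigl(p_{-1,0} + 2\,p_{0,0} + p_{0,-1} + 2\bigr) \gg 2,\\
F_{x,0} &= \bigl(p_{x,-1} + 3\,p_{x,0} + 2\bigr) \gg 2, \qquad x = 1,\dots,nS-1,\\
F_{0,y} &= \bigl(p_{-1,y} + 3\,p_{0,y} + 2\bigr) \gg 2, \qquad y = 1,\dots,nS-1.
\end{aligned}
```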
• 1730 of FIG. 17 illustrates an embodiment of a filter type when the prediction mode of the current block is a vertical-right mode (e.g., a prediction mode having a mode value of 5, 6, 12, 13, 22, 23, 24, or 25). 1730 of FIG. 17 shows the same filter type as the filter type shown in 1630 of FIG. 16A.
• the encoder and the decoder may apply a 2-tap filter to each of the prediction pixels included in the left vertical prediction pixel line (e.g., the i pixel and the k pixel in 1730 of FIG. 17).
  • the encoder and the decoder may determine the shape of the filter as a diagonal shape. In one embodiment, this can be represented by the following equation.
  • F_x represents the filtered prediction pixel value generated by performing filtering on the prediction pixel value at the x position.
• 1750 of FIG. 17 illustrates an embodiment of a filter type when the prediction mode of the current block is a horizontal-down mode (e.g., a prediction mode having a mode value of 8, 9, 16, 17, 30, 31, 32, or 33). 1750 of FIG. 17 shows the same filter type as the filter type shown in 1650 of FIG. 16B.
• the encoder and the decoder may apply a 2-tap filter to each of the prediction pixels included in the top horizontal prediction pixel line (e.g., the m pixel and the o pixel in 1750 of FIG. 17).
• the encoder and the decoder may determine the shape of the filter as a diagonal shape. In one embodiment, this can be represented by the following equation.
  • F_x represents the filtered prediction pixel value generated by performing filtering on the prediction pixel value at the x position.
  • FIG. 18 is a diagram schematically showing an embodiment of a filter type applied when the prediction mode of the current block is a vertical mode and / or a horizontal mode.
• terms such as the first reference pixel, the second reference pixel, and the third reference pixel are used independently in 1810 of FIG. 18 and in 1820 of FIG. 18. For example, the first reference pixel used in 1810 of FIG. 18 is not necessarily the same as the first reference pixel used in 1820 of FIG. 18, and the second reference pixel and the third reference pixel may likewise have independent meanings in 1810 of FIG. 18 and 1820 of FIG. 18.
  • the filter determined by the filter type may not be a filter defined by the filter shape, the filter tap, the filter coefficient, and the like.
  • the encoder and the decoder may perform the filtering process by adding an offset value determined by a predetermined process to the pixel value of the reference pixel.
• the filtering process may be performed in combination with the prediction block generation process. That is, the filtered prediction pixel value of each pixel in the current block may be derived by only the above-described filtering process; in this case, the above-described filtering process may correspond to one prediction process including both the prediction pixel generation process and the filtering process for the generated prediction pixels.
  • the filtering process may also be viewed as a process of generating a final prediction pixel (and / or filtered prediction pixel) using the reference pixel. Therefore, in FIG. 18, embodiments will be described in terms of prediction pixel generation.
• 1810 of FIG. 18 illustrates an embodiment of a prediction pixel generation method when the prediction mode of the current block is the vertical mode.
  • the encoder and the decoder may generate the prediction block by performing intra prediction on the current block by using the upper reference pixel.
• the encoder and the decoder may generate the prediction block as follows for each of the pixels on the left vertical pixel line, that is, (0,0), (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), and (0,7).
• assume that the current prediction target pixel is the pixel (0,4) among the pixels on the left vertical pixel line.
• the encoder and the decoder may fill the position of the prediction target pixel with the pixel value of the first reference pixel (0,-1), which is located on the same vertical line as the prediction target pixel among the upper reference pixels (in this example, the leftmost pixel among the upper reference pixels). That is, when the prediction mode of the current block 1815 is the vertical mode, the pixel value of the first reference pixel may be determined as the prediction pixel value of the prediction target pixel.
  • the encoder and the decoder may derive the final prediction pixel value by adding an offset value to the first reference pixel value.
  • the process of adding the offset value may correspond to the filtering process or may correspond to a part of the prediction pixel generation process.
• the offset value may be derived based on the second reference pixel (-1,4) adjacent to the left side of the prediction target pixel and the third reference pixel (-1,-1) adjacent to the left side of the first reference pixel.
  • the offset value may correspond to a value obtained by subtracting a pixel value of the third reference pixel from a pixel value of the second reference pixel.
  • the encoder and the decoder may derive the prediction value of the prediction target pixel by adding the difference value between the second reference pixel value and the third reference pixel value to the first reference pixel value.
• the above-described prediction pixel generation process may be applied identically or similarly to the pixels other than the pixel (0,4) among the pixels on the left vertical pixel line.
• the above-described prediction pixel generation process may be represented by Equation 4 as an example.
• p'[x, y] represents the final prediction pixel value for the prediction target pixel at the position (x, y), and p[x, -1] represents the first reference pixel located on the same vertical line as the prediction target pixel among the upper reference pixels.
  • p [-1, y] represents a second reference pixel adjacent to the left side of the prediction target pixel
  • p [-1, -1] represents a third reference pixel adjacent to the left side of the first reference pixel.
  • nS represents the height of the current block.
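A minimal sketch of this left-column offset process, following the variable definitions of Equation 4 (the first reference pixel p[x,-1], the second reference pixel p[-1,y], and the third reference pixel p[-1,-1]); clipping to an 8-bit sample range is an assumption of the sketch, not stated in the text above.

```python
# Sketch of the vertical-mode left-column offset: the difference between the
# left reference pixel p[-1, y] and the corner reference pixel p[-1, -1] is
# added to the first reference pixel p[0, -1]. Clipping to [0, 255] is assumed.

def vertical_predict_left_column(top_ref, left_ref, corner, nS):
    """top_ref[x] = p[x, -1]; left_ref[y] = p[-1, y]; corner = p[-1, -1]."""
    col = []
    for y in range(nS):
        val = top_ref[0] + (left_ref[y] - corner)  # offset added to 1st reference
        col.append(max(0, min(255, val)))          # clip (assumed 8-bit range)
    return col

print(vertical_predict_left_column([100, 100], [104, 96], 100, 2))  # [104, 96]
```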
  • the encoder and the decoder may apply the above-described prediction pixel generation process to the two leftmost pixel lines located in the current block 1815.
  • the prediction pixel generation process may be represented by Equation 5 as an example.
  • p '[x, y] represents the final predicted pixel value for the pixel to be predicted at the position (x, y)
  • p [x, y] represents the predicted pixel value generated by a normal vertical prediction process.
  • p [-1, y] represents a reference pixel located on the same horizontal line as the prediction target pixel among the left reference pixels
  • p [-1, -1] represents an upper left corner reference pixel.
  • the above process of adding the offset value may be applied only when the current block is a luma block and may not be applied when the current block is a chroma block.
  • the encoder and the decoder may directly determine the first reference pixel as the prediction pixel value of the prediction target pixel without applying an offset value.
  • FIG. 18 illustrates an embodiment of a prediction pixel generation method when the prediction mode of the current block is the horizontal mode.
  • the encoder and the decoder may generate the prediction block by performing intra prediction on the current block by using the left reference pixel.
  • the prediction pixel located in the upper region of the prediction block may have a large prediction error.
• the encoder and the decoder may generate the prediction block and/or prediction pixels as follows for each of the pixels on the upper horizontal pixel line, that is, (0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), and (7,0).
• assume that the current prediction target pixel is the pixel (4,0) among the pixels on the upper horizontal pixel line.
• the encoder and the decoder may fill the position of the prediction target pixel with the pixel value of the first reference pixel (-1,0), which is located on the same horizontal line as the prediction target pixel among the left reference pixels (in this example, the topmost pixel among the left reference pixels). That is, when the prediction mode of the current block 1825 is the horizontal mode, the pixel value of the first reference pixel may be determined as the prediction pixel value of the prediction target pixel.
  • the encoder and the decoder may derive the final prediction pixel value by adding an offset value to the first reference pixel value.
  • the process of adding the offset value may correspond to the filtering process or may correspond to a part of the prediction pixel generation process.
• the offset value may be derived based on the second reference pixel (4,-1) adjacent to the top of the prediction target pixel and the third reference pixel (-1,-1) adjacent to the top of the first reference pixel.
  • the offset value may correspond to a value obtained by subtracting a pixel value of the third reference pixel from a pixel value of the second reference pixel.
  • the encoder and the decoder may derive the prediction value of the prediction target pixel by adding the difference value between the second reference pixel value and the third reference pixel value to the first reference pixel value.
• the above-described prediction pixel generation process may be applied identically or similarly to the pixels other than the pixel (4,0) among the pixels on the upper horizontal pixel line.
• the above-described prediction pixel generation process may be represented by Equation 6 as an example.
• p'[x, y] represents the final prediction pixel value for the prediction target pixel at the position (x, y), and p[-1, y] represents the first reference pixel located on the same horizontal line as the prediction target pixel among the left reference pixels.
  • p [x, -1] represents a second reference pixel adjacent to the top of the prediction target pixel
  • p [-1, -1] represents a third reference pixel adjacent to the top of the first reference pixel.
  • nS represents the width of the current block.
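Equation 6 itself is not reproduced in this excerpt. From the variable definitions above, and by symmetry with the vertical-mode case, it plausibly has the following form for the top horizontal pixel line (a hedged reconstruction, not the patent's literal equation):

```latex
p'[x, 0] = p[-1, 0] + \bigl(p[x, -1] - p[-1, -1]\bigr), \qquad x = 0, \dots, nS-1.
```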
  • the region to which the offset and / or filtering is applied is not limited to the above-described embodiment.
  • the encoder and the decoder may apply the above-described prediction pixel generation process to the two horizontal pixel lines located at the top of the current block 1825.
  • the prediction pixel generation process may be represented by Equation 7, for example.
  • p '[x, y] represents the final predicted pixel value for the pixel to be predicted at the position (x, y)
  • p [x, y] represents the predicted pixel value generated by a general horizontal prediction process.
  • p [x, -1] represents a reference pixel located on the same vertical line as the prediction target pixel among the upper reference pixels
  • p [-1, -1] represents an upper left corner reference pixel.
  • the above-described process of adding the offset value may be applied only when the current block is a luma block and may not be applied when the current block is a chroma block.
  • the encoder and the decoder may directly determine the first reference pixel as the prediction pixel value of the prediction target pixel without applying an offset value.
  • FIG. 19 is a view schematically showing another embodiment of a filter type according to the present invention.
• the encoder and the decoder perform intra prediction on the current block based on the left reference pixel and/or the lower-left reference pixel, and thus the correlation between the prediction pixels located in the upper region of the prediction block 1910 and the upper reference pixels may be reduced.
  • the encoder and the decoder may perform filtering on the prediction pixels included in the top horizontal prediction pixel line (eg, one horizontal pixel line located at the top of the prediction block 1910).
• the filtering method according to FIG. 19 may be applied in the same manner when filtering is performed on the pixels on the left vertical prediction pixel line (e.g., the one leftmost vertical pixel line in the prediction block 1910).
• the encoder and the decoder may perform filtering on a prediction pixel in the prediction block 1910, that is, the prediction pixel B (1920).
  • the filtering process may correspond to adding an appropriate offset value to the pixel value of the prediction pixel 1920.
  • the offset value may be derived based on the reference pixel.
• for example, the reference pixel used to derive the offset value may be the reference pixel A (1930) adjacent to the top of the filtering target pixel 1920.
  • the reference pixel used to derive the offset value may be a reference pixel adjacent to the left side of the filtering target pixel.
  • the encoder and the decoder may perform intra prediction on the reference pixel 1930 to obtain a prediction value of the reference pixel, that is, a prediction reference pixel value.
  • the intra prediction may be directional prediction.
• the encoder and the decoder may perform prediction on the reference pixel 1930 based on the same intra prediction mode (and/or prediction direction) 1950 as the prediction mode (and/or prediction direction) 1940 of the current block. If the position of the prediction reference pixel determined based on the prediction direction of the intra prediction mode and the reference pixel is not an integer position, the encoder and the decoder may obtain the prediction reference pixel value by interpolation based on the reference pixels at integer positions.
  • the encoder and the decoder may derive the offset value based on the pixel value difference between the reference pixel and the predictive reference pixel.
  • the offset value may correspond to a value obtained by dividing the difference value between the reference pixel value and the predictive reference pixel value by four.
  • the encoder and the decoder may derive the pixel value of the filtered prediction pixel by adding the derived offset value to the pixel value of the prediction pixel 1920.
• the above-described filtering process may be represented by Equation 8 as an example.
• in Equation 8, B represents the pixel value of the prediction pixel 1920, A represents the pixel value of the reference pixel 1930 for the prediction pixel, Ref1 represents the pixel value of the prediction reference pixel for A, and B' represents the pixel value of the filtered prediction pixel.
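Equation 8 is not reproduced in this excerpt. From the description above (the offset is the difference between the reference pixel value and the prediction reference pixel value, divided by four), it plausibly reads as follows; this is a hedged reconstruction:

```latex
B' = B + \frac{A - \mathrm{Ref1}}{4}
```

An integer implementation would typically compute the offset as (A - Ref1) >> 2, though the exact rounding behavior is not specified in this excerpt.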
  • the process of determining whether to perform the filtering, the process of determining the filtering region, and the process of determining the filter type are described independently, but the encoder and the decoder may combine the above processes and process them as one process. In this case, the encoder and the decoder may determine two or more of the filtering operation determination process, the filtering execution region determination process, and the filter type determination process based on one table.
  • whether to perform filtering according to the intra prediction mode, the filtering performing region, and the filter type may be represented by one table.
  • the encoder and the decoder may store the same table, and the encoder and the decoder may determine whether to perform filtering, a filtering execution region, and a filter type based on the intra prediction mode and the stored table.
  • Table 7 below shows an embodiment of a table indicating whether filtering is performed according to an intra prediction mode, a filtering performing region, and a filter type.
• when the value assigned to the filter type is 0, it may indicate that filtering is not performed on the prediction block. When the value assigned to the filter type is 1, 2, or 3, it may indicate that filtering is performed on the prediction block.
• when the value assigned to the filter type is 1, it may indicate that the filtering performing region and the filter type of the DC mode and/or planar mode described above in 1610 of FIG. 16A are applied.
• when the value assigned to the filter type is 2, it may indicate that the filtering performing region and the filter type of the vertical-right mode described above in 1630 of FIG. 16A are applied.
• when the value assigned to the filter type is 3, it may indicate that the filtering performing region and the filter type of the horizontal-down mode described above in 1650 of FIG. 16B are applied.
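Table 7's concrete entries are not reproduced in this excerpt. The sketch below only illustrates how a single shared mode-to-filter-type table could drive all three decisions (whether to filter, where, and with which taps) at once; the mode-to-type assignments in the dictionary are placeholders, not the patent's actual table.

```python
# Hypothetical illustration of a shared lookup table in the spirit of Table 7:
# filter type 0 = no filtering; 1 = DC/planar-style (1610 of FIG. 16A);
# 2 = vertical-right style (1630 of FIG. 16A); 3 = horizontal-down style
# (1650 of FIG. 16B). The entries below are placeholder values.

FILTER_TYPE_BY_MODE = {0: 1, 2: 1, 5: 2, 6: 2, 8: 3, 9: 3}  # placeholder table

def filtering_decision(intra_pred_mode):
    """Return (perform_filtering, filtering_region, filter_type) for a mode."""
    ftype = FILTER_TYPE_BY_MODE.get(intra_pred_mode, 0)
    region = {0: None,
              1: "top row + left column",
              2: "left column",
              3: "top row"}[ftype]
    return ftype != 0, region, ftype

print(filtering_decision(5))   # (True, 'left column', 2)
print(filtering_decision(99))  # (False, None, 0)
```

Because the encoder and the decoder store the same table, no filter-related syntax needs to be signaled; both sides reach identical decisions from the intra prediction mode alone.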
  • the table shown in Table 7 may further include information about whether to apply the filter according to the block size. That is, the table including information on whether the filter is applied according to the intra prediction mode, the filter application region, and the filter type may include information on whether the filter is applied according to the block size.
• the encoder and the decoder may store the same table, and may determine whether to perform filtering, the filtering performing region, and the filter type based on the intra prediction mode, the size of the current block (and/or prediction block), and the stored table.
  • the encoder and the decoder may improve filtering efficiency by adaptively determining whether to perform filtering according to the size of the current block and / or the prediction block.
  • Table 8 below shows an embodiment of a table configured by considering the block size as well as the intra prediction mode.
• the values of 0, 1, 2, and 3 assigned to the filter type in Table 8 may have the same meanings as in Table 7. Referring to Table 8, the encoder and the decoder may determine whether to perform filtering based on the size of the current block and/or the prediction block, and may determine whether to perform filtering, the filtering performing region, the filter type, and the like based on the intra prediction mode.
  • whether to perform filtering according to an intra prediction mode, a filtering performing region, and a filter type may be shown in Table 9 below.
  • FIG. 20 is a diagram for describing an intra prediction mode and a filter type applied to Table 9.
• 2010 of FIG. 20 illustrates an embodiment of the prediction directions of the intra prediction modes and the mode values assigned to the prediction directions.
• in the embodiments of Table 9, the intra prediction modes (prediction directions and mode values) shown in 2010 of FIG. 20 are used. However, the embodiment of Table 9 is not limited to 2010 of FIG. 20.
  • the encoder and the decoder may not perform filtering on the prediction block.
  • the encoder and the decoder may perform filtering on the prediction block.
• Tx allocated to the filter application region represents the x horizontal pixel lines located at the top of the prediction block, that is, the top horizontal prediction pixel lines, and Lx represents the x vertical pixel lines located at the leftmost side of the prediction block, that is, the left vertical prediction pixel lines.
  • TxLx allocated to the filter application region may represent an area including both the upper horizontal prediction pixel line and the left vertical prediction pixel line.
  • the value of x may be 1,2 or 4.
  • x may be some fixed value, for example x may always be one.
  • the top horizontal prediction pixel line may include only one horizontal pixel line
  • the left vertical prediction pixel line may also include only one vertical pixel line.
  • Non-zero filter types of Table 9 may include a, b, c, d, and e.
  • the encoder and the decoder may perform filtering based on the filtering execution region and the filter type described above in 1610 of FIG. 16A. At this time, the encoder and the decoder are based on the filter coefficients described above with reference to 1610 of FIG. 16A for the prediction pixels included in the top horizontal prediction pixel line (one pixel line) and the left vertical prediction pixel line (one pixel line). Filtering can be performed.
  • the encoder and the decoder may perform filtering based on the filtering execution region and the filter type described above with reference to FIG. 18.
• when the prediction mode of the current block is a vertical mode (e.g., a prediction mode with a mode value of 1), the encoder and the decoder may perform filtering on the prediction pixels included in the left vertical prediction pixel line (e.g., two pixel lines) as in 1810 of FIG. 18.
• when the prediction mode of the current block is a horizontal mode (e.g., a prediction mode with a mode value of 2), the encoder and the decoder may perform filtering on the prediction pixels included in the upper horizontal prediction pixel line (e.g., two pixel lines) as in 1820 of FIG. 18.
  • the encoder and the decoder may perform filtering based on the filtering performing region and the filter type described above with reference to 1650 of FIG. 16B. . In this case, the encoder and the decoder may apply a diagonal filter of [1, 3] to the prediction pixels included in the upper horizontal prediction pixel line.
  • the encoder and the decoder may perform the filtering based on the filtering performing region and the filter type described above with reference to 1630 of FIG. 16A. . In this case, the encoder and the decoder may apply a diagonal filter of [1, 3] to the prediction pixels included in the left vertical prediction pixel line.
  • the value assigned to the filter type may be d.
  • block 2023 represents a prediction block, and a prediction direction when the intra prediction mode of the current block is 10 may be represented as 2025.
  • the filtered prediction pixel value may be represented by the following equation (9).
  • p '[x, y] may represent the filtered prediction pixel value
  • p [x, y] may represent the prediction pixel value before filtering at the (x, y) position.
  • p [x, -1] may represent a reference pixel located on the same vertical line as the prediction pixel among the upper reference pixels.
  • the prediction direction may be represented as 2027.
  • a value assigned to the filter type may be e.
  • the filtered prediction pixel value may be represented by the following equation (10).
  • p '[x, y] may represent the filtered prediction pixel value
  • p [x, y] may represent the prediction pixel value before filtering at the (x, y) position.
  • p [-1, y] may represent a reference pixel located on the same horizontal line as the prediction pixel among the left reference pixels.
  • Rp [-1, y] may represent a predicted value for the reference pixel of p [-1, y], that is, a predicted reference pixel value.
  • the encoder and the decoder may perform prediction based on the same intra prediction mode as that of the current block for the reference pixel of p [-1, y] to derive the prediction reference pixel value.
  • the filter applied according to the value assigned to each filter type is not limited to the above-described embodiment. That is, the filter applied according to the value assigned to each filter type may vary depending on implementation and / or need, and whether or not the filter is applied may also be set differently from the above-described embodiment.
  • IntraPredMode represents the intra prediction mode of the current block
  • nS represents the horizontal and vertical size of the prediction block
  • whether to perform filtering according to the intra prediction mode, the filtering performing region, and the filter type may be determined by Table 10 below.
  • intraPostFilterType represents filter type information applied to a prediction block.
  • the filter type information may include all information on whether to perform filtering, a filtering performing region, and a filter type.
  • intraPostFilterType may be represented as intraPostFilterType[IntraPredMode], which may mean that the value assigned to intraPostFilterType is determined by IntraPredMode.
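The indexing intraPostFilterType[IntraPredMode] can be sketched as a simple lookup table keyed by the intra prediction mode. The concrete mode-to-type mapping below is illustrative only (it is not Table 10, which is not reproduced in this excerpt); the mode numbers and type labels are assumptions for the sketch.

```python
# Illustrative stand-in for intraPostFilterType[IntraPredMode]: each intra
# prediction mode maps to a filter-type value that encodes whether to filter,
# which region to filter, and which filter to use. The entries are assumed.
INTRA_POST_FILTER_TYPE = {
    0: "none",   # assumed: no post filter for this mode
    1: "dc",     # assumed: DC mode filters the boundary pixel lines
    10: "hor",   # assumed: horizontal mode filters the top prediction row
    26: "ver",   # assumed: vertical mode filters the left prediction column
}

def intra_post_filter_type(intra_pred_mode):
    """Return the filter type for a mode; unknown modes default to no filtering."""
    return INTRA_POST_FILTER_TYPE.get(intra_pred_mode, "none")
```

A table lookup like this lets the encoder and decoder derive identical filtering decisions from the mode alone, with no extra signaling.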
  • the encoder and the decoder may derive the predSamplesF [x, y] value by the following equation (11).
  • predSamplesF[0, 0] = (p[-1, 0] + 2 * predSamples[0, 0] + p[0, -1] + 2) >> 2
  • the encoder and the decoder may derive the predSamplesF [x, y] value by the following equation (12).
  • the encoder and the decoder may derive the predSamplesF [x, y] value by the following equation (13).
  • the encoder and the decoder may derive the predSamplesF [x, y] value by the following equation (14).
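Equation (11) above gives the corner sample explicitly: the top-left prediction pixel is replaced by a weighted average of itself and the two adjacent reference pixels, with rounding. The one-liner below mirrors that equation directly; equations (12) through (14) are not reproduced in this excerpt, so only the corner term is shown.

```python
def filter_dc_corner(pred00, left_ref0, top_ref0):
    """Equation (11): predSamplesF[0,0] = (p[-1,0] + 2*predSamples[0,0] + p[0,-1] + 2) >> 2.

    pred00:    predSamples[0, 0], the unfiltered corner prediction pixel.
    left_ref0: p[-1, 0], the adjacent left reference pixel.
    top_ref0:  p[0, -1], the adjacent upper reference pixel.
    """
    # The +2 term rounds the 1:2:1 weighted average before the >> 2 division.
    return (left_ref0 + 2 * pred00 + top_ref0 + 2) >> 2
```

When the two reference pixels straddle the prediction value symmetrically, the pixel is unchanged; otherwise it is pulled toward the reference pixels, smoothing the block boundary.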
  • the encoder and the decoder may set the application range differently according to the size and / or depth of the current block (and / or prediction block).
  • the application range of the present invention may be set differently according to the size of the PU and / or the size of the TU, or may be set differently according to the depth value of the CU.
  • the encoder and the decoder may use the size of the block and / or the depth value of the block as variables to determine the application range of the present invention.
  • the block may correspond to a CU, a PU, and / or a TU.
  • the encoder and the decoder may apply the present invention only to a block having a size greater than or equal to the variable; as another example, the present invention may be applied only to a block having a size smaller than or equal to the variable.
  • the encoder and the decoder may apply the present invention only to a block having a size corresponding to the variable value.
  • Table 11 shows an embodiment of the application range of the present invention when the size value of the block used as a variable for determining the application range of the present invention is 16x16.
  • O may indicate that the present invention is applied to the corresponding block size
  • X may indicate that the present invention is not applied to the corresponding block size.
  • the encoder and the decoder may apply the present invention only to blocks having a size larger than or equal to the block size (16x16) used as a variable.
  • the encoder and the decoder may apply the present invention only to blocks having a size smaller than or equal to the block size (16x16) used as a variable.
  • the encoder and the decoder may apply the present invention only to blocks having the same size as the block size (16x16) used as a variable.
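The three application-range rules around the 16x16 variable (apply at or above, at or below, or only at the variable size) reduce to one comparison. The helper below is a hedged sketch of that check; the function name, the `mode` parameter, and its string values are illustrative, not terminology from the patent.

```python
def filtering_applies(block_size, variable_size=16, mode=">="):
    """Decide whether the boundary filtering applies to a block.

    block_size:    the block's width/height (blocks here are square).
    variable_size: the variable used to determine the application range
                   (16 in the Table 11 example).
    mode:          ">=" applies to blocks at or above the variable size,
                   "<=" at or below it, "==" only at exactly that size.
    """
    if mode == ">=":
        return block_size >= variable_size
    if mode == "<=":
        return block_size <= variable_size
    return block_size == variable_size  # "==" case
```

Which of the three rules is in force is a fixed design choice shared by encoder and decoder, so no per-block signaling is needed beyond the variable itself.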
  • the variable value (the size value and/or depth value of the block) for determining the application range of the present invention may be a predetermined fixed value.
  • the variable value may be previously stored in the encoder and the decoder, and the encoder and the decoder may determine the application range of the present invention based on the stored variable value.
  • the variable value for determining the application range of the present invention may vary depending on the profile or level.
  • the variable value corresponding to each profile may be a predetermined fixed value.
  • the variable value corresponding to each level may be a predetermined fixed value.
  • the encoder may encode information about the variable value (the size value of a block and/or the depth value of a block) and transmit the encoded information to the decoder through the bitstream.
  • the variable value information transmitted through the bitstream may be included in a sequence parameter set (SPS), a picture parameter set (PPS), a slice header, and the like.
  • SPS: sequence parameter set
  • PPS: picture parameter set
  • the decoder may derive the variable value from the received bitstream and determine the application range of the present invention based on the derived variable value.
  • the indicator used to indicate the variable value information may be log2_intra_prediction_filtering_enable_max_size_minus2. For example, when the variable value is 32x32, the value assigned to the indicator may be 3, and when the variable value is 4x4, the value assigned to the indicator may be 0.
  • the indicator used to indicate variable value information may be intra_prediction_filtering_enable_max_cu_depth.
  • when the value assigned to the indicator is 0, the present invention may be applied to blocks having a size of 64x64 or more; when the value assigned to the indicator is 1, the present invention may be applied to blocks having a size of 32x32 or more; and when the value assigned to the indicator is 4, the present invention may be applied to blocks having a size of 4x4 or more.
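The two indicators map to a minimum block size in straightforward ways: with the "_minus2" indicator, the size is 2 raised to the indicator value plus 2, and with the max-CU-depth indicator, each depth step halves the largest coding unit size. The sketch below assumes a 64x64 largest coding unit for the depth mapping, consistent with the depth-0 example above; the function names are illustrative.

```python
def size_from_log2_minus2(v):
    """log2_intra_prediction_filtering_enable_max_size_minus2 -> block size.

    size = 2 ** (v + 2), so v = 0 gives 4 (4x4) and v = 3 gives 32 (32x32).
    """
    return 1 << (v + 2)

def min_size_from_max_cu_depth(depth, max_cu_size=64):
    """intra_prediction_filtering_enable_max_cu_depth -> minimum block size.

    Assumes a 64x64 largest coding unit: depth 0 -> 64x64, each additional
    depth halves the size, so depth 4 -> 4x4.
    """
    return max_cu_size >> depth
```

Signaling a log2-based or depth-based value instead of the raw size keeps the indicator small, since only power-of-two block sizes occur.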
  • the encoder may determine that the present invention is not applied to any block size. In this case, the encoder may use a predetermined indicator to transmit the determined information to the decoder.
  • the encoder may include an indicator such as intra_prediction_filtering_enable_flag in the SPS, PPS, and/or slice header and transmit it to the decoder.
  • intra_prediction_filtering_enable_flag may correspond to an indicator indicating whether the present invention is applied to all blocks in a sequence, a picture, and/or a slice.
  • the encoder may transmit information indicating that the present invention is not applied to any block size to the decoder by using the indicator indicating the variable value information (e.g., intra_prediction_filtering_enable_max_cu_depth).
  • the encoder may assign to the indicator a value (e.g., 5) indicating an invalid (and/or not allowed) block size (e.g., 2x2), thereby indicating that the present invention is not applied to any block size.
  • the present invention can improve the prediction efficiency and the coding efficiency by reducing the prediction error occurring during intra prediction and minimizing the discontinuity between blocks.
  • the methods are described based on a flowchart as a series of steps or blocks, but the present invention is not limited to the order of the steps, and certain steps may occur in a different order from, or simultaneously with, other steps described above. Also, one of ordinary skill in the art will understand that the steps shown in the flowcharts are not exclusive, that other steps may be included, or that one or more steps in the flowcharts may be deleted without affecting the scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Finger-Pressure Massage (AREA)
PCT/KR2012/004883 2011-06-20 2012-06-20 Image encoding/decoding method and apparatus for same WO2012177053A2 (ko)

Priority Applications (31)

Application Number Priority Date Filing Date Title
EP12803047.5A EP2723078B1 (en) 2011-06-20 2012-06-20 Image decoding apparatus
BR112014010333-0A BR112014010333B1 (pt) 2011-06-20 2012-06-20 Equipamentos para decodificação de vídeo
US13/983,207 US9332262B2 (en) 2011-06-20 2012-06-20 Image encoding/decoding method and apparatus for same
CN201280011184.0A CN103404151B (zh) 2011-06-20 2012-06-20 图像编码/解码方法和用于其的设备
CA2828462A CA2828462C (en) 2011-06-20 2012-06-20 Image encoding/decoding method and apparatus for same
BR122021025309-9A BR122021025309B1 (pt) 2011-06-20 2012-06-20 Métodos para codificar e decodificar vídeo
DK12803047.5T DK2723078T3 (en) 2011-06-20 2012-06-20 IMAGE ENCODING APPARATUS
IN2639CHN2014 IN2014CN02639A 2011-06-20 2012-06-20
BR122021025319-6A BR122021025319B1 (pt) 2011-06-20 2012-06-20 Equipamento de decodificação de vídeo e método de codificação de vídeo
BR112013021229-2A BR112013021229B1 (pt) 2011-06-20 2012-06-20 Equipamentos de codificação e de decodificação de vídeo
EP17162474.5A EP3217665B1 (en) 2011-06-20 2012-06-20 Video decoding method
EP19197782.6A EP3614668B1 (en) 2011-06-20 2012-06-20 Video decoding method
JP2014516915A JP5976793B2 (ja) 2011-06-20 2012-06-20 映像復号化装置
EP23207377.5A EP4307682A3 (en) 2011-06-20 2012-06-20 Video decoding method
US14/202,943 US9154781B2 (en) 2011-06-20 2014-03-10 Image encoding/decoding method and apparatus for same
US14/220,724 US9225981B2 (en) 2011-06-20 2014-03-20 Image encoding/decoding method and apparatus for same
US14/221,794 US9591327B2 (en) 2011-06-20 2014-03-21 Image encoding/decoding method and apparatus for same
US15/067,764 US9900618B2 (en) 2011-06-20 2016-03-11 Method for generating reconstructed blocks using a filter for an intra prediction mode
US15/069,314 US10021416B2 (en) 2011-06-20 2016-03-14 Method, apparatus, and bitstream for generating reconstructed blocks using filter for intra prediction mode
US15/070,155 US10003820B2 (en) 2011-06-20 2016-03-15 Image encoding/decoding method and apparatus for same
US15/410,388 US10205964B2 (en) 2011-06-20 2017-01-19 Image encoding/decoding method using prediction block and apparatus for same
US16/206,696 US10516897B2 (en) 2011-06-20 2018-11-30 Image encoding/decoding method using prediction block and apparatus for same
US16/205,945 US10536717B2 (en) 2011-06-20 2018-11-30 Image encoding/decoding method using prediction block and apparatus for same
US16/546,930 US10986368B2 (en) 2011-06-20 2019-08-21 Image encoding/decoding method using prediction block and apparatus for same
US16/546,835 US10979735B2 (en) 2011-06-20 2019-08-21 Image encoding/decoding method using prediction block and apparatus for same
US16/546,786 US10904569B2 (en) 2011-06-20 2019-08-21 Image encoding/decoding method using prediction block and apparatus for same
US16/546,795 US10979734B2 (en) 2011-06-20 2019-08-21 Image encoding/decoding method using prediction block and apparatus for same
US17/202,935 US11711541B2 (en) 2011-06-20 2021-03-16 Image encoding/decoding method using prediction block and apparatus for same
US17/221,229 US11689742B2 (en) 2011-06-20 2021-04-02 Image encoding/decoding method using prediction block and apparatus for same
US18/330,440 US20230319308A1 (en) 2011-06-20 2023-06-07 Image encoding/decoding method using prediction block and apparatus for same
US18/340,762 US20230336779A1 (en) 2011-06-20 2023-06-23 Image encoding/decoding method using prediction block and apparatus for same

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
KR10-2011-0059850 2011-06-20
KR20110059850 2011-06-20
KR10-2011-0065708 2011-07-01
KR20110065708 2011-07-01
KR1020110119214A KR20120140181A (ko) 2011-06-20 2011-11-15 Encoding/decoding method using intra prediction block boundary filtering and apparatus therefor
KR10-2011-0119214 2011-11-15
KR1020110125353A KR20120140182A (ko) 2011-06-20 2011-11-28 Encoding/decoding method using intra prediction block boundary filtering and apparatus therefor
KR10-2011-0125353 2011-11-28
KR10-2012-0066206 2012-06-20
KR1020120066206A KR101357640B1 (ko) 2011-06-20 2012-06-20 Image encoding/decoding method and apparatus for same

Related Child Applications (7)

Application Number Title Priority Date Filing Date
US13/983,207 A-371-Of-International US9332262B2 (en) 2011-06-20 2012-06-20 Image encoding/decoding method and apparatus for same
US14/202,943 Continuation US9154781B2 (en) 2011-06-20 2014-03-10 Image encoding/decoding method and apparatus for same
US14/220,724 Continuation US9225981B2 (en) 2011-06-20 2014-03-20 Image encoding/decoding method and apparatus for same
US14/221,794 Continuation US9591327B2 (en) 2011-06-20 2014-03-21 Image encoding/decoding method and apparatus for same
US15/067,764 Continuation US9900618B2 (en) 2011-06-20 2016-03-11 Method for generating reconstructed blocks using a filter for an intra prediction mode
US15/069,314 Continuation US10021416B2 (en) 2011-06-20 2016-03-14 Method, apparatus, and bitstream for generating reconstructed blocks using filter for intra prediction mode
US15/070,155 Continuation US10003820B2 (en) 2011-06-20 2016-03-15 Image encoding/decoding method and apparatus for same

Publications (2)

Publication Number Publication Date
WO2012177053A2 true WO2012177053A2 (ko) 2012-12-27
WO2012177053A3 WO2012177053A3 (ko) 2013-04-04

Family

ID=47906320

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2012/004883 WO2012177053A2 (ko) 2011-06-20 2012-06-20 영상 부호화/복호화 방법 및 그 장치

Country Status (11)

Country Link
US (18) US9332262B2
EP (7) EP4307682A3
JP (12) JP5976793B2
KR (12) KR20120140181A
CN (4) CN103796029B
BR (4) BR122021025319B1
CA (12) CA3011863C
DK (2) DK2723078T3
ES (1) ES2618860T3
IN (1) IN2014CN02639A
WO (1) WO2012177053A2

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014131162A (ja) * 2012-12-28 2014-07-10 Nippon Telegr & Teleph Corp <Ntt> イントラ予測符号化方法、イントラ予測復号方法、イントラ予測符号化装置、イントラ予測復号装置、それらのプログラム並びにプログラムを記録した記録媒体
WO2015002444A1 (ko) * 2013-07-01 2015-01-08 삼성전자 주식회사 필터링을 수반한 비디오 부호화 및 복호화 방법 및 그 장치
US9855720B2 (en) 2013-09-23 2018-01-02 Morphotrust Usa, Llc Unidirectional opacity watermark

Families Citing this family (91)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3567854B1 (en) 2009-03-23 2022-12-14 Ntt Docomo, Inc. Image predictive decoding method
KR20110113561A (ko) * 2010-04-09 2011-10-17 한국전자통신연구원 적응적인 필터를 이용한 인트라 예측 부호화/복호화 방법 및 그 장치
CN105120278B (zh) * 2010-07-20 2016-11-30 株式会社Ntt都科摩 图像预测编码装置及方法、图像预测解码装置及方法
CN105959706B (zh) * 2011-01-12 2021-01-08 三菱电机株式会社 图像编码装置和方法、以及图像译码装置和方法
KR20120140181A (ko) * 2011-06-20 2012-12-28 한국전자통신연구원 화면내 예측 블록 경계 필터링을 이용한 부호화/복호화 방법 및 그 장치
WO2014107073A1 (ko) * 2013-01-04 2014-07-10 삼성전자 주식회사 비디오의 부호화 방법 및 장치, 그 복호화 방법 및 장치
US20140192866A1 (en) * 2013-01-09 2014-07-10 Mitsubishi Electric Research Laboratories, Inc. Data Remapping for Predictive Video Coding
WO2014163209A1 (ja) * 2013-04-05 2014-10-09 シャープ株式会社 画像復号装置
KR102217225B1 (ko) * 2013-04-29 2021-02-18 인텔렉추얼디스커버리 주식회사 인트라 예측 방법 및 장치
WO2014178563A1 (ko) 2013-04-29 2014-11-06 인텔렉추얼 디스커버리 주식회사 인트라 예측 방법 및 장치
US20170155899A1 (en) * 2013-09-07 2017-06-01 Tongji University Image compression method and apparatus using matching
EP3078194B1 (en) 2014-01-02 2019-09-11 HFI Innovation Inc. Method and apparatus for intra prediction coding with boundary filtering control
JP2015216626A (ja) * 2014-04-23 2015-12-03 ソニー株式会社 画像処理装置及び画像処理方法
US9998742B2 (en) * 2015-01-27 2018-06-12 Qualcomm Incorporated Adaptive cross component residual prediction
US20160286224A1 (en) * 2015-03-26 2016-09-29 Thomson Licensing Method and apparatus for generating color mapping parameters for video encoding
CN107615763B (zh) * 2015-05-28 2020-09-11 寰发股份有限公司 一种管理解码图像缓存器的方法及装置
CN115134609A (zh) * 2015-06-11 2022-09-30 杜比实验室特许公司 使用自适应去块滤波编码和解码图像的方法及其装置
US10091506B2 (en) * 2015-06-11 2018-10-02 Sony Corporation Data-charge phase data compression architecture
US10841593B2 (en) 2015-06-18 2020-11-17 Qualcomm Incorporated Intra prediction and intra mode coding
US11463689B2 (en) 2015-06-18 2022-10-04 Qualcomm Incorporated Intra prediction and intra mode coding
KR20180040577A (ko) * 2015-08-17 2018-04-20 엘지전자 주식회사 인트라 예측 모드 기반 영상 처리 방법 및 이를 위한 장치
CN108353185B (zh) * 2015-08-28 2022-09-13 株式会社Kt 用于处理视频信号的方法和设备
US10574984B2 (en) * 2015-09-10 2020-02-25 Lg Electronics Inc. Intra prediction method and device in video coding system
CN108353164B (zh) * 2015-09-11 2022-06-24 株式会社Kt 用于处理视频信号的方法和设备
EP3358848B1 (en) * 2015-09-29 2021-04-21 LG Electronics Inc. Method of filtering image in image coding system
WO2017069591A1 (ko) * 2015-10-23 2017-04-27 엘지전자 주식회사 영상 코딩 시스템에서 영상 필터링 방법 및 장치
WO2017082670A1 (ko) * 2015-11-12 2017-05-18 엘지전자 주식회사 영상 코딩 시스템에서 계수 유도 인트라 예측 방법 및 장치
KR20230143623A (ko) 2016-03-28 2023-10-12 로즈데일 다이나믹스 엘엘씨 인터 예측 모드 기반 영상 처리 방법 및 이를 위한 장치
KR102346713B1 (ko) 2016-04-12 2022-01-03 세종대학교산학협력단 인트라 예측 기반의 비디오 신호 처리 방법 및 장치
WO2017188565A1 (ko) * 2016-04-25 2017-11-02 엘지전자 주식회사 영상 코딩 시스템에서 영상 디코딩 방법 및 장치
WO2017188652A1 (ko) 2016-04-26 2017-11-02 인텔렉추얼디스커버리 주식회사 영상 부호화/복호화 방법 및 장치
WO2017188782A2 (ko) 2016-04-29 2017-11-02 세종대학교 산학협력단 영상 신호 부호화/복호화 방법 및 장치
CN109479142B (zh) * 2016-04-29 2023-10-13 世宗大学校产学协力团 用于对图像信号进行编码/解码的方法和设备
ES2724568B2 (es) 2016-06-24 2021-05-19 Kt Corp Método y aparato para tratar una señal de vídeo
JP6740534B2 (ja) * 2016-07-04 2020-08-19 日本放送協会 符号化装置、復号装置及びプログラム
CN115914625A (zh) * 2016-08-01 2023-04-04 韩国电子通信研究院 图像编码/解码方法
CN116962679A (zh) * 2016-08-31 2023-10-27 株式会社Kt 用于处理视频信号的方法和设备
EP3509299B1 (en) * 2016-09-05 2024-05-01 Rosedale Dynamics LLC Image encoding/decoding method and device therefor
CN116437079A (zh) * 2016-09-20 2023-07-14 株式会社Kt 对视频进行解码和编码的方法以及传输方法
WO2018062702A1 (ko) * 2016-09-30 2018-04-05 엘지전자 주식회사 영상 코딩 시스템에서 인트라 예측 방법 및 장치
CA3227652A1 (en) 2016-10-04 2018-04-12 Kt Corporation Method and apparatus for processing video signal
KR102410424B1 (ko) * 2016-10-04 2022-06-17 한국전자통신연구원 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
JP7356347B2 (ja) 2016-10-04 2023-10-04 エルエックス セミコン カンパニー, リミティド 画像復号方法、画像符号化方法、及び記録媒体
JP6895247B2 (ja) * 2016-10-06 2021-06-30 日本放送協会 符号化装置、復号装置及びプログラム
CN109845254B (zh) 2016-10-14 2024-01-26 世宗大学校产学协力团 影像编码/解码方法及装置
CN116916017A (zh) 2016-10-28 2023-10-20 韩国电子通信研究院 视频编码/解码方法和设备以及存储比特流的记录介质
CN116647677A (zh) 2016-10-28 2023-08-25 韩国电子通信研究院 视频编码/解码方法和设备以及存储比特流的记录介质
CN116320495A (zh) 2016-11-28 2023-06-23 韩国电子通信研究院 用于滤波的方法和装置
CN110024386B (zh) * 2016-11-29 2023-10-20 韩国电子通信研究院 用于对图像进行编码/解码的方法和设备、用于存储比特流的记录介质
CN117061736A (zh) * 2017-01-13 2023-11-14 谷歌有限责任公司 视频代码化的复合预测
US10728548B2 (en) * 2017-04-04 2020-07-28 Futurewei Technologies, Inc. Processing reference samples used for intra-prediction of a picture block
CN110495168B (zh) 2017-04-06 2021-12-07 松下电器(美国)知识产权公司 编码装置、解码装置、编码方法及解码方法
CN108881907A (zh) * 2017-05-16 2018-11-23 富士通株式会社 用于视频编解码的像素滤波方法和装置及视频编码方法
CN116828206A (zh) * 2017-05-17 2023-09-29 株式会社Kt 用于解码视频的方法和用于编码视频的方法
HUE056668T2 (hu) * 2017-05-31 2022-02-28 Lg Electronics Inc Eljárás és eszköz kép dekódolásának elvégzésére intra-predikció alapján képkódoló rendszerben
JP6770192B2 (ja) * 2017-06-01 2020-10-14 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 符号化装置、符号化方法、復号装置及び復号方法
CN117354542A (zh) * 2017-07-06 2024-01-05 Lx 半导体科技有限公司 图像解码设备、图像编码设备和用于发送图像数据的设备
CN111034196B (zh) 2017-08-21 2023-11-17 韩国电子通信研究院 用于对视频进行编码/解码的方法和设备以及存储比特流的记录介质
US10706492B2 (en) * 2017-09-05 2020-07-07 Texas Instruments Incorporated Image compression/decompression in a computer vision system
EP3454556A1 (en) * 2017-09-08 2019-03-13 Thomson Licensing Method and apparatus for video encoding and decoding using pattern-based block filtering
CN111247799B (zh) * 2017-10-18 2022-08-09 韩国电子通信研究院 图像编码/解码方法和装置以及存储有比特流的记录介质
CN116156165A (zh) * 2017-10-31 2023-05-23 三星电子株式会社 图像编码方法、图像解码方法及其装置
CN107801024B (zh) 2017-11-09 2019-07-12 北京大学深圳研究生院 一种用于帧内预测的边界滤波方法
CN111434109A (zh) * 2017-11-28 2020-07-17 韩国电子通信研究院 图像编码/解码方法和装置以及存储有比特流的记录介质
CN107896330B (zh) * 2017-11-29 2019-08-13 北京大学深圳研究生院 一种用于帧内和帧间预测的滤波方法
KR20210111323A (ko) 2018-01-15 2021-09-10 주식회사 비원영상기술연구소 색차 성분에 관한 화면내 예측 부호화/복호화 방법 및 장치
WO2019203559A1 (ko) * 2018-04-17 2019-10-24 엘지전자 주식회사 영상 코딩 시스템에서 리그레션 모델 기반 필터링을 사용하는 영상 디코딩 방법 및 장치
US10848696B2 (en) * 2018-06-01 2020-11-24 Samsung Electronics Co., Ltd. Apparatus for encoding image, apparatus for decoding image and image sensor
EP3808079A1 (en) * 2018-06-18 2021-04-21 InterDigital VC Holdings, Inc. Boundary filtering for planar and dc modes in intra prediction
US20210144402A1 (en) * 2018-06-21 2021-05-13 Kt Corporation Video signal processing method and device
AU2019292266B2 (en) * 2018-06-25 2023-02-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Intra-frame prediction method and device
CN110650349B (zh) 2018-06-26 2024-02-13 中兴通讯股份有限公司 一种图像编码方法、解码方法、编码器、解码器及存储介质
US11277644B2 (en) 2018-07-02 2022-03-15 Qualcomm Incorporated Combining mode dependent intra smoothing (MDIS) with intra interpolation filter switching
US10630979B2 (en) * 2018-07-16 2020-04-21 Tencent America LLC Reference sample padding and filtering for intra prediction in video compression
KR102483942B1 (ko) * 2018-07-16 2022-12-30 후아웨이 테크놀러지 컴퍼니 리미티드 비디오 인코더, 비디오 디코더 및 대응하는 인코딩 및 디코딩 방법
KR20200028856A (ko) * 2018-09-07 2020-03-17 김기백 인트라 예측을 이용한 영상 부호화/복호화 방법 및 장치
WO2020060261A1 (ko) * 2018-09-20 2020-03-26 한국전자통신연구원 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
US11159789B2 (en) 2018-10-24 2021-10-26 City University Of Hong Kong Generative adversarial network based intra prediction for video coding
US11303885B2 (en) 2018-10-25 2022-04-12 Qualcomm Incorporated Wide-angle intra prediction smoothing and interpolation
WO2020130628A1 (ko) * 2018-12-18 2020-06-25 엘지전자 주식회사 다중 참조 라인 인트라 예측에 기반한 영상 코딩 방법 및 그 장치
BR112020025145A2 (pt) * 2019-01-10 2021-07-20 Huawei Technologies Co., Ltd. filtro de desbloqueio para fronteiras de subpartição causadas por ferramenta de codificação de subpartição intra
JP2022065225A (ja) * 2019-03-08 2022-04-27 シャープ株式会社 Lic部、画像復号装置および画像符号化装置
KR20210131395A (ko) 2019-03-12 2021-11-02 광동 오포 모바일 텔레커뮤니케이션즈 코포레이션 리미티드 인트라 예측 방법 및 장치, 컴퓨터 판독가능 저장 매체
MX2021012370A (es) * 2019-04-10 2022-01-18 Electronics & Telecommunications Res Inst Método y dispositivo para señalizar señales relacionadas con modo de predicción en intra predicción.
EP4018651A4 (en) * 2019-09-02 2023-02-22 Huawei Technologies Co., Ltd. ENCODER, DECODER AND RELATED METHODS OF FILTER MODIFICATION IN A GENERAL INTRA PREDICTION PROCESS
CN111787334B (zh) * 2020-05-29 2021-09-14 浙江大华技术股份有限公司 一种用于帧内预测的滤波方法,滤波器及装置
CN114339224B (zh) * 2020-06-05 2022-12-23 杭州海康威视数字技术股份有限公司 图像增强方法、装置及机器可读存储介质
CN111669584B (zh) * 2020-06-11 2022-10-28 浙江大华技术股份有限公司 一种帧间预测滤波方法、装置和计算机可读存储介质
JP7104101B2 (ja) * 2020-06-24 2022-07-20 日本放送協会 符号化装置、復号装置及びプログラム
DE102021117397A1 (de) * 2020-07-16 2022-01-20 Samsung Electronics Co., Ltd. Bildsensormodul, bildverarbeitungssystem und bildkomprimierverfahren
KR102189259B1 (ko) * 2020-08-20 2020-12-09 인텔렉추얼디스커버리 주식회사 인트라 예측 방법 및 장치

Family Cites Families (81)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2688369B1 (fr) * 1992-03-03 1996-02-09 Thomson Csf Procédé de codage d'images à très bas débit et dispositif de codage-décodage mettant en œuvre ce procédé.
US6148109A (en) * 1996-05-28 2000-11-14 Matsushita Electric Industrial Co., Ltd. Image predictive coding method
US6157676A (en) * 1997-07-31 2000-12-05 Victor Company Of Japan Digital video signal inter-block interpolative predictive encoding/decoding apparatus and method providing high efficiency of encoding
AU717480B2 (en) * 1998-08-01 2000-03-30 Korea Advanced Institute Of Science And Technology Loop-filtering method for image data and apparatus therefor
WO2002067589A1 (en) * 2001-02-23 2002-08-29 Seiko Epson Corporation Image processing system, image processing method, and image processing program
CN101448162B (zh) * 2001-12-17 2013-01-02 微软公司 处理视频图像的方法
US7386048B2 (en) * 2002-05-28 2008-06-10 Sharp Laboratories Of America, Inc. Methods and systems for image intra-prediction mode organization
US7372999B2 (en) * 2002-09-09 2008-05-13 Ricoh Company, Ltd. Image coder and image decoder capable of power-saving control in image compression and decompression
US7227901B2 (en) * 2002-11-21 2007-06-05 Ub Video Inc. Low-complexity deblocking filter
JP4474288B2 (ja) * 2003-01-10 2010-06-02 トムソン ライセンシング 符号化された画像における誤り隠蔽のための補間フィルタの定義
US7457362B2 (en) * 2003-10-24 2008-11-25 Texas Instruments Incorporated Loop deblock filtering of block coded video in a very long instruction word processor
KR101000926B1 (ko) 2004-03-11 2010-12-13 삼성전자주식회사 영상의 불연속성을 제거하기 위한 필터 및 필터링 방법
US7539248B2 (en) * 2004-04-29 2009-05-26 Mediatek Incorporation Adaptive de-blocking filtering apparatus and method for MPEG video decoder
KR101204788B1 (ko) * 2004-06-03 2012-11-26 삼성전자주식회사 영상의 공간 예측 부호화 방법, 부호화 장치, 복호화 방법및 복호화 장치
JP4050754B2 (ja) * 2005-03-23 2008-02-20 株式会社東芝 ビデオエンコーダ及び動画像信号の符号化方法
KR101246294B1 (ko) * 2006-03-03 2013-03-21 삼성전자주식회사 영상의 인트라 예측 부호화, 복호화 방법 및 장치
KR100882949B1 (ko) * 2006-08-17 2009-02-10 한국전자통신연구원 화소 유사성에 따라 적응적인 이산 코사인 변환 계수스캐닝을 이용한 부호화/복호화 장치 및 그 방법
KR101312260B1 (ko) 2007-01-19 2013-09-25 삼성전자주식회사 에지 영역을 효과적으로 압축하고 복원하는 방법 및 장치
CN103281542B (zh) * 2007-06-29 2017-07-14 夏普株式会社 图像编码装置、图像编码方法、图像译码装置、图像译码方法
CN101409833B (zh) * 2007-10-12 2012-10-03 昆山杰得微电子有限公司 去块效应滤波装置及方法
US8576906B2 (en) * 2008-01-08 2013-11-05 Telefonaktiebolaget L M Ericsson (Publ) Adaptive filtering
JP2009194617A (ja) * 2008-02-14 2009-08-27 Sony Corp 画像処理装置、画像処理方法、画像処理方法のプログラム及び画像処理方法のプログラムを記録した記録媒体
KR101460608B1 (ko) * 2008-03-04 2014-11-14 삼성전자주식회사 필터링된 예측 블록을 이용한 영상 부호화, 복호화 방법 및장치
KR101379187B1 (ko) * 2008-06-23 2014-04-15 에스케이 텔레콤주식회사 블록 변환을 이용한 인트라 예측 방법 및 장치와 그를이용한 영상 부호화/복호화 방법 및 장치
KR101517768B1 (ko) * 2008-07-02 2015-05-06 삼성전자주식회사 영상의 부호화 방법 및 장치, 그 복호화 방법 및 장치
CN101321290B (zh) * 2008-07-17 2010-12-15 北京数码视讯科技股份有限公司 基于数字信号处理器的去块滤波方法
KR101590500B1 (ko) * 2008-10-23 2016-02-01 에스케이텔레콤 주식회사 동영상 부호화/복호화 장치, 이를 위한 인트라 예측 방향에기반한 디블록킹 필터링 장치 및 필터링 방법, 및 기록 매체
US8295360B1 (en) * 2008-12-23 2012-10-23 Elemental Technologies, Inc. Method of efficiently implementing a MPEG-4 AVC deblocking filter on an array of parallel processors
US8514942B2 (en) * 2008-12-31 2013-08-20 Entropic Communications, Inc. Low-resolution video coding content extraction
JPWO2010087157A1 (ja) 2009-01-29 2012-08-02 パナソニック株式会社 画像符号化方法及び画像復号方法
JP2010183162A (ja) * 2009-02-03 2010-08-19 Mitsubishi Electric Corp 動画像符号化装置
KR101379185B1 (ko) 2009-04-14 2014-03-31 에스케이 텔레콤주식회사 예측 모드 선택 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치
JP5169978B2 (ja) * 2009-04-24 2013-03-27 ソニー株式会社 画像処理装置および方法
JP5597968B2 (ja) * 2009-07-01 2014-10-01 ソニー株式会社 画像処理装置および方法、プログラム、並びに記録媒体
KR101510108B1 (ko) * 2009-08-17 2015-04-10 삼성전자주식회사 영상의 부호화 방법 및 장치, 그 복호화 방법 및 장치
KR101302660B1 (ko) * 2009-09-14 2013-09-03 에스케이텔레콤 주식회사 고해상도 동영상의 부호화/복호화 방법 및 장치
US9277227B2 (en) * 2009-10-22 2016-03-01 Thomas Licensing Methods and apparatus for DC intra prediction mode for video encoding and decoding
CN101710990A (zh) 2009-11-10 2010-05-19 华为技术有限公司 视频图像编码处理、解码处理方法和装置及编解码系统
KR20110054244A (ko) * 2009-11-17 2011-05-25 삼성전자주식회사 미디언 필터를 이용한 깊이영상 부호화의 인트라 예측 장치 및 방법
KR101623124B1 (ko) 2009-12-03 2016-05-24 에스케이 텔레콤주식회사 비디오 인코딩 장치 및 그 인코딩 방법, 비디오 디코딩 장치 및 그 디코딩 방법, 및 거기에 이용되는 방향적 인트라 예측방법
CN101783957B (zh) * 2010-03-12 2012-04-18 清华大学 一种视频预测编码方法和装置
KR101503269B1 (ko) * 2010-04-05 2015-03-17 삼성전자주식회사 영상 부호화 단위에 대한 인트라 예측 모드 결정 방법 및 장치, 및 영상 복호화 단위에 대한 인트라 예측 모드 결정 방법 및 장치
KR20110113561A (ko) 2010-04-09 2011-10-17 한국전자통신연구원 적응적인 필터를 이용한 인트라 예측 부호화/복호화 방법 및 그 장치
US8619857B2 (en) * 2010-04-09 2013-12-31 Sharp Laboratories Of America, Inc. Methods and systems for intra prediction
US8644375B2 (en) * 2010-04-09 2014-02-04 Sharp Laboratories Of America, Inc. Methods and systems for intra prediction
WO2011127964A2 (en) * 2010-04-13 2011-10-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus for intra predicting a block, apparatus for reconstructing a block of a picture, apparatus for reconstructing a block of a picture by intra prediction
WO2011129619A2 (ko) * 2010-04-13 2011-10-20 삼성전자 주식회사 트리 구조 부호화 단위에 기반한 디블록킹 필터링을 수행하는 비디오 부호화 방법과 그 장치 및 복호화 방법과 그 장치
KR101791242B1 (ko) * 2010-04-16 2017-10-30 에스케이텔레콤 주식회사 영상 부호화/복호화 장치 및 방법
KR101885258B1 (ko) * 2010-05-14 2018-08-06 삼성전자주식회사 비디오 신호의 부호화 방법과 그 장치, 및 비디오 복호화 방법과 그 장치
US20110317757A1 (en) * 2010-06-25 2011-12-29 Qualcomm Incorporated Intra prediction mode signaling for finer spatial prediction directions
EP2942957A1 (en) * 2010-07-02 2015-11-11 HUMAX Holdings Co., Ltd. Apparatus for decoding images for intra-prediction
US9172968B2 (en) * 2010-07-09 2015-10-27 Qualcomm Incorporated Video coding using directional transforms
ES2729031T3 (es) * 2010-07-14 2019-10-29 Ntt Docomo Inc Intra-predicción de baja complejidad para codificación de vídeo
CN105227958B (zh) * 2010-07-20 2019-06-25 Sk电信有限公司 用于解码视频信号的解码装置
KR20120012385A (ko) * 2010-07-31 2012-02-09 오수미 인트라 예측 부호화 장치
KR101373814B1 (ko) * 2010-07-31 2014-03-18 엠앤케이홀딩스 주식회사 예측 블록 생성 장치
US9716886B2 (en) * 2010-08-17 2017-07-25 M&K Holdings Inc. Method for restoring an intra prediction mode
US9008175B2 (en) * 2010-10-01 2015-04-14 Qualcomm Incorporated Intra smoothing filter for video coding
EP3833025A1 (en) * 2010-10-08 2021-06-09 GE Video Compression, LLC Picture coding supporting block partitioning and block merging
KR20120039388A (ko) * 2010-10-15 2012-04-25 에스케이하이닉스 주식회사 반도체 소자의 제조 방법
US20130215963A1 (en) * 2010-10-26 2013-08-22 Humax Co., Ltd. Adaptive intra-prediction encoding and decoding method
CN106851320B (zh) * 2010-11-04 2020-06-02 Ge视频压缩有限责任公司 数字存储介质、解码比特流的方法
CN107181950B (zh) * 2010-12-08 2020-11-06 Lg 电子株式会社 一种执行内预测的编码装置和解码装置
CN105959706B (zh) 2011-01-12 2021-01-08 三菱电机株式会社 图像编码装置和方法、以及图像译码装置和方法
WO2012115420A2 (ko) 2011-02-23 2012-08-30 엘지전자 주식회사 필터링을 이용한 화면 내 예측 방법 및 이러한 방법을 사용하는 장치
EP3703368B1 (en) * 2011-03-06 2022-11-02 LG Electronics Inc. Intra prediction method for chrominance blocks
CN107249131B (zh) * 2011-03-30 2020-04-24 Lg 电子株式会社 视频解码装置和视频编码装置
US9042458B2 (en) * 2011-04-01 2015-05-26 Microsoft Technology Licensing, Llc Multi-threaded implementations of deblock filtering
WO2012134046A2 (ko) * 2011-04-01 2012-10-04 주식회사 아이벡스피티홀딩스 동영상의 부호화 방법
CN106851270B (zh) * 2011-04-25 2020-08-28 Lg电子株式会社 执行帧内预测的编码设备和解码设备
CA2772894A1 (en) * 2011-05-17 2012-11-17 Her Majesty The Queen In Right Of Canada, As Represented By The Ministerof Industry, Through The Communications Research Centre Canada Image and video encoding and decoding
KR101383775B1 (ko) * 2011-05-20 2014-04-14 주식회사 케이티 화면 내 예측 방법 및 장치
KR20120140181A (ko) * 2011-06-20 2012-12-28 한국전자통신연구원 화면내 예측 블록 경계 필터링을 이용한 부호화/복호화 방법 및 그 장치
EP2824926B1 (en) 2011-06-24 2021-04-14 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, image decoding method
CN103765886B (zh) * 2011-08-29 2017-06-13 苗太平洋控股有限公司 以amvp模式产生预测区块的方法
BR122020018116B1 (pt) * 2012-01-17 2023-11-21 Gensquare Llc Método para aplicar um deslocamento de borda
CN110868588B (zh) * 2012-01-18 2023-09-15 韩国电子通信研究院 视频解码装置、视频编码装置和计算机可读记录介质
WO2014166338A1 (en) * 2013-04-11 2014-10-16 Mediatek Inc. Method and apparatus for prediction value derivation in intra coding
US9451254B2 (en) * 2013-07-19 2016-09-20 Qualcomm Incorporated Disabling intra prediction filtering
US9883197B2 (en) * 2014-01-09 2018-01-30 Qualcomm Incorporated Intra prediction of chroma blocks using the same vector
US20160105685A1 (en) * 2014-10-08 2016-04-14 Qualcomm Incorporated Boundary filtering and cross-component prediction in video coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014131162A (ja) * 2012-12-28 2014-07-10 Nippon Telegr & Teleph Corp <Ntt> イントラ予測符号化方法、イントラ予測復号方法、イントラ予測符号化装置、イントラ予測復号装置、それらのプログラム並びにプログラムを記録した記録媒体
WO2015002444A1 (ko) * 2013-07-01 2015-01-08 삼성전자 주식회사 필터링을 수반한 비디오 부호화 및 복호화 방법 및 그 장치
US10003805B2 (en) 2013-07-01 2018-06-19 Samsung Electronics Co., Ltd. Video encoding and decoding method accompanied with filtering, and device thereof
US9855720B2 (en) 2013-09-23 2018-01-02 Morphotrust Usa, Llc Unidirectional opacity watermark

Also Published As

Publication number Publication date
US9591327B2 (en) 2017-03-07
KR101451922B1 (ko) 2014-10-23
US9225981B2 (en) 2015-12-29
BR112013021229A2 (pt) 2019-08-13
BR112013021229B1 (pt) 2022-05-24
ES2618860T3 (es) 2017-06-22
KR20130106337A (ko) 2013-09-27
CN103780911A (zh) 2014-05-07
US20130301720A1 (en) 2013-11-14
CA2828462A1 (en) 2012-12-27
EP2723078A2 (en) 2014-04-23
CA2910612A1 (en) 2012-12-27
KR20120140182A (ko) 2012-12-28
JP2016040934A (ja) 2016-03-24
CA3081215A1 (en) 2012-12-27
CA3081215C (en) 2023-02-07
JP2022031647A (ja) 2022-02-22
US10021416B2 (en) 2018-07-10
BR112014010333B1 (pt) 2022-05-24
JP5982421B2 (ja) 2016-08-31
CN103404151B (zh) 2015-05-20
CA3026266A1 (en) 2012-12-27
US10003820B2 (en) 2018-06-19
US20210203986A1 (en) 2021-07-01
CA3011853A1 (en) 2012-12-27
US20140205000A1 (en) 2014-07-24
KR20140066680A (ko) 2014-06-02
US20160198191A1 (en) 2016-07-07
CN103780911B (zh) 2019-01-25
BR122021025319B1 (pt) 2022-12-06
US20140192873A1 (en) 2014-07-10
EP2757791B1 (en) 2016-12-21
US20160198172A1 (en) 2016-07-07
BR122021025309B1 (pt) 2022-12-06
WO2012177053A3 (ko) 2013-04-04
CA3185432A1 (en) 2012-12-27
CA3026266C (en) 2020-03-10
JP6666968B2 (ja) 2020-03-18
KR101451921B1 (ko) 2014-10-23
US10979734B2 (en) 2021-04-13
DK2723078T3 (en) 2017-08-28
US9332262B2 (en) 2016-05-03
IN2014CN02639A 2015-08-07
KR20140066679A (ko) 2014-06-02
EP3217665A1 (en) 2017-09-13
US9900618B2 (en) 2018-02-20
CA3026271A1 (en) 2012-12-27
CA3011871A1 (en) 2012-12-27
CA2910612C (en) 2018-08-28
CA2944541C (en) 2019-01-15
JP2014161038A (ja) 2014-09-04
JP2019004491A (ja) 2019-01-10
US10986368B2 (en) 2021-04-20
US20210227256A1 (en) 2021-07-22
US11689742B2 (en) 2023-06-27
CN103796029A (zh) 2014-05-14
KR101488497B1 (ko) 2015-01-30
US10516897B2 (en) 2019-12-24
JP2014520476A (ja) 2014-08-21
EP4307682A3 (en) 2024-04-17
CA3011847C (en) 2021-10-12
JP7241148B2 (ja) 2023-03-16
CA3026271C (en) 2020-03-10
BR112014010333A2 (pt) 2017-10-10
JP7053696B2 (ja) 2022-04-12
KR20120140181A (ko) 2012-12-28
KR101451919B1 (ko) 2014-10-23
US20190379905A1 (en) 2019-12-12
EP2747432A1 (en) 2014-06-25
DK3217665T3 (da) 2019-12-16
CA2828462C (en) 2016-11-22
US20230319308A1 (en) 2023-10-05
JP2023065635A (ja) 2023-05-12
US20190110072A1 (en) 2019-04-11
KR20120140222A (ko) 2012-12-28
JP2014161039A (ja) 2014-09-04
JP2014161037A (ja) 2014-09-04
JP2018129815A (ja) 2018-08-16
CN103404151A (zh) 2013-11-20
CN103796030B (zh) 2016-08-17
CN103796029B (zh) 2016-05-18
US10904569B2 (en) 2021-01-26
US20170134748A1 (en) 2017-05-11
EP3614668B1 (en) 2023-12-20
CA3011851A1 (en) 2012-12-27
US10536717B2 (en) 2020-01-14
KR101451924B1 (ko) 2014-10-23
KR101451918B1 (ko) 2014-10-23
JP2017011735A (ja) 2017-01-12
CA3011863C (en) 2020-06-30
US20190379908A1 (en) 2019-12-12
US11711541B2 (en) 2023-07-25
US10979735B2 (en) 2021-04-13
JP6422922B2 (ja) 2018-11-14
US20230336779A1 (en) 2023-10-19
EP4307682A2 (en) 2024-01-17
JP2020080568A (ja) 2020-05-28
KR20130106336A (ko) 2013-09-27
EP2757791A3 (en) 2014-07-30
EP2747433A1 (en) 2014-06-25
EP3614668A1 (en) 2020-02-26
KR101357640B1 (ko) 2014-02-05
CA3011871C (en) 2021-10-12
US9154781B2 (en) 2015-10-06
KR20130106338A (ko) 2013-09-27
CA3011847A1 (en) 2012-12-27
JP7097192B2 (ja) 2022-07-07
CN103796030A (zh) 2014-05-14
JP5976793B2 (ja) 2016-08-24
KR20140062454A (ko) 2014-05-23
CA3011863A1 (en) 2012-12-27
EP2757791A2 (en) 2014-07-23
KR20140066678A (ko) 2014-06-02
US20190379906A1 (en) 2019-12-12
US10205964B2 (en) 2019-02-12
US20190110073A1 (en) 2019-04-11
EP3217665B1 (en) 2019-11-06
KR101451920B1 (ko) 2014-10-23
US20140204998A1 (en) 2014-07-24
JP2016040935A (ja) 2016-03-24
KR20140066681A (ko) 2014-06-02
US20190379907A1 (en) 2019-12-12
KR101357641B1 (ko) 2014-02-05
US20160198189A1 (en) 2016-07-07
EP2723078B1 (en) 2017-05-03
KR101451923B1 (ko) 2014-10-23
CA3011851C (en) 2020-06-30
EP2723078A4 (en) 2014-06-18
CA2944541A1 (en) 2012-12-27
CA3011853C (en) 2021-01-05
KR20140062455A (ko) 2014-05-23

Similar Documents

Publication Publication Date Title
JP7241148B2 (ja) Video encoding/decoding method and apparatus therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12803047

Country of ref document: EP

Kind code of ref document: A2

REEP Request for entry into the european phase

Ref document number: 2012803047

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2012803047

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 13983207

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2828462

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2014516915

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 122021025319

Country of ref document: BR

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112013021229

Country of ref document: BR

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112014010333

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 112014010333

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20140429

ENP Entry into the national phase

Ref document number: 112013021229

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20130820