WO2014107073A1 - Method and apparatus for encoding video, and method and apparatus for decoding said video - Google Patents

Method and apparatus for encoding video, and method and apparatus for decoding said video

Info

Publication number
WO2014107073A1
WO2014107073A1 (application PCT/KR2014/000108, KR2014000108W)
Authority
WO
WIPO (PCT)
Prior art keywords
intra prediction
neighboring pixels
coding unit
unit
current block
Prior art date
Application number
PCT/KR2014/000108
Other languages
English (en)
Korean (ko)
Inventor
민정혜
이태미
Original Assignee
삼성전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자 주식회사
Publication of WO2014107073A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 - Filters, e.g. for pre-processing or post-processing
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component

Definitions

  • a picture is divided into macroblocks in order to encode an image.
  • each macroblock is encoded in all encoding modes available for inter prediction and intra prediction, and then one encoding mode is selected according to the bit rate required for encoding the macroblock and the degree of distortion between the original macroblock and the decoded macroblock; the macroblock is then encoded in the selected mode.
  • FIG. 8 illustrates encoding information according to depths, according to an embodiment of the present invention.
  • FIG. 21 is a flowchart of a video encoding method, according to an embodiment.
  • image data in the maximum coding unit is encoded based on deeper coding units according to at least one depth less than or equal to the maximum depth, and the encoding results based on the coding units at each depth are compared. By comparing the encoding errors of the coding units according to depths, a depth having the smallest encoding error may be selected. At least one coded depth may be determined for each maximum coding unit.
  • the partition type may include not only symmetric partitions, in which the height or width of the prediction unit is split in a symmetric ratio, but also partitions split in an asymmetric ratio such as 1:n or n:1, partitions split in geometric forms, partitions of arbitrary shapes, and the like.
  • a transform depth indicating the number of times the height and width of the coding unit are split to reach the transform unit may be set. For example, if the transform unit of a current coding unit of size 2Nx2N has size 2Nx2N, the transform depth may be 0; if the transform unit has size NxN, the transform depth may be 1; and if the transform unit has size N/2xN/2, the transform depth may be 2. That is, transform units having a tree structure may also be set according to the transform depth (a minimal transform-depth sketch follows this list).
  • the receiver 205 receives and parses a bitstream of an encoded video.
  • the image data and encoding information extractor 220 extracts image data encoded for each coding unit from the parsed bitstream according to coding units having a tree structure for each maximum coding unit, and outputs the encoded image data to the image data decoder 230.
  • the image data and encoding information extractor 220 may extract information about a maximum size of a coding unit of the current picture from a header for the current picture.
  • the information about the coded depth and the encoding mode for each maximum coding unit, extracted by the image data and encoding information extractor 220, may have been encoded according to depths for each maximum coding unit, as in the video encoding apparatus 100 according to an embodiment.
  • since the coding units 325 of the video data 320 are split three times from the maximum coding unit having a long-axis size of 64 and the depth is deepened by three layers, coding units with long-axis sizes of 32, 16, and 8 may be included. As the depth deepens, the capacity to express detailed information may be improved.
  • the components of the image encoder 400, that is, the intra predictor 410, the motion estimator 420, the motion compensator 425, the frequency transformer 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse frequency transformer 470, the deblocking unit 480, and the loop filtering unit 490, must all perform, for every maximum coding unit, operations based on each coding unit among the coding units having a tree structure in consideration of the maximum depth.
  • in particular, the intra predictor 410, the motion estimator 420, and the motion compensator 425 must determine the partitions and the prediction mode of each coding unit among the coding units having a tree structure in consideration of the maximum size and maximum depth of the current maximum coding unit, and the frequency transformer 430 must determine the size of the transform unit in each coding unit among the coding units having a tree structure.
  • FIG. 5 is a block diagram of an image decoder based on coding units, according to an embodiment of the present invention.
  • the bitstream 505 passes through the parser 510, and the encoded image data to be decoded and the encoding information necessary for decoding are parsed.
  • the encoded image data is output as inverse quantized data through the entropy decoder 520 and the inverse quantizer 530, and the image data of the spatial domain is restored through the frequency inverse transformer 540.
  • Data in the spatial domain that has passed through the intra predictor 550 and the motion compensator 560 may be post-processed through the deblocking unit 570 and the loop filtering unit 580 to be output to the reconstructed frame 595.
  • the post-processed data through the deblocking unit 570 and the loop filtering unit 580 may be output as the reference frame 585.
  • the intra predictor 550 and the motion compensator 560 must determine the partitions and prediction modes for each coding unit having a tree structure, and the inverse frequency transformer 540 must determine the size of the transform unit for each coding unit.
  • the prediction unit of the coding unit 630 of size 16x16 and depth 2 may be split into partitions included in the coding unit 630, that is, a partition 630 of size 16x16, partitions 632 of size 16x8, partitions 634 of size 8x16, and partitions 636 of size 8x8.
  • the prediction unit of the coding unit 640 of size 8x8 and depth 3 may be split into partitions included in the coding unit 640, that is, a partition 640 of size 8x8, partitions 642 of size 8x4, partitions 644 of size 4x8, and partitions 646 of size 4x4.
  • the data of the coding unit 710 of size 64x64 may be encoded by performing frequency transformation on each of the 32x32, 16x16, 8x8, and 4x4 transform units having a size of 64x64 or less, and then the transform unit having the smallest error with respect to the original may be selected.
  • the output unit 130 of the video encoding apparatus 100 may encode and transmit, as information about an encoding mode, information 800 about a partition type, information 810 about a prediction mode, and information 820 about the size of the transform unit, for each coding unit of each coded depth.
  • for each partition type, prediction encoding must be performed repeatedly on one partition of size 2N_0x2N_0, two partitions of size 2N_0xN_0, two partitions of size N_0x2N_0, and four partitions of size N_0xN_0.
  • for partitions of size 2N_0x2N_0, N_0x2N_0, 2N_0xN_0, and N_0xN_0, prediction encoding may be performed in an intra mode and an inter mode. The skip mode may be performed only for prediction encoding on the partition of size 2N_0x2N_0.
  • prediction encoding is repeatedly performed on one partition of size 2N_(d-1)x2N_(d-1), two partitions of size 2N_(d-1)xN_(d-1), two partitions of size N_(d-1)x2N_(d-1), and four partitions of size N_(d-1)xN_(d-1), so that the partition type having the minimum encoding error may be searched for.
  • FIGS. 10, 11, and 12 illustrate a relationship between coding units, prediction units, and frequency transform units, according to an embodiment of the present invention.
  • the coding units 1010 are the coding units according to coded depths determined, for the maximum coding unit, by the video encoding apparatus 100 according to an embodiment.
  • the prediction units 1060 are partitions of the prediction units of each of the coding units 1010 according to coded depths, and the transform units 1070 are the transform units of each of the coding units according to coded depths.
  • the partition type information indicates the symmetric partition types 2Nx2N, 2NxN, Nx2N, and NxN, in which the height or width of the prediction unit is split in a symmetric ratio, and the asymmetric partition types 2NxnU, 2NxnD, nLx2N, and nRx2N, which are split in an asymmetric ratio.
  • the asymmetric partition types 2NxnU and 2NxnD split the height in ratios of 1:3 and 3:1, respectively, and the asymmetric partition types nLx2N and nRx2N split the width in ratios of 1:3 and 3:1, respectively.
  • when the encoding information held by each adjacent data unit is checked, it may be determined whether the adjacent data units are included in the same coding unit of the corresponding coded depth.
  • since the coding unit of the corresponding coded depth may be identified by using the encoding information held by the data unit, the distribution of coded depths within the maximum coding unit may be inferred.
  • when prediction encoding is performed by referring to a neighboring coding unit, data adjacent to the current coding unit within the deeper coding units may be searched for by using the encoding information of the adjacent deeper coding units, and the neighboring coding unit may thus be referred to.
  • FIG. 13 illustrates a relationship between a coding unit, a prediction unit, and a transformation unit, according to encoding mode information of Table 1.
  • Table 2 is only one example, and whether to use the filtered neighboring pixels according to various block sizes and intra prediction modes may be set in other ways.
  • for the chrominance component, the threshold (Thres_val) has a smaller value than that of the luminance component; therefore, in the case of the chrominance component, the filtered neighboring pixels are used as reference pixels for intra prediction.
  • as shown in the following pseudo code, the reference pixel determiner 1420 may determine, based on the size of the chrominance component block and the intra prediction mode, the neighboring pixels to be used for intra prediction of the chrominance component block from among the reconstructed neighboring pixels, the neighboring pixels filtered once, and the neighboring pixels filtered twice.
  • Thres_val = 6 // 4x4 block
  • the neighboring pixels filtered twice are used as the reference pixels (see the reference-pixel selection sketch after this list).
  • a greater number of intra prediction modes may be used than in conventional H.264/AVC.
  • a total of 35 intra prediction modes may be used for a block of luminance components.
  • FIG. 15 illustrates the prediction mode indices allocated according to the intra prediction modes.
  • the intra prediction mode 0 is a planar mode
  • the intra prediction mode 1 is a DC mode
  • the intra prediction modes 2 to 34 are intra prediction modes having directionalities as illustrated in FIG. 15.
  • for a block of the chrominance component, an Intra_FromLuma mode that uses the intra prediction mode of the luminance component may be added.
  • the prediction mode index of the Intra_FromLuma mode is assigned a value of 36.
  • FIG. 18 illustrates a current block and the neighboring pixels used for intra prediction, according to an embodiment of the present invention.
  • FIG. 19 is a reference diagram for describing a filtering process of neighboring pixels, according to an exemplary embodiment of the present invention.
  • FIG. 20 illustrates neighboring pixels to be filtered.
  • the filtering unit 1410 may generate second-filtered neighboring pixels ContextFiltered2[n] by again computing a weighted average among the first-filtered neighboring pixels ContextFiltered1[n].
  • the filtering unit 1210 generates the second-filtered neighboring pixels by applying a 3-tap filter to the first-filtered neighboring pixels ContextFiltered1[n], as shown in Equation 2 (see the 3-tap filtering sketch after this list).
  • the reference pixel determiner 1420 determines the neighboring pixels to be used for intra prediction of the current block, from among the filtered neighboring pixels and the original neighboring pixels, based on the size of the current block and the intra prediction mode to be performed. As illustrated in Tables 2 to 4, independently of the process of determining reference pixels during intra prediction of the luminance component block, the reference pixel determiner 1420 determines the neighboring pixels to be used in intra prediction of the chrominance component block, from among the original neighboring pixels and the at-least-once-filtered neighboring pixels, based on the size of the chrominance component block and the intra prediction mode to be applied.
  • the intra prediction execution unit 1430 generates a prediction value by performing intra prediction on the current block according to the intra prediction mode information, using the neighboring pixels determined by the reference pixel determiner 1420.
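To make the transform-depth example above concrete, the following is a minimal sketch (not taken from the patent text) that counts how many times the coding unit's width and height are halved to reach the transform unit. The function name and the power-of-two size assumption are illustrative only.

```cpp
#include <cassert>

// Hypothetical helper: number of halvings of the coding unit's height/width
// needed to reach the transform unit (2Nx2N -> 0, NxN -> 1, N/2xN/2 -> 2).
// Assumes power-of-two sizes, as in the example in the text above.
int transformDepth(int codingUnitSize, int transformUnitSize) {
    assert(transformUnitSize > 0 && codingUnitSize >= transformUnitSize);
    int depth = 0;
    for (int size = codingUnitSize; size > transformUnitSize; size >>= 1) {
        ++depth;  // each halving of the height and width adds one level
    }
    return depth;
}

// e.g. transformDepth(64, 64) == 0, transformDepth(64, 32) == 1,
//      transformDepth(64, 16) == 2
```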
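The bullets above mention first- and second-filtered neighboring pixels ContextFiltered1[n] and ContextFiltered2[n] produced by a 3-tap weighted average (Equation 2 is not reproduced in this record). The sketch below assumes a [1, 2, 1] filter with the two boundary pixels copied unfiltered; the actual rounding and boundary handling in the patent may differ.

```cpp
#include <cstdint>
#include <vector>

// Assumed 3-tap [1, 2, 1] weighted average over the neighboring pixels;
// the first and last pixels are copied as-is.
std::vector<uint8_t> filter3Tap(const std::vector<uint8_t>& ctx) {
    std::vector<uint8_t> out(ctx);  // endpoints stay unfiltered
    for (size_t n = 1; n + 1 < ctx.size(); ++n) {
        out[n] = static_cast<uint8_t>(
            (ctx[n - 1] + 2 * ctx[n] + ctx[n + 1] + 2) >> 2);
    }
    return out;
}

// First- and second-filtered neighboring pixels, in the spirit of
// ContextFiltered1[n] and ContextFiltered2[n]: the second pass simply
// re-filters the result of the first pass.
struct FilteredNeighbors {
    std::vector<uint8_t> once;
    std::vector<uint8_t> twice;
};

FilteredNeighbors filterNeighbors(const std::vector<uint8_t>& contextOrg) {
    FilteredNeighbors f;
    f.once  = filter3Tap(contextOrg);  // ContextFiltered1[n]
    f.twice = filter3Tap(f.once);      // ContextFiltered2[n]
    return f;
}
```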
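Finally, a hedged sketch of how the reference pixel determiner 1420 might choose between original and filtered neighboring pixels for a chrominance block. Only the value Thres_val = 6 for a 4x4 block appears in the text above; the other threshold values, the mode-distance rule, and the HEVC-style mode numbering (planar = 0, DC = 1, horizontal = 10, vertical = 26) are assumptions for illustration, not the patent's Table 2.

```cpp
#include <algorithm>
#include <cstdlib>

// Which set of neighboring pixels feeds intra prediction of a chroma block.
enum class RefPixels { Original, FilteredOnce, FilteredTwice };

RefPixels chooseChromaRefPixels(int blockSize, int intraMode) {
    // Hypothetical per-size thresholds; only the 4x4 value (6) comes from
    // the pseudo code fragment quoted above. Others are placeholders.
    int thresVal;
    switch (blockSize) {
        case 4:  thresVal = 6; break;   // 4x4 block (from the text)
        case 8:  thresVal = 4; break;   // assumed
        case 16: thresVal = 2; break;   // assumed
        default: thresVal = 0; break;   // assumed
    }
    if (intraMode == 0 || intraMode == 1) {
        return RefPixels::Original;     // planar (0) and DC (1): unfiltered
    }
    // Distance of the angular mode from pure vertical (26) and
    // horizontal (10); HEVC numbering assumed.
    int distVerHor = std::min(std::abs(intraMode - 26),
                              std::abs(intraMode - 10));
    // The once-filtered set (RefPixels::FilteredOnce) could also be selected
    // by a fuller version of Table 2; this sketch only switches between the
    // original and twice-filtered neighbors.
    return (distVerHor > thresVal) ? RefPixels::FilteredTwice
                                   : RefPixels::Original;
}
```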

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Disclosed are a method and apparatus for encoding video, which filter the neighboring pixels used in intra prediction of the current block to be encoded and perform intra prediction using the filtered neighboring pixels. The neighboring pixel to be used as a reference pixel, from among the original neighboring pixel and the filtered neighboring pixel, may be determined based on the size of the chrominance component block and the intra prediction mode to be applied.
PCT/KR2014/000108 2013-01-04 2014-01-06 Method and apparatus for encoding video, and method and apparatus for decoding said video WO2014107073A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361748819P 2013-01-04 2013-01-04
US61/748,819 2013-01-04

Publications (1)

Publication Number Publication Date
WO2014107073A1 true WO2014107073A1 (fr) 2014-07-10

Family

ID=51062345

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2014/000108 WO2014107073A1 (fr) 2013-01-04 2014-01-06 Method and apparatus for encoding video, and method and apparatus for decoding said video

Country Status (2)

Country Link
KR (1) KR20140089488A (fr)
WO (1) WO2014107073A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017222326A1 * 2016-06-24 2017-12-28 주식회사 케이티 Method and device for processing a video signal
CN108429910A * 2017-02-15 2018-08-21 扬智科技股份有限公司 Image compression method
US11533508B2 (en) 2018-06-08 2022-12-20 Kt Corporation Method and apparatus for encoding/decoding residual data based on a plurality of transformations
RU2792225C2 * 2018-06-08 2023-03-21 Кт Корпорейшен Method and device for processing a video signal

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180029905A 2016-09-13 한국전자통신연구원 Method and apparatus for image encoding/decoding, and recording medium storing a bitstream
WO2019203487A1 * 2018-04-19 엘지전자 주식회사 Method and apparatus for encoding an image based on intra prediction

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008160819A * 2006-11-29 2008-07-10 Matsushita Electric Ind Co Ltd Image processing method and image processing apparatus
KR20100095914A * 2009-02-23 2010-09-01 에스케이 텔레콤주식회사 Apparatus and method for video encoding/decoding using channel correlation, and computer-readable recording medium therefor
KR20110018189A * 2009-08-17 2011-02-23 삼성전자주식회사 Method and apparatus for encoding an image, and method and apparatus for decoding the same
KR20120043661A * 2010-10-26 2012-05-04 (주)휴맥스 Adaptive intra-picture prediction encoding and decoding method
KR20120140222A * 2011-06-20 2012-12-28 한국전자통신연구원 Image encoding/decoding method and apparatus therefor

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114363628A * 2016-06-24 2022-04-15 株式会社Kt Image encoding method, image decoding method, and apparatus
US11445177B2 (en) 2016-06-24 2022-09-13 Kt Corporation Method and apparatus for processing video signal
CN109417628A 2016-06-24 2019-03-01 株式会社Kt Video signal processing method and device
US10735720B2 (en) 2016-06-24 2020-08-04 Kt Corporation Method and apparatus for processing video signal
US11445178B2 (en) 2016-06-24 2022-09-13 Kt Corporation Method and apparatus for processing video signal
CN109417628B * 2016-06-24 2022-03-08 株式会社Kt Video signal processing method and device
CN114363629A * 2016-06-24 2022-04-15 株式会社Kt Image encoding method, image decoding method, and apparatus
US11350084B2 (en) 2016-06-24 2022-05-31 Kt Corporation Method and apparatus for processing video signal
WO2017222326A1 * 2016-06-24 2017-12-28 주식회사 케이티 Method and device for processing a video signal
US11445179B2 (en) 2016-06-24 2022-09-13 Kt Corporation Method and apparatus for processing video signal
CN108429910A * 2017-02-15 2018-08-21 扬智科技股份有限公司 Image compression method
CN108429910B * 2017-02-15 2021-09-10 扬智科技股份有限公司 Image compression method
US11533508B2 (en) 2018-06-08 2022-12-20 Kt Corporation Method and apparatus for encoding/decoding residual data based on a plurality of transformations
RU2792225C2 * 2018-06-08 2023-03-21 Кт Корпорейшен Method and device for processing a video signal
US12003772B2 (en) 2018-06-08 2024-06-04 Kt Corporation Method and apparatus for encoding/decoding residual data based on a plurality of transformations

Also Published As

Publication number Publication date
KR20140089488A (ko) 2014-07-15

Similar Documents

Publication Publication Date Title
WO2013002586A2 - Method and apparatus for image encoding and decoding using intra prediction
WO2016200100A1 - Method and apparatus for encoding or decoding an image using syntax signaling for adaptive weight prediction
WO2013002589A2 - Method and device for predicting the chrominance component of an image from its luminance component
WO2013115572A1 - Method and apparatus for video encoding and decoding based on hierarchical data units, comprising quantization parameter prediction
WO2012124961A2 - Method and apparatus for encoding images and method and apparatus for decoding images
WO2011021838A2 - Method and apparatus for encoding video, and method and apparatus for decoding video
WO2013109123A1 - Method and device for video encoding improving the intra prediction processing speed, and method and device for video decoding
WO2012087077A2 - Method and device for encoding an intra prediction mode for an image prediction unit, and method and device for decoding an intra prediction mode for an image prediction unit
WO2011087297A2 - Method and apparatus for encoding video using deblocking filtering, and method and apparatus for decoding video using deblocking filtering
WO2013002557A2 - Method and apparatus for encoding motion information, and method and apparatus for decoding the same
WO2011126309A2 - Method and apparatus for encoding video, and method and apparatus for decoding video
WO2011126281A2 - Method and apparatus for encoding video by performing in-loop filtering based on a tree-structured data unit, and method and apparatus for decoding video in the same manner
WO2012173415A2 - Method and apparatus for encoding motion information, and method and apparatus for decoding the same
WO2014007524A1 - Method and apparatus for entropy encoding video, and method and apparatus for entropy decoding video
WO2011129620A2 - Video encoding method and video encoding apparatus based on coding units determined according to a tree structure, and video decoding method and video decoding apparatus based on coding units determined according to a tree structure
WO2014171713A1 - Method and apparatus for video encoding/decoding using intra prediction
WO2013005963A2 - Method and apparatus for encoding video, and method and apparatus for decoding video, by inter prediction using contiguous image blocks
WO2011126275A2 - Determining an intra prediction mode of an image coding unit and an image decoding unit
WO2012044126A2 - Method and apparatus for image intra prediction
WO2011016702A2 - Method and apparatus for encoding images, and method and apparatus for decoding the encoded images
WO2013005962A2 - Video encoding method with intra prediction using a checking process for unified reference possibility, video decoding method, and device therefor
WO2013002585A2 - Method and apparatus for entropy encoding/decoding
WO2013066051A1 - Method and apparatus for determining a context model for entropy encoding and decoding of the transform coefficient
WO2013109122A1 - Method and apparatus for video encoding, and method and apparatus for video decoding, changing the scanning order according to a hierarchical coding unit
WO2015020504A1 - Method and apparatus for determining a merge mode

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14735440

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14735440

Country of ref document: EP

Kind code of ref document: A1