WO2014029086A1 - Methods of improving motion vector inheritance and inter-view motion prediction for a depth map - Google Patents

Methods of improving motion vector inheritance and inter-view motion prediction for a depth map

Info

Publication number
WO2014029086A1
Authority
WO
WIPO (PCT)
Prior art keywords
mvi
mode
flag
signaled
skip
Prior art date
Application number
PCT/CN2012/080463
Other languages
English (en)
Inventor
Liang Zhao
Jicheng An
Jian-Liang Lin
Original Assignee
Mediatek Singapore Pte. Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mediatek Singapore Pte. Ltd. filed Critical Mediatek Singapore Pte. Ltd.
Priority to PCT/CN2012/080463 priority Critical patent/WO2014029086A1/fr
Publication of WO2014029086A1 publication Critical patent/WO2014029086A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Definitions

  • the invention relates generally to Three-Dimensional (3D) video processing.
  • the present invention relates to methods for motion vector inheritance and inter-view motion prediction for depth map in 3D video coding.
  • 3D video coding is developed for encoding/decoding multi-view videos simultaneously captured by several cameras. The content is represented using the multi-view video (texture) plus depth format, in which a small number of captured views as well as their depth maps are coded, and the resulting bitstreams are multiplexed into a 3D video bitstream.
  • MVI motion vector inheritance
  • In HTM4.0, the basic unit for compression, termed coding unit (CU), is a 2Nx2N square block, and each CU can be recursively split into four smaller CUs until the predefined minimum size is reached.
  • Each CU contains one or multiple prediction units (PUs). Therefore, in order to enable efficient encoding of the depth map data, MVI allows inheritance of the treeblock subdivision into CUs and PUs and their corresponding motion parameters from the video signal. Since the motion vectors of the video signal have quarter-sample accuracy, whereas only full-sample accuracy is used for the depth map signal, the motion vectors are quantized to their nearest full-sample position in the inheritance process.
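As a rough illustration of the quantization step above, the following sketch rounds one quarter-sample MV component to the nearest full-sample position. It assumes the common convention that MVs are stored in quarter-sample units (so one full sample equals 4 units); the helper name and the tie-breaking rule (ties away from zero) are illustrative choices, not taken from the HTM software.

```python
def quantize_mv_to_full_sample(mv_qpel):
    """Round a quarter-sample MV component to the nearest full-sample
    position, as needed when a depth map inherits texture motion
    (texture MVs: quarter-sample accuracy; depth: full-sample only).

    Assumes MVs are stored in quarter-sample units (1 sample = 4 units).
    """
    sign = 1 if mv_qpel >= 0 else -1
    # Round |mv| to the nearest multiple of 4, ties away from zero.
    return sign * (((abs(mv_qpel) + 2) // 4) * 4)
```

For example, a texture MV component of 5 quarter-samples (1.25 samples) becomes 4 (exactly 1 full sample) after inheritance.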
  • the encoder can adaptively decide whether the motion data are inherited from the co-located region of the video signal or if new motion data are transmitted, as shown in Fig. 1.
  • the syntax is integrated into the merge and skip mode.
  • the merging candidate list has been extended by adding the MVI coding mode as the first candidate in the merging candidate list for depth map coding.
  • The MVI mode can be applied at any level of the treeblock hierarchy for the depth map. If the MVI mode is applied at a higher level of the depth map coding tree (a CU size that is larger than the CU size coded in the corresponding video signal), the CU subdivision, together with the corresponding motion data, is inherited from the video signal. This makes it possible to specify, once for a whole treeblock (typically corresponding to 64 x 64 depth samples), that the CU/PU partition and the motion information coded in the corresponding video signal are inherited into the depth map signal. On the other hand, if MVI is applied at a CU level that is the same as or smaller than the CU size coded in the corresponding video signal, only the motion information will be inherited from the video signal.
  • The MVI mode is only possible if the whole region of the video signal from which the motion data and partitioning are inherited is coded using inter prediction.
  • the inter-view motion prediction is employed for texture coding.
  • A disparity vector (DV) for the current block is first derived, and then the prediction block in the already coded picture in the reference view is located by adding the DV to the location of the current block.
  • MCP motion-compensated prediction
  • the associated motion parameters can be used as candidate motion parameters for the current block in the current view in AMVP and merge/skip modes.
  • the derived DV can also be directly used as a candidate DV for disparity-compensated prediction (DCP) in AMVP and merge/skip modes.
  • DCP disparity-compensated prediction
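The inter-view motion prediction steps described above (derive a DV, locate the co-located block in the reference view, reuse its motion parameters if it is MCP-coded, otherwise fall back to the DV itself as a DCP candidate) can be sketched as follows. The data layout (`ref_view_blocks` as a position-keyed dict) and the returned candidate format are assumptions for illustration, not HTM data structures.

```python
def derive_inter_view_candidate(cur_x, cur_y, dv, ref_view_blocks):
    """Locate the prediction block in the already-coded reference view by
    adding the disparity vector (DV) to the current block position.

    `ref_view_blocks` is a hypothetical map from (x, y) block positions
    to dicts like {"mode": "MCP", "mv": (mvx, mvy)}.
    """
    ref_pos = (cur_x + dv[0], cur_y + dv[1])
    ref_block = ref_view_blocks.get(ref_pos)
    if ref_block is not None and ref_block["mode"] == "MCP":
        # Inherited motion parameters become an AMVP/merge candidate.
        return {"type": "inter_view_mv", "mv": ref_block["mv"]}
    # Otherwise the DV itself can serve as a candidate for DCP.
    return {"type": "dcp_dv", "mv": dv}
```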
  • In HTM4.0, the MVI mode is signalled as a merge/skip candidate.
  • In this invention, we propose to signal the MVI mode flag at the CU level before or after the skip_flag.
  • The first merge candidate refers to the MVI mode (i.e., merging with the corresponding block from the associated video signal) when MVI is enabled in the current slice.
  • The CU will first be split into leaf CUs following the splitting of the co-located texture region, and then the leaf CU in the depth map will always use the Size_NxN partition to perform the motion compensation, regardless of the partition of the co-located texture block.
  • This implementation method follows the spirit of MVI only when the asymmetrical partition (AMP) is not used in texture coding, since the Size_NxN partition has the finest-granularity MV assignment among all the symmetrical partitions but not for AMP.
  • MVI asymmetrical partition
  • several methods can be used for depth map to perform the motion compensation.
  • MVI is applied for merge/skip mode.
  • MVI mode is limited to the skip mode only. In that case, the MVI flag will not be included in the merging candidate list for the merge mode.
  • the inter-view motion prediction is only used for texture coding but not for depth map coding.
  • Fig. 1 is a diagram illustrating the concept of motion parameter inheritance
  • Fig. 2 is a syntax table in current HTM4.0
  • Fig. 3 is a syntax table for the first embodiment of signaling the mvi_flag before the skip_flag;
  • Fig. 4 is a syntax table for the first embodiment of signaling the mvi_flag after the skip_flag.
  • MVI mode is signalled as a merge/skip candidate.
  • If the MVI mode flag is true, the current depth CU will merge with the corresponding block from the associated video signal.
  • In this case, no residual data will be transmitted to the decoder; therefore, the syntax element indicating whether the current CU has residual data will not be signaled in the bitstream. Since the MVI mode is no longer signalled as a merge/skip candidate, the candidate number in merge/skip mode is reduced by 1.
  • the MVI mode can also be signalled after the skip_flag.
  • Fig. 4 illustrates the syntax table related to the mvi_flag signalled after the skip_flag.
  • mvi_flag[ x0 ][ y0 ] equal to 1 specifies that, for the current coding unit, when decoding a P or B slice, no more syntax elements are parsed after mvi_flag[ x0 ][ y0 ].
  • mvi_flag[ x0 ][ y0 ] equal to 0 specifies that the coding unit is not coded in MVI mode.
  • The array indices x0, y0 specify the location ( x0, y0 ) of the top-left luma sample of the considered coding block relative to the top-left luma sample of the picture.
  • When mvi_flag[ x0 ][ y0 ] is not present, it is inferred to be equal to 0.
  • Three contexts are used for CABAC, which depend on the mvi_flag of the neighboring left and above CUs. If the mvi_flag of both the left and above CUs are true, the third context is used; if the mvi_flag of both the left and above CUs are false, the first context is used; otherwise, the second context is used.
  • Alternatively, two contexts are used for CABAC, which depend on the mvi_flag of the neighboring left and above CUs. If the mvi_flag of both the left and above CUs are true, the first context is used; otherwise, the second context is used.
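The two context-selection schemes described above can be sketched as a small selector. The numeric index convention (which context counts as "first") is an assumption for illustration; the scheme itself follows the text.

```python
def mvi_flag_context(left_mvi, above_mvi, num_contexts=3):
    """Select the CABAC context index for mvi_flag from the mvi_flag of
    the left and above CUs.

    Three-context scheme: both true -> third context (index 2),
    both false -> first context (index 0), otherwise -> second (index 1).
    Two-context scheme: both true -> first context (index 0),
    otherwise -> second context (index 1).
    """
    if num_contexts == 3:
        if left_mvi and above_mvi:
            return 2
        if not left_mvi and not above_mvi:
            return 0
        return 1
    # Two-context scheme.
    return 0 if (left_mvi and above_mvi) else 1
```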
  • The signaling order of the mvi_flag and the skip_flag can also be adaptively switched depending on whether the reference or neighboring blocks are in skip or MVI mode. For example, if the majority of the spatial neighboring blocks are in skip mode, the skip_flag will be coded first. On the other hand, if the majority of the spatial neighboring blocks are in MVI mode, the mvi_flag will be coded first.
  • The first merge candidate refers to the MVI mode (i.e., merging with the corresponding block from the associated video signal) when MVI is enabled in the current slice.
  • The CU will first be split into leaf CUs following the splitting of the co-located texture region, and then the leaf CU in the depth map will always use the Size_NxN partition to perform the motion compensation, regardless of the partition of the co-located texture block.
  • This implementation method follows the spirit of MVI only when the asymmetrical partition (AMP) is not used in texture coding, since the Size_NxN partition has the finest-granularity MV assignment among all the symmetrical partitions but not for AMP.
  • the leaf CU in depth map uses the same partition size as the texture to perform the motion compensation.
  • Each 4x4 block of the leaf CU in depth map uses the same motion vector as the texture to perform the motion compensation.
  • the leaf CU in depth map always uses Size_2Nx2N to perform the motion compensation for simplification. In this case, there are several methods for selecting the final motion vector of the Size_2Nx2N partition.
  • the motion vector of the top-left 4x4 block will be selected as the motion vector of the Size_2Nx2N partition.
  • the average motion vector of all 4x4 blocks in the co-located block in texture will be selected as the motion vector of the Size_2Nx2N partition.
  • the average motion vector of different partitions in the co-located block in texture will be selected as the motion vector of the Size_2Nx2N partition.
  • In another method, the leaf CU in the depth map uses the same partition size as the texture to perform the motion compensation; if the leaf CU size in the depth map is smaller than the associated texture CU size, however, Size_NxN is used to perform the motion compensation.
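The MV-selection options for a Size_2Nx2N depth partition described above (take the top-left 4x4 texture block's MV, or average all 4x4-block MVs of the co-located texture block) can be sketched as follows. The row-major list layout and plain integer averaging are assumptions for illustration, not HTM internals.

```python
def select_2nx2n_mv(texture_mvs, method="top_left"):
    """Choose one motion vector for a Size_2Nx2N depth partition from the
    4x4-block MVs of the co-located texture block.

    `texture_mvs`: row-major list of (mvx, mvy) tuples (assumed layout).
    method="top_left": MV of the top-left 4x4 block.
    method="average": component-wise average of all 4x4-block MVs.
    """
    if method == "top_left":
        return texture_mvs[0]
    # "average": integer component-wise mean over all 4x4 blocks.
    n = len(texture_mvs)
    avg_x = sum(mv[0] for mv in texture_mvs) // n
    avg_y = sum(mv[1] for mv in texture_mvs) // n
    return (avg_x, avg_y)
```

The averaging variant corresponds to the second and third options above; averaging per texture partition instead of per 4x4 block would only change which MVs are fed into the mean.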
  • MVI is applied for merge/skip mode.
  • MVI mode is limited to the skip mode only. The MVI flag will not be included in the merging candidate list for the merge mode.
  • MVI mode is restricted to CUs whose size is larger than the smallest CU (SCU), which is 8x8 under the HTM common test conditions.
  • MVI mode is restricted to the CU whose size is larger than or equal to the size of co- located block in the associated texture.
  • MVI mode is restricted to the CU whose CU split level is less than or equal to the CU split level of co-located block in the associated texture.
  • the inter-view motion prediction is only used for texture coding but not for depth map coding.
  • an inter- view candidate is added in the candidate list of AMVP and merge/skip mode for depth map coding.
  • The inter-view candidate derivation for the depth map is also similar to that for the texture. First, a DV is derived for the current block; this DV can be used directly as a candidate for DCP, or it can be used to locate the prediction block in the reference view. If the prediction block in the reference view uses MCP, then the motion vectors of the prediction block can be used as the motion candidate for the current block.
  • The DV can be derived from the neighboring blocks as specified in JCT2-A0097 and JCT2-A0126.
  • the DV can also be derived by converting the estimated depth value to disparity by using the camera parameters.
  • the estimated depth value can be derived by the depth map estimation method as in current HTM for texture coding.
  • The DV can also be derived from the neighboring reconstructed depth map pixels. For example, first obtain the average value of the top and left reconstructed pixels, and then convert this average depth value to a disparity by using the camera parameters.
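The neighbor-pixel-based DV derivation above can be sketched as follows. The `depth_to_disparity` callable stands in for the camera-parameter conversion (e.g. a lookup table or an f·B/Z-style formula); its exact form, the integer averaging, and the horizontal-only disparity are assumptions for illustration.

```python
def derive_dv_from_neighbors(top_pixels, left_pixels, depth_to_disparity):
    """Derive a disparity vector from reconstructed depth pixels:
    average the top and left neighboring reconstructed pixels, then
    convert the average depth to a disparity via camera parameters.

    `depth_to_disparity` is a hypothetical conversion callable.
    A horizontal-only disparity (dv_y == 0) is assumed.
    """
    pixels = list(top_pixels) + list(left_pixels)
    avg_depth = sum(pixels) // len(pixels)
    return (depth_to_disparity(avg_depth), 0)
```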
  • The inter-view candidate in AMVP and merge/skip modes for the depth map can be inserted in any position of the candidate list.
  • the inter-view candidate in AMVP can be in the first position or third position, and in merge/skip mode can be in the position after the MVI mode candidate. Therefore, for depth map, the candidate number is 3 for AMVP mode, and 7 for merge/skip mode.
  • A flag can be inserted in the SPS of the depth map to indicate whether the method in item 1 is on or off, which is similar to the case for texture.
  • This method is a combination of the first and the sixth embodiments.
  • the MVI mode flag is signaled before the skip flag, and not signaled as a candidate in merge/skip mode. Therefore, the candidate number for depth map will be 3 for AMVP mode and 6 for merge/skip mode, which is the same as that for texture.
  • inter-view candidate positions in AMVP and merge/skip candidate lists are also the same as those in texture, i.e., the inter-view candidate is located at the third position in AMVP, and the first position in merge/skip.
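The merge/skip list construction under this combined embodiment (MVI signaled by its own CU-level flag rather than as a merge candidate, inter-view candidate in the first position, list truncated to 6 entries as for texture) can be sketched as follows. The candidate representation and the duplicate pruning are illustrative assumptions, not the HTM derivation process.

```python
def build_depth_merge_list(inter_view_cand, other_cands, max_cands=6):
    """Assemble a depth-map merge/skip candidate list: the inter-view
    candidate (if available) occupies the first position, followed by
    the usual spatial/temporal candidates, truncated to `max_cands`
    entries (6, matching texture). MVI is NOT in this list; it is
    signaled by a separate CU-level flag.
    """
    cands = []
    if inter_view_cand is not None:
        cands.append(inter_view_cand)
    for c in other_cands:
        if len(cands) >= max_cands:
            break
        if c not in cands:  # simple duplicate pruning
            cands.append(c)
    return cands
```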
  • An MVI mode flag such as an ON/OFF flag can be signaled in CU level, PU level, or other levels.
  • the MVI mode flag can be signaled as a merging candidate in the merge/skip mode. If the merging candidate representing the MVI mode is selected, MVI is on, else the MVI is off. In an embodiment, the MVI mode flag can only be signaled as a merging candidate in the skip mode, and the MVI mode will not be signaled as a merging candidate in the merge mode.
  • the merging candidate representing the MVI mode can be in the first position of the candidate list, or in other positions of the candidate list.
  • The MVI mode on/off flag can be signaled only for PUs with 2Nx2N partition, only for PUs with other particular partitions, or for all PUs regardless of the partition. If the MVI mode is signaled as a merging candidate in the merge or skip mode and can be signaled only for PUs with some particular partition such as 2Nx2N, then for PUs with other partitions the merging candidate list will not include the MVI mode, and the candidate number for merge/skip mode will be reduced by 1.
  • The MVI mode on/off flag can be signaled at the CU level before the skip flag signaling (i.e., in the first signaling position at the CU level), immediately after the skip flag signaling, or in other positions at the CU level; the position can also be adaptively switched depending on whether the reference or neighboring blocks are in skip or MVI mode.
  • In these cases, the merging candidate list will not include the MVI mode.
  • the MVI mode on/off flag can be coded by CABAC. The contexts used to code the MVI mode on/off flag can depend on the MVI mode on/off flag of the neighboring blocks.
  • the neighboring blocks can be left CU, top CU, or others.
  • The number of contexts can be 2, 3, or another value.
  • For example, one context can be used when all the neighboring blocks have MVI mode on; one context can be used when all the neighboring blocks have MVI mode off; and one context can be used when both MVI mode on and MVI mode off exist among the neighboring blocks.
  • Alternatively, the MVI mode on/off flag can be coded by the bypass mode of CABAC, or only one context can be used to code the MVI mode on/off flag. In another embodiment, the MVI mode on/off flag can be coded by VLC.
  • the current block will merge with the co-located block from the associated video (texture) signal when the MVI mode is selected (i.e., the MVI mode is on).
  • The current block will not have residual data when the MVI mode is selected. That is to say, the flag indicating whether the current block has residual data will not be signaled in the bitstream when the MVI mode is selected.
  • The PU partition of the current block will be set to the PU partition of the co-located block in the associated texture, rather than always the NxN partition as in current HTM4.0, when the MVI mode is selected.
  • the motion vector of each 4x4 block will be set to the motion vector of the co-located block in the associated texture.
  • the PU partition of current block will be always set as 2Nx2N.
  • In one method, the motion parameters of each 4x4 block in the current block will be set to the motion parameters of the corresponding co-located 4x4 block in the associated texture.
  • Alternatively, the motion parameters of each 4x4 block in the current block will all be set to the same value.
  • For example, the motion parameter of each 4x4 block in the current block will be set to the average of all the motion parameters of the co-located 4x4 blocks in the associated texture.
  • Or the motion parameter of each 4x4 block in the current block will be set to the average of the motion parameters of the different partitions in the co-located block in the associated texture.
  • Or the motion parameter of each 4x4 block in the current block will be set to the motion parameter of one particular 4x4 block in the associated texture.
  • the one particular 4x4 block in the associated texture can be in top-left position, middle position, or in other particular positions of the co-located block in the associated texture.
  • The MVI mode on/off flag can be signaled only when the split depth of the current block is less than or equal to the split depth of the co-located block in the associated texture, and the MVI mode is off when the split depth of the current block is larger than that of the co-located block in the associated texture.
  • The MVI mode flag can be signaled only when the size of the current block is larger than the smallest CU (SCU), and the MVI mode is off when the size of the current block is equal to the SCU size.
  • the inter-view candidate can be in the AMVP and merge/skip candidate list for depth map.
  • the inter-view candidate can be in any position of the AMVP and merge/skip candidate list.
  • the inter-view candidate can be in the third position of the AMVP candidate list, and in the second position of the merge/skip candidate list.
  • the position of inter-view candidate for depth map can be the same as that for texture.
  • the inter-view candidate derivation for depth map follows the same concept as that for texture.
  • the inter-view candidate derivation includes a step called disparity vector (DV) derivation for current block.
  • The DV can be derived from the neighboring blocks, for example as specified in JCT2-A0097 and JCT2-A0126.
  • the DV can also be derived by converting the estimated depth value to disparity by using the camera parameters.
  • the estimated depth value can be derived by the depth map estimation method as in current HTM for texture coding.
  • The DV can also be derived from the neighboring reconstructed depth map pixels. For example, first obtain the average value of the top and left reconstructed pixels, and then convert this average depth value to a disparity by using the camera parameters.
  • A flag can be inserted in the SPS of the depth map to indicate whether the inter-view candidate in the AMVP and merge/skip candidate lists for the depth map is on or off, which is similar to the case for texture.
  • In this combination, the MVI mode is not in the candidate list of the merge/skip mode, and the inter-view candidate is in the candidate lists of both the AMVP and merge/skip modes. Therefore, the candidate numbers in AMVP and merge/skip modes for the depth map are 3 and 6 respectively, which are the same as those for texture.
  • the MVI and inter-view motion prediction methods described above can be used in a video encoder as well as in a video decoder.
  • Embodiments of MVI and inter-view motion prediction methods according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both.
  • an embodiment of the present invention can be a circuit integrated into a video compression chip or program codes integrated into video compression software to perform the processing described herein.
  • An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
  • DSP Digital Signal Processor
  • The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA).
  • These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
  • The software code or firmware code may be developed in different programming languages and different formats or styles.
  • The software code may also be compiled for different target platforms. However, different code formats, styles, and languages of software codes, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Methods of improving motion vector inheritance and inter-view motion prediction for a depth map in multi-view and 3D video coding are disclosed. The motion vector inheritance mode is used to obtain the motion vectors of a depth map block from the co-located region of the video/texture signal. Inter-view motion prediction uses a derived disparity vector to obtain the coded motion information of a reference view or to directly perform disparity-compensated prediction.
PCT/CN2012/080463 2012-08-22 2012-08-22 Methods of improving motion vector inheritance and inter-view motion prediction for a depth map WO2014029086A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/080463 WO2014029086A1 (fr) 2012-08-22 2012-08-22 Methods of improving motion vector inheritance and inter-view motion prediction for a depth map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/080463 WO2014029086A1 (fr) 2012-08-22 2012-08-22 Methods of improving motion vector inheritance and inter-view motion prediction for a depth map

Publications (1)

Publication Number Publication Date
WO2014029086A1 true WO2014029086A1 (fr) 2014-02-27

Family

ID=50149349

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/080463 WO2014029086A1 (fr) 2012-08-22 2012-08-22 Methods of improving motion vector inheritance and inter-view motion prediction for a depth map

Country Status (1)

Country Link
WO (1) WO2014029086A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015133866A1 (fr) * 2014-03-06 2015-09-11 Samsung Electronics Co., Ltd. Inter-layer video decoding method and apparatus performing sub-block-based prediction, and inter-layer video encoding method and apparatus performing sub-block-based prediction
WO2015141977A1 (fr) * 2014-03-20 2015-09-24 LG Electronics Inc. 3D video encoding/decoding method and device
WO2015139183A1 (fr) * 2014-03-17 2015-09-24 Mediatek Inc. Method of signaling for depth-based block partitioning mode for three-dimensional and multi-view video coding
WO2015139206A1 (fr) * 2014-03-18 2015-09-24 Mediatek Singapore Pte. Ltd. Methods of 3D video coding

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1694535A (zh) * 2005-06-20 2005-11-09 Zhejiang University A complexity-scalable mode selection method
WO2012071871A1 (fr) * 2010-11-29 2012-06-07 Mediatek Inc. Method and apparatus for extended motion vector predictor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1694535A (zh) * 2005-06-20 2005-11-09 Zhejiang University A complexity-scalable mode selection method
WO2012071871A1 (fr) * 2010-11-29 2012-06-07 Mediatek Inc. Method and apparatus for extended motion vector predictor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PING WU ET AL.: "Introduction to the High-Efficiency Video Coding Standard", ZTE COMMUNICATIONS, vol. 10, no. 2, June 2012 (2012-06-01), pages 2 - 8 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015133866A1 (fr) * 2014-03-06 2015-09-11 Samsung Electronics Co., Ltd. Inter-layer video decoding method and apparatus performing sub-block-based prediction, and inter-layer video encoding method and apparatus performing sub-block-based prediction
WO2015139183A1 (fr) * 2014-03-17 2015-09-24 Mediatek Inc. Method of signaling for depth-based block partitioning mode for three-dimensional and multi-view video coding
WO2015139206A1 (fr) * 2014-03-18 2015-09-24 Mediatek Singapore Pte. Ltd. Methods of 3D video coding
WO2015141977A1 (fr) * 2014-03-20 2015-09-24 LG Electronics Inc. 3D video encoding/decoding method and device
CN106068649A (zh) * 2014-03-20 2016-11-02 LG Electronics Inc. 3D video encoding/decoding method and device

Similar Documents

Publication Publication Date Title
US11234002B2 Method and apparatus for encoding and decoding a texture block using depth based block partitioning
JP6472877B2 Method of 3D or multi-view video coding including view synthesis prediction
JP6415414B2 3D video encoder, system, program, and computer-readable recording medium
EP3130147B1 Methods of block vector prediction and decoding for intra block copy mode coding
US20190191180A1 Method for sub-pu motion information inheritance in 3d video coding
KR101706309B1 Method and apparatus of inter-view candidate derivation for three-dimensional video coding
CN106464907B Apparatus and method for removing redundancy of view synthesis prediction candidates in motion merge mode
AU2013330031B2 Method and apparatus for motion information prediction and inheritance in video coding
AU2019370424B9 Video picture prediction method and apparatus
JP6042556B2 Method and apparatus of constrained disparity vector derivation in 3D video coding
CA2908115A1 Method of simplified CABAC coding in 3D video coding
WO2014166068A1 Refinement of view synthesis prediction for 3D video coding
WO2015062002A1 Methods of sub-PU level prediction
CA2904424C Method and apparatus of camera parameter signaling in 3D video coding
CN110430433B Method and apparatus for deriving motion information between time points of a sub-prediction unit
CA2896132C Method and apparatus of compatible depth dependent coding
WO2014029086A1 Methods of improving motion vector inheritance and inter-view motion prediction for a depth map
WO2015135175A1 Simplified depth based block partitioning method
US9716884B2 Method of signaling for mode selection in 3D and multi-view video coding
WO2014106327A1 Method and apparatus for inter-view residual prediction in multi-view video coding
WO2015103747A1 Motion parameter hole filling
KR20160064845A Method and apparatus for sub-prediction unit level based inter-picture motion prediction for depth coding
WO2014166096A1 Reference view derivation for inter-view motion prediction and inter-view residual prediction
WO2015139183A1 Method of signaling for depth-based block partitioning mode for three-dimensional and multi-view video coding
WO2014047781A1 Methods of inter-view residual prediction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12883183

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12883183

Country of ref document: EP

Kind code of ref document: A1