WO2014047781A1 - Methods for inter-view residual prediction - Google Patents

Methods for inter-view residual prediction Download PDF

Info

Publication number
WO2014047781A1
WO2014047781A1 (PCT/CN2012/081924)
Authority
WO
WIPO (PCT)
Prior art keywords
current
view
ivrp
inter
predetermined condition
Prior art date
Application number
PCT/CN2012/081924
Other languages
French (fr)
Inventor
Jicheng An
Kai Zhang
Jian-Liang Lin
Original Assignee
Mediatek Singapore Pte. Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mediatek Singapore Pte. Ltd. filed Critical Mediatek Singapore Pte. Ltd.
Priority to PCT/CN2012/081924 priority Critical patent/WO2014047781A1/en
Publication of WO2014047781A1 publication Critical patent/WO2014047781A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding


Abstract

Methods of IVRP for multi-view video coding and 3D video coding are disclosed. It is proposed that IVRP be combined with the inter-view merging candidate in the temporal direction, with no IVRP on/off control flag signaled; the parsing issue of the original IVRP is thereby solved, with coding gain and reduced complexity. It is also proposed that, if the disparity vector of the current PU points to a sub-sample location, the residual prediction signal be obtained by rounding the sub-sample location to the nearest integer location of the reference view, avoiding the interpolation procedure.

Description

METHODS FOR INTER-VIEW RESIDUAL PREDICTION
BACKGROUND OF THE INVENTION
Field of the Invention
[0001] The invention relates generally to Three-Dimensional (3D) video processing. In particular, the present invention relates to methods for inter-view residual prediction in 3D video coding.
Description of the Related Art
[0002] 3D video coding is developed for encoding or decoding video data of multiple views simultaneously captured by several cameras. Since all cameras capture the same scene from different viewpoints, multi-view video data contain a large amount of inter-view redundancy.
To share the previously encoded residual information of reference views, an additional tool called inter-view residual prediction has been integrated into the current HEVC-based 3D video coding.
[0003] The basic principle of inter-view residual prediction (IVRP) is illustrated in Figure 1. Inter-view residual prediction is based on a disparity vector derivation for the current block.
A disparity vector is first derived for a current prediction unit (PU), and then the residual block in the reference view that is referenced by the disparity vector is used for predicting the residual of the current PU.
[0004] A more detailed illustration of the concept for deriving a reference block location inside the reference view is given in Figure 2. A disparity vector is first determined for the current PU.
The disparity vector is added to the location of the top-left sample of the current PU, yielding the location of the top-left sample of the reference block. Then, similarly to motion compensation, the block of residual samples in the reference view located at the derived reference location is subtracted from the current residual, and only the resulting difference signal is transform coded. If the disparity vector points to a sub-sample location, the residual prediction signal is obtained by interpolating the residual samples of the reference view using a bi-linear filter.
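The location derivation and the bi-linear fallback above can be sketched as follows. The quarter-sample disparity precision, array layout, and function names here are illustrative assumptions for the sketch, not details taken from the HTM implementation.

```python
def derive_reference_location(pu_x, pu_y, dv_x, dv_y):
    # Add the disparity vector (assumed quarter-sample units) to the
    # top-left sample of the current PU, yielding a quarter-sample location.
    return pu_x * 4 + dv_x, pu_y * 4 + dv_y

def bilinear_residual(res, x_q, y_q):
    # Bi-linear interpolation of reference-view residual samples at a
    # sub-sample (quarter-sample) location, with edge clamping.
    x0, y0 = x_q // 4, y_q // 4
    fx, fy = (x_q % 4) / 4.0, (y_q % 4) / 4.0
    def s(x, y):  # clamped sample access
        return res[min(max(y, 0), len(res) - 1)][min(max(x, 0), len(res[0]) - 1)]
    return ((1 - fx) * (1 - fy) * s(x0, y0) + fx * (1 - fy) * s(x0 + 1, y0) +
            (1 - fx) * fy * s(x0, y0 + 1) + fx * fy * s(x0 + 1, y0 + 1))
```

At an integer location (fx = fy = 0) the interpolation degenerates to a single sample fetch, which is why rounding away the sub-sample fraction removes the filter entirely.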
[0005] The usage of inter-view residual prediction can be adaptively controlled at the prediction unit (PU) level. An IVRP on/off control flag is signaled as part of the coding unit (CU) syntax when all the following conditions are true: 1. The current CU is of texture (not depth) in a dependent view in a non-I slice. 2. The current CU uses at least one motion-compensated prediction.
3. At least one transform unit (TU) covered or partially covered by the reference block in the reference view is associated with a non-intra coded CU and contains a non-zero coded block flag (CBF).
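The three signaling conditions can be collected into a single predicate. The field names on the CU record below are hypothetical, chosen only to make the sketch self-contained.

```python
def signal_ivrp_flag(cu):
    # The IVRP on/off flag appears in the CU syntax only when:
    # (1) the CU is texture in a dependent view of a non-I slice,
    # (2) the CU uses at least one motion-compensated prediction, and
    # (3) some TU under the reference block is non-intra with non-zero CBF.
    return (cu["is_texture"] and cu["in_dependent_view"]
            and not cu["in_i_slice"] and cu["uses_mcp"]
            and any(tu["non_intra"] and tu["cbf_nonzero"]
                    for tu in cu["reference_tus"]))
```

Condition (3) is the cross-view dependency at the root of the parsing-robustness drawback: the predicate cannot be evaluated without correctly decoded reference-view data.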
[0006] If the IVRP on/off control flag is signaled as 0 or not signaled for a CU, IVRP is off for the CU, i.e., the residual of the CU is conventionally coded using HEVC transform coding. Otherwise, if the IVRP on/off flag is signaled as 1 for a CU, each PU in it determines whether to use IVRP according to the reference picture type as follows:
1. If the current PU uses only motion-compensated prediction, then IVRP is on.
2. If the current PU uses only disparity-compensated prediction, then IVRP is off.
3. If the current PU uses bi-directional prediction, with one motion-compensated prediction and one disparity-compensated prediction, then IVRP is half on. Half on means the reference residual used in IVRP is multiplied by ½.
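The per-PU decision reduces to a weight applied to the reference residual: 1 for on, ½ for half on, 0 for off. A minimal sketch, with the two booleans as assumed abstractions of the PU's prediction directions:

```python
def ivrp_weight(uses_mcp, uses_dcp):
    # Bi-prediction mixing one MCP hypothesis and one DCP hypothesis
    # gets the "half on" weight of 1/2; pure MCP is fully on, pure DCP off.
    if uses_mcp and uses_dcp:
        return 0.5
    return 1.0 if uses_mcp else 0.0
```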
[0007] As presented above, there are some drawbacks in the IVRP of the current HEVC-based 3D video test model version 4.0 (HTM4.0):
1. The parsing of the IVRP on/off control flag depends on information (prediction modes and CBFs) from the reference view. Since the reference view and the current view are not coded in one slice, if this information in the reference view is not decoded correctly, the successive CUs in the entire slice cannot be parsed correctly, which is clearly not desirable.
2. The parsing of the IVRP on/off control flag requires checking the prediction type
(motion-compensated prediction or disparity-compensated prediction) of the current CU. If the CU is coded in merge/skip mode, the merging candidate list needs to be constructed before parsing the IVRP control flag, which could impact the parsing throughput and is not desirable at the parsing stage.
3. If the disparity vector points to a sub-sample location, the residual prediction signal is obtained by interpolating the residual samples of the reference view using a bi-linear filter, which increases the complexity of both the encoder and the decoder.
BRIEF SUMMARY OF THE INVENTION
In light of the previously described problems, an improved IVRP method is proposed. In the proposed method, no IVRP on/off control flag is signaled at the CU level; instead, IVRP for a PU is on when all of the following conditions are true:
1. The current PU is of texture (not depth) in a dependent view in a non-I slice.
2. The current PU uses the merge/skip mode with 2Nx2N partition, and the inter-view merging candidate in the temporal direction is used (i.e., the merging candidate index is equal to 0 and its motion parameters are in the temporal direction).
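Since no flag is signaled, a decoder can evaluate the proposed rule directly from already-parsed mode information. A sketch of the check, again with hypothetical field names:

```python
def ivrp_enabled(pu):
    # IVRP is implicitly on iff the PU is texture in a dependent view of a
    # non-I slice, uses merge/skip with the 2Nx2N partition, and selects the
    # inter-view merging candidate in the temporal direction (merge index 0
    # with temporal motion parameters).
    return (pu["is_texture"] and pu["in_dependent_view"]
            and not pu["in_i_slice"]
            and pu["mode"] in ("merge", "skip")
            and pu["partition"] == "2Nx2N"
            and pu["merge_idx"] == 0 and pu["mp_temporal"])
```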
[0008] It is also proposed that, if the disparity vector of the current PU points to a sub-sample location, the residual prediction signal is obtained by rounding the sub-sample location to the nearest integer location of the reference view.
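The rounding replacement for the bi-linear filter is a one-line operation per coordinate. Quarter-sample disparity precision is assumed here for illustration; the precision itself is not restated in the text.

```python
def round_to_integer_location(x_q, y_q):
    # Round a quarter-sample location to the nearest integer sample
    # position, so the residual predictor becomes a direct sample fetch
    # instead of a four-tap bi-linear interpolation.
    return (x_q + 2) // 4, (y_q + 2) // 4
```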
[0009] Since no syntax element is signaled in the proposed method, there are no parsing error or parsing throughput issues of the kind listed as the first and second drawbacks of the IVRP in HTM4.0.
[0010] Moreover, in the proposed method, IVRP on or off is combined with the mode information of the PU; therefore, in the mode decision process, the encoder no longer needs to test both the IVRP-on and IVRP-off cases for each mode, and the encoding complexity is reduced.
[0011] Other aspects and features of the invention will become apparent to those with ordinary skill in the art upon review of the following descriptions of specific embodiments.
BRIEF DESCRIPTION OF DRAWINGS
The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
[0012] Fig. 1 is a diagram illustrating the basic concept for inter-view residual prediction;
[0013] Fig. 2 is a diagram illustrating the derivation of the location of reference residual block.
DETAILED DESCRIPTION OF THE INVENTION
[0014] The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
[0015] IVRP exploits the correlation of the residual signal between the reference view and the current view. Disregarding loop filtering, disparity-compensated prediction, and intra prediction, the residual signal can be seen as the reconstruction signal minus the motion-compensated prediction (MCP). If the MCP of the current PU and the MCP of the reference block in the reference view have high correlation, then the residual signals of the current PU and the reference block will have high correlation and IVRP will be efficient, and vice versa, assuming that the reconstructions of the current PU and the reference block have high correlation.
[0016] The MCP of the current PU and that of the reference block can be expected to have high correlation when the inter-view merging candidate in the temporal direction is selected for the current PU, since in that case the motion parameters (MP) of the reference block are directly used for the current PU. The inter-view merging candidate has two cases. In the first case, the motion parameters (in the temporal direction) of the reference block in the reference view are directly used for the current PU; if those motion parameters are not available, the disparity vector (in the inter-view direction) is used instead, which is the second case. Hence, the inter-view merging candidate in the temporal direction here refers to the first case.
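The two cases of the inter-view merging candidate amount to a fallback rule, and only the first case (temporal motion parameters reused from the reference block) triggers IVRP under the proposed scheme. A sketch with illustrative names:

```python
def interview_merge_candidate(ref_mp_temporal, disparity_vector):
    # Case 1: temporal motion parameters of the reference block are
    # available and reused directly -- the only case that enables IVRP.
    if ref_mp_temporal is not None:
        return ("temporal", ref_mp_temporal)
    # Case 2: fall back to the disparity vector (inter-view direction).
    return ("inter_view", disparity_vector)
```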
[0017] In order to overcome the previously stated parsing error and parsing throughput problems while maintaining the R-D performance, we remove the IVRP syntax at the CU level and always use IVRP for a PU that fulfills the aforementioned inter-view merging candidate condition. For further simplification without coding performance loss, we additionally restrict IVRP to the 2Nx2N partition, which is the most commonly used partition. Therefore, it is proposed that IVRP is on for a PU when all of the following conditions are true:
1. The current PU is of texture (not depth) in a dependent view in a non-I slice.
2. The current PU uses the merge/skip mode with 2Nx2N partition, and the inter-view merging candidate in the temporal direction is used (i.e., the merging candidate index is equal to 0 and the motion parameters are in the temporal direction).
[0018] For implementation, the condition that the inter-view merging candidate in the temporal direction is used can be checked as the merging candidate index being equal to 0 and the motion parameters being in the temporal direction.
[0019] The second condition can also include the AMVP (inter) mode as follows: the current PU uses the merge/skip mode or the AMVP (inter) mode with 2Nx2N partition, and the inter-view merging candidate in the temporal direction or the inter-view AMVP candidate in the temporal direction is selected.
[0020] It is also proposed that, if the disparity vector of the current PU points to a sub-sample location, the residual prediction signal is obtained by rounding the sub-sample location to the nearest integer location of the reference view, avoiding the interpolation procedure.
[0021] Since no syntax element is signaled in the proposed method, there are no parsing error or parsing throughput problems of the kind listed as the first and second drawbacks of the IVRP in HTM4.0.
[0022] Moreover, in the proposed method, IVRP on or off is combined with the mode information of the PU; therefore, in the mode decision process, the encoder no longer needs to try both the IVRP-on and IVRP-off cases for each mode, and the encoding complexity is reduced.
[0023] The inter-view residual prediction methods described above can be used in a video encoder as well as in a video decoder. Embodiments of the disparity vector derivation methods according to the present invention as described above may be implemented in various hardware, software code, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and in different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles, and languages of software code, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.
[0024] The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims

1. A method of inter-view residual prediction for multi-view video coding or 3D video coding, wherein inter-view residual prediction (IVRP) is enabled according to a predetermined condition.
2. The method as claimed in claim 1, wherein the IVRP is enabled when a current prediction unit (PU) is texture of a non-I slice in dependent view, and the IVRP is disabled otherwise.
3. The method as claimed in claim 1, wherein an IVRP on/off control flag is signaled for a current PU when the current PU is texture of a non-I slice in dependent view, and the IVRP on/off control flag is not signaled otherwise.
4. The method as claimed in claim 1, wherein the predetermined condition comprises a current PU uses 2Nx2N partition in merge/skip mode and an inter-view merging candidate in a temporal direction is used.
5. The method as claimed in claim 1, wherein the predetermined condition comprises a current PU uses merge/skip mode and an inter-view merging candidate in a temporal direction is used.
6. The method as claimed in claim 1, wherein the predetermined condition comprises a current PU has 2Nx2N partition.
7. The method as claimed in claim 1, wherein the predetermined condition comprises a current PU uses same motion parameters as a reference block in a reference view.
8. The method as claimed in claim 1, wherein the predetermined condition comprises a current PU uses merge/skip mode and a merging candidate index equal to 0 and only motion-compensated prediction is used.
9. The method as claimed in claim 1, wherein the predetermined condition comprises a current PU uses merge/skip mode and motion parameters are in a temporal direction.
10. The method as claimed in claim 1, wherein the predetermined condition comprises a current PU uses Advanced Motion Vector Prediction (AMVP) mode with 2Nx2N partition and an inter-view candidate in a temporal direction is used.
11. The method as claimed in claim 1, wherein the predetermined condition comprises a current PU uses AMVP mode and an inter-view candidate in a temporal direction is used.
12. The method as claimed in claim 1, wherein the predetermined condition comprises a current PU uses the AMVP mode and is temporal prediction.
13. The method as claimed in claim 1, wherein, if a disparity vector of a current PU points to a sub-sample location, a residual prediction signal is obtained by rounding the sub-sample location to the nearest integer location of a reference view.
PCT/CN2012/081924 2012-09-25 2012-09-25 Methods for inter-view residual prediction WO2014047781A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/081924 WO2014047781A1 (en) 2012-09-25 2012-09-25 Methods for inter-view residual prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/081924 WO2014047781A1 (en) 2012-09-25 2012-09-25 Methods for inter-view residual prediction

Publications (1)

Publication Number Publication Date
WO2014047781A1 (en) 2014-04-03

Family

ID=50386773

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/081924 WO2014047781A1 (en) 2012-09-25 2012-09-25 Methods for inter-view residual prediction

Country Status (1)

Country Link
WO (1) WO2014047781A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101001383A (en) * 2006-01-12 2007-07-18 三星电子株式会社 Multilayer-based video encoding/decoding method and video encoder/decoder using smoothing prediction
WO2007081117A1 (en) * 2006-01-07 2007-07-19 Electronics And Telecommunications Research Institute Method and apparatus for inter-viewing reference in multi-viewpoint video coding
CN101669367A (en) * 2007-03-02 2010-03-10 Lg电子株式会社 A method and an apparatus for decoding/encoding a video signal
CN101816180A (en) * 2007-08-06 2010-08-25 汤姆森特许公司 Methods and apparatus for motion skip mode with multiple inter-view reference pictures



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12885661

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12885661

Country of ref document: EP

Kind code of ref document: A1