WO2015192372A1 - A simplified method for illumination compensation in multi-view and 3d video coding - Google Patents
- Publication number
- WO2015192372A1 (PCT/CN2014/080406)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- prediction
- lls
- calculate
- procedures
- flags
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
Abstract
It is proposed to further reduce the complexity of IC by simplifying the usage of LLS procedures to calculate the a parameter. The a parameter calculation may differ for different color components, CU sizes, prediction modes and prediction directions.
Description
A SIMPLIFIED METHOD FOR ILLUMINATION COMPENSATION IN
MULTI-VIEW AND 3D VIDEO CODING
FIELD OF INVENTION
The invention relates generally to Three-Dimensional (3D) video processing. In particular, the present invention relates to illumination compensation (IC).
BACKGROUND OF THE INVENTION
In the current 3D-HEVC, illumination compensation (IC) is adopted to compensate the difference of illumination intensity between views.
When IC is applied, the prediction value y is calculated as y = a*x + b, where x is a sample value in the reference block in the reference view. The two parameters a and b are derived (or 'trained') from the neighboring reconstructed samples of the current block and the reference block, as depicted in Fig. 1.
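As a minimal sketch, the compensation step itself is just the linear model above applied per sample (the clipping to the sample range is an assumed detail of typical codec behavior, not quoted from the text):

```python
# Hedged sketch: apply the IC model y = a*x + b to each reference sample.
# The clip to [0, 2^bit_depth - 1] is an assumption, not from the text.
def apply_ic(ref_block, a, b, bit_depth=8):
    max_val = (1 << bit_depth) - 1
    return [[min(max(a * x + b, 0), max_val) for x in row] for row in ref_block]

ref = [[100, 102], [98, 101]]
print(apply_ic(ref, a=1, b=5))  # [[105, 107], [103, 106]]
```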
In the training process as specified in 3D-HEVC, a neighboring sample x(i) of the reference block and a neighboring sample y(i) of the current block that possess the same relative position, as depicted in Fig. 2, are treated as a training pair. To reduce the number of training pairs, only one of each two adjacent samples is involved in the training set.
The number of training pairs is proportional to the block size. For example, there are 8 training pairs for an 8x8 block and 64 training pairs for a 64x64 block. Thus the training process is more complex for larger blocks.
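The pair count can be sketched as follows, assuming a square NxN block with N above and N left neighbors and the 2:1 subsampling described above:

```python
# For an NxN block there are 2N boundary neighbors (N above, N left);
# keeping only one of each two adjacent samples leaves N training pairs.
def num_training_pairs(n):
    return (n + n) // 2

print(num_training_pairs(8))   # 8, as in the 8x8 example
print(num_training_pairs(64))  # 64, as in the 64x64 example
```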
IC is applied to each component, such as Y (Luma), U (Cb), and V (Cr), separately. The training process is also executed separately. The parameters a and b are trained independently for each component.
The a parameter is derived with the LLS method, and b is then obtained as:
b = ∑y(i) − a×∑x(i)
where the summations are taken over the N training pairs, i = 0 to N−1.
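A floating-point sketch of the least-squares fit may clarify the multiplications involved; the normative 3D-HEVC derivation uses integer arithmetic and table lookups, and the closed form below (including the division by N in b) is the textbook version, offered as an assumption:

```python
# Textbook LLS fit of y ≈ a*x + b over N training pairs. Each of the
# sums of x*y and x*x costs one multiplication per pair, which is the
# complexity the text attributes to IC training.
def train_ic_params(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom if denom else 1.0
    b = (sy - a * sx) / n
    return a, b

# Neighbors related exactly by y = 2*x + 3 recover a = 2, b = 3.
print(train_ic_params([1, 2, 3, 4], [5, 7, 9, 11]))  # (2.0, 3.0)
```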
As depicted above, the complexity of IC is mainly due to the linear least square (LLS) method used to train a and b in both the encoder and the decoder, which requires multiple normal multiplication operations. However, in the current design, IC is utilized for both uni-directional and bi-directional prediction, and for both luma and chroma components. This increases the usage of normal multiplication operations.
SUMMARY OF THE INVENTION
In light of the previously described problems, methods are proposed to simplify illumination compensation (IC).
Other aspects and features of the invention will become apparent to those with ordinary skill in the art upon review of the following descriptions of specific embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
Fig. 1 is a diagram illustrating a general IC paradigm in the current 3D-HEVC;
Fig. 2 is a diagram illustrating training samples taken from the left and above neighboring samples of the reference block and from the left and above neighboring samples of the current block;
Fig. 3 is a diagram illustrating how it is checked whether linear least square (LLS) is utilized to calculate the a parameter in the proposed method.
DETAILED DESCRIPTION
The following description is of the best-contemplated mode of carrying out the invention.
This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
It is proposed to further reduce the complexity of illumination compensation (IC) by reducing the usage frequency of the linear least square (LLS) method through an additional check of whether LLS should be utilized, as shown in Fig. 3.
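Such a check can be sketched as follows; this is an illustrative combination of conditions from the embodiments below, and the component rule, size thresholds, and exact combination are assumptions rather than the normative 3D-HEVC logic:

```python
# Illustrative sketch only: decide whether the costly LLS training should
# run for a block coded in IC mode. The specific conditions mirror
# embodiments described in the text, but their combination is assumed.
def use_lls(component, width, height, bi_directional, m=64, l=64):
    if bi_directional:
        return False            # e.g. no LLS for bi-directional prediction
    if component in ("U", "V") and width < m and height < l:
        return False            # e.g. small chroma blocks skip LLS (a set to 1)
    return True

print(use_lls("Y", 8, 8, bi_directional=False))   # True
print(use_lls("U", 8, 8, bi_directional=False))   # False
```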
In a first embodiment, if the derived prediction direction is bi-directional prediction in IC mode, either it is changed to forward prediction while the other prediction information is left unchanged, or the IC procedure does not perform the LLS method to calculate the a and b parameters.
In a second embodiment, the encoder does not perform bi-directional prediction for IC mode, or does not perform the LLS method to calculate the a and b parameters.
In a third embodiment, the decoder never performs bi-directional prediction for IC mode, or never performs the LLS method to calculate the a and b parameters.
In a fourth embodiment, when the current block is coded in IC mode, the flags in the bitstream that identify the prediction direction restrict the prediction direction to forward or backward.
In a fifth embodiment, when the current block is coded in IC mode, not all color components utilize the LLS procedures to calculate values a and b.
In a sixth embodiment, when the current block is coded in IC mode, if the block size is smaller than MxL, some color components' a parameter is set to a rounded value of ∑y(i)/∑x(i).
In a seventh embodiment, when the current block is coded in IC mode, if the block size is smaller than MxL, some color components' a parameters are equal to division-translated-to-multiplication values of ∑y(i)/∑x(i), such as ∑y(i) × (((2<<16)/∑x(i) + (1<<8)) >> 16).
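The division-to-multiplication idea can be sketched generically as a precomputed scaled reciprocal followed by a multiply and shift; the scale factor and rounding offsets below are assumptions and may differ from the exact constants in the expression above:

```python
# Replace the division sum_y / sum_x by one multiplication with a
# precomputed reciprocal of sum_x, scaled by 2^shift, plus rounding.
def a_param_mul_shift(sum_y, sum_x, shift=16):
    inv = ((1 << shift) + sum_x // 2) // sum_x      # rounded scaled reciprocal
    return (sum_y * inv + (1 << (shift - 1))) >> shift

print(a_param_mul_shift(200, 100))  # 2
print(a_param_mul_shift(600, 200))  # 3
```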
In an eighth embodiment, when the current block is coded in IC mode, if the block size is smaller than MxL, some color components' a parameters are equal to 1, so b is equal to ∑y(i) − ∑x(i).
In a ninth embodiment, when the current block is coded in IC mode, if the block size is smaller than MxL, some color components' a parameters are equal to the corresponding values derived from some additional flags transmitted in sequence, slice, Coding Unit (CU) or Transform Unit (TU) level.
In a tenth embodiment, when the current block is coded in IC mode, if the block size is smaller than MxL, some color components do not utilize the IC procedure but use the normal prediction mode to complete the prediction.
In an eleventh embodiment, when the current block is coded in IC mode, if the block size is smaller than MxL, chroma components do not utilize the LLS method to calculate a but set a to 1.
In a twelfth embodiment, the M and L values for any instance of the above embodiments can be set larger than 64.
In other embodiments, any combination of the first to twelfth embodiments is included.
The methods described above can be used in a video encoder as well as in a video decoder. Embodiments of the methods according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program codes integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
Claims
1. A method to reduce the complexity of illumination compensation (IC), comprising: when IC is on, not utilizing linear least square (LLS) procedures to calculate the a or b parameters for at least one color component, for a predetermined set of coding unit (CU) sizes or patterns, a predetermined set of prediction directions, or a predetermined set of inter-prediction modes.
2. The method as claimed in claim 1, wherein when IC is on, bi-directional prediction modes do not perform the LLS procedures to calculate the a parameter.
3. The method as claimed in claim 1, wherein when IC is on, bi-directional prediction modes are never utilized.
4. The method as claimed in claim 1, wherein if one bi-directional prediction mode is utilized, IC mode is forbidden.
5. The method as claimed in claim 3, wherein when IC is on, flags in a video stream which specify prediction directions limit the prediction directions to forward and backward ones.
6. The method as claimed in claim 4, wherein when bi-directional prediction is on, there are no flags in a video stream which specify whether a current block is predicted using IC.
7. The method as claimed in claim 1, wherein the LLS procedures to calculate a and b comprise at least one of the following methods:
(1) setting a and b as instant values;
(2) only setting a as an instant value, while b is derived from x(i) and y(i);
(3) using division-translated-to-multiplication values of ∑y(i)/∑x(i), such as ∑y(i) × (((2<<16)/∑x(i) + (1<<8)) >> 16); or
(4) setting a or b equal to a value decoded from flags transmitted in a video stream.
8. The method as claimed in claim 7, wherein while setting a and b as instant values, a pair value of {a, b} is {1, 0}, which yields the same procedure as when IC is off.
9. The method as claimed in claim 7, wherein while only setting a as an instant value, the pair value of {a, b} is {1, ∑y(i)−∑x(i)}.
10. The method as claimed in claim 7, wherein while a or b is decoded from flags transmitted in the video stream, the flags are signaled at the sequence, picture, slice, coding unit (CU), prediction unit (PU) or transform unit (TU) level.
11. The method as claimed in claim 1, wherein when IC is on, chroma components do not utilize the LLS procedures to calculate the a or b parameters but set a as an instant value and b as ∑y(i)−∑x(i).
12. The method as claimed in claim 1, wherein when IC is on, if a current block size is smaller than MxL, luma or chroma components do not utilize the LLS procedures to calculate a.
13. The method as claimed in claim 1, wherein if a currently derived prediction direction is bi-directional prediction, either it is changed to a forward prediction direction while other prediction information is not changed, or an IC procedure is set that does not perform the LLS procedures to calculate the a and b parameters.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2014/080406 WO2015192372A1 (en) | 2014-06-20 | 2014-06-20 | A simplified method for illumination compensation in multi-view and 3d video coding |
JP2016571299A JP2017520994A (en) | 2014-06-20 | 2015-06-18 | Sub-PU syntax signaling and illumination compensation method for 3D and multi-view video coding |
PCT/CN2015/081753 WO2015192781A1 (en) | 2014-06-20 | 2015-06-18 | Method of sub-pu syntax signaling and illumination compensation for 3d and multi-view video coding |
CN201580001621.4A CN105519120B (en) | 2014-06-20 | 2015-06-18 | For the three-dimensional of video data or the compartment model coding method of multi-view video coding |
US14/905,705 US10218957B2 (en) | 2014-06-20 | 2015-06-18 | Method of sub-PU syntax signaling and illumination compensation for 3D and multi-view video coding |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2014/080406 WO2015192372A1 (en) | 2014-06-20 | 2014-06-20 | A simplified method for illumination compensation in multi-view and 3d video coding |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015192372A1 (en) | 2015-12-23 |
Family
ID=54934725
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/080406 WO2015192372A1 (en) | 2014-06-20 | 2014-06-20 | A simplified method for illumination compensation in multi-view and 3d video coding |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2015192372A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019006363A1 (en) * | 2017-06-30 | 2019-01-03 | Vid Scale, Inc. | Local illumination compensation using generalized bi-prediction |
WO2019147826A1 (en) * | 2018-01-25 | 2019-08-01 | Qualcomm Incorporated | Advanced motion vector prediction speedups for video coding |
WO2020118211A3 (en) * | 2018-12-08 | 2020-07-30 | Qualcomm Incorporated | Interaction of illumination compensation with inter-prediction |
CN111819857A (en) * | 2018-03-14 | 2020-10-23 | 联发科技股份有限公司 | Method and apparatus for optimizing partition structure for video encoding and decoding |
CN112703732A (en) * | 2018-09-19 | 2021-04-23 | 交互数字Vc控股公司 | Local illumination compensation for video encoding and decoding using stored parameters |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1622638A (en) * | 2004-12-27 | 2005-06-01 | 北京中星微电子有限公司 | Image brightness correcting method of video monitoring system |
CN101216941A (en) * | 2008-01-17 | 2008-07-09 | 上海交通大学 | Motion estimation method under violent illumination variation based on corner matching and optic flow method |
US20130163666A1 (en) * | 2010-09-03 | 2013-06-27 | Dolby Laboratories Licensing Corporation | Method and System for Illumination Compensation and Transition for Video Coding and Processing |
US20140139627A1 (en) * | 2012-11-20 | 2014-05-22 | Qualcomm Incorporated | Adaptive luminance compensation in three dimensional video coding |
-
2014
- 2014-06-20 WO PCT/CN2014/080406 patent/WO2015192372A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1622638A (en) * | 2004-12-27 | 2005-06-01 | 北京中星微电子有限公司 | Image brightness correcting method of video monitoring system |
CN101216941A (en) * | 2008-01-17 | 2008-07-09 | 上海交通大学 | Motion estimation method under violent illumination variation based on corner matching and optic flow method |
US20130163666A1 (en) * | 2010-09-03 | 2013-06-27 | Dolby Laboratories Licensing Corporation | Method and System for Illumination Compensation and Transition for Video Coding and Processing |
US20140139627A1 (en) * | 2012-11-20 | 2014-05-22 | Qualcomm Incorporated | Adaptive luminance compensation in three dimensional video coding |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019006363A1 (en) * | 2017-06-30 | 2019-01-03 | Vid Scale, Inc. | Local illumination compensation using generalized bi-prediction |
WO2019147826A1 (en) * | 2018-01-25 | 2019-08-01 | Qualcomm Incorporated | Advanced motion vector prediction speedups for video coding |
US10652571B2 (en) | 2018-01-25 | 2020-05-12 | Qualcomm Incorporated | Advanced motion vector prediction speedups for video coding |
CN111819857A (en) * | 2018-03-14 | 2020-10-23 | 联发科技股份有限公司 | Method and apparatus for optimizing partition structure for video encoding and decoding |
CN112703732A (en) * | 2018-09-19 | 2021-04-23 | 交互数字Vc控股公司 | Local illumination compensation for video encoding and decoding using stored parameters |
WO2020118211A3 (en) * | 2018-12-08 | 2020-07-30 | Qualcomm Incorporated | Interaction of illumination compensation with inter-prediction |
CN113170123A (en) * | 2018-12-08 | 2021-07-23 | 高通股份有限公司 | Interaction of illumination compensation and inter-frame prediction |
US11290743B2 (en) | 2018-12-08 | 2022-03-29 | Qualcomm Incorporated | Interaction of illumination compensation with inter-prediction |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11234002B2 (en) | Method and apparatus for encoding and decoding a texture block using depth based block partitioning | |
WO2016115981A1 (en) | Method of video coding for chroma components | |
US10218957B2 (en) | Method of sub-PU syntax signaling and illumination compensation for 3D and multi-view video coding | |
WO2016008157A1 (en) | Methods for motion compensation using high order motion model | |
WO2016054979A1 (en) | Method of 3d or multi-view video coding including view synthesis prediction | |
US9992494B2 (en) | Method of depth based block partitioning | |
EP2974310B1 (en) | Method for sub-range based coding a depth lookup table | |
WO2015192372A1 (en) | A simplified method for illumination compensation in multi-view and 3d video coding | |
WO2016115708A1 (en) | Methods for chroma component coding with separate intra prediction mode | |
US10499075B2 (en) | Method for coding a depth lookup table | |
WO2013102299A1 (en) | Residue quad tree depth for chroma components | |
US10244258B2 (en) | Method of segmental prediction for depth and texture data in 3D and multi-view coding systems | |
WO2016115736A1 (en) | Additional intra prediction modes using cross-chroma-component prediction | |
WO2016065538A1 (en) | Guided cross-component prediction | |
US11006147B2 (en) | Devices and methods for 3D video coding | |
WO2015131404A1 (en) | Methods for depth map coding | |
WO2017035833A1 (en) | Neighboring-derived prediction offset (npo) | |
WO2013159326A1 (en) | Inter-view motion prediction in 3d video coding | |
WO2016044974A1 (en) | Palette table signalling | |
WO2015100732A1 (en) | A padding method for intra block copying | |
WO2015139201A1 (en) | Simplified illumination compensation in multi-view and 3d video coding | |
WO2015103747A1 (en) | Motion parameter hole filling | |
WO2016165122A1 (en) | Inter prediction offset | |
WO2016070363A1 (en) | Merge with inter prediction offset | |
WO2015006900A1 (en) | A disparity derived depth coding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14895017 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 14895017 Country of ref document: EP Kind code of ref document: A1 |