WO2015192371A1 - Texture based depth coding method - Google Patents

Texture based depth coding method

Info

Publication number
WO2015192371A1
WO2015192371A1 (PCT/CN2014/080404)
Authority
WO
WIPO (PCT)
Prior art keywords
depth
candidate
sad
block
calibration process
Prior art date
Application number
PCT/CN2014/080404
Other languages
French (fr)
Inventor
Kai Zhang
Jicheng An
Xianguo Zhang
Han HUANG
Original Assignee
Mediatek Singapore Pte. Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mediatek Singapore Pte. Ltd. filed Critical Mediatek Singapore Pte. Ltd.
Priority to PCT/CN2014/080404 priority Critical patent/WO2015192371A1/en
Publication of WO2015192371A1 publication Critical patent/WO2015192371A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Abstract

This contribution presents a texture based depth coding method in 3D-HEVC. Based on at least two texture pictures from different views, a calibration process can be applied on a prediction depth block or a reconstruction depth block. The calibrated pixels are then output as a new prediction block or reconstruction block for the following coding process.

Description

TEXTURE BASED DEPTH CODING METHOD
FIELD OF INVENTION
The invention relates generally to Three-Dimensional (3D) video processing. In particular, the presented invention relates to a depth coding method.
BACKGROUND OF THE INVENTION
In 3D video coding schemes such as 3D-HEVC, texture pictures in different views and their corresponding depth pictures are coded together as depicted in Fig. 1. Coding tools are developed to explore the redundancy between different views, or between depth and texture.
A fundamental redundancy exists between depth and texture pictures. With texture pictures at the same time from two different views, disparity information is attainable by analyzing the two texture pictures. By converting the disparity vectors for each pixel to depth values, a depth picture can be generated. In fact, this approach is widely used to get the original depth picture when no distance measurement equipment is available. Obviously, redundancy occurs when a depth picture on View 1 is signaled after two texture pictures on View 0 and View 1 at the same time have already been signaled, since the depth information is already implied by the two texture pictures.
Disparity derived depth (DDD) coding is adopted in 3D-HEVC to explore this fundamental redundancy partially. A disparity vector (DVx, 0) can be derived from its corresponding depth value (dep) by a linear relationship as
DVx = w · dep + b,   (1)
where w and b are two camera parameters. In 3D-HEVC, (1) is implemented in an integer form, thus w and b can be transmitted from the encoder to the decoder as integers.
It can be seen from (1) that the conversion between a depth value dep and DVx is reversible. Thus, a depth value can also be derived from its corresponding disparity vector (DVx, DVy) as
dep = (1/w) · DVx - b/w.   (2)
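As an illustration only, the two conversions could be written in integer arithmetic as sketched below; the fixed-point precision SHIFT and the rounding used here are assumptions, not the exact integer form specified in 3D-HEVC.

```cpp
// A minimal sketch of the linear depth<->disparity conversion in (1) and (2).
// w, b and SHIFT are illustrative integer stand-ins for the camera parameters
// and fixed-point precision actually transmitted in the bitstream.
static const int SHIFT = 8;                              // assumed fixed-point precision

// (1): DVx = w*dep + b, evaluated in fixed point and rounded.
inline int depthToDisparity(int dep, int w, int b) {
    return (w * dep + b + (1 << (SHIFT - 1))) >> SHIFT;
}

// (2): dep = (1/w)*DVx - b/w, the inverse of the mapping above.
inline int disparityToDepth(int dvx, int w, int b) {
    return ((dvx << SHIFT) - b + (w >> 1)) / w;          // rounded integer division
}
```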
When the collocated texture block of the current depth block is predicted by disparity- compensated prediction (DCP), a DDD candidate will be inserted into the merging candidate list right after the texture candidate. In a DDD candidate, all prediction samples in the current block are set as dep calculated by (2), which is implemented in an integer form. Fig. 2 shows the procedure to derive a depth value from its corresponding disparity vector. It should be noted that the DDD candidate is invalid on the base view since DCP is applied only on dependent views.
Although DDD reduces the fundamental redundancy to some extent, its utilization is restricted since it can only be applied when the collocated texture block of the current depth block is predicted by DCP. The fundamental redundancy has not been explored sufficiently by DDD.
SUMMARY OF THE INVENTION
In light of the previously described problems, several texture based depth coding methods are proposed to explore the fundamental redundancy more fully.
Other aspects and features of the invention will become apparent to those with ordinary skill in the art upon review of the following descriptions of specific embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
Fig. 1 is a diagram illustrating an exemplary 3D video coding structure with three views;
Fig. 2 is a diagram illustrating a depth value derived from its corresponding disparity vector;
Fig. 3 is a diagram illustrating an exemplary depth compensation calibration architecture at decoder;
Fig. 4 is a diagram illustrating an exemplary depth reconstruction calibration architecture at decoder;
Fig. 5 is a diagram illustrating an exemplary calibration process architecture;
Fig. 6 is a diagram illustrating an exemplary selection of candidate depth values from a 4x4 region. The depth pixel values at the black positions are chosen.
DETAILED DESCRIPTION
The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
Several texture based depth coding methods are proposed to explore the fundamental redundancy more fully. In one embodiment, depth compensation calibration (DCC) is proposed. In DCC, the prediction of a depth block is calibrated before it is used to get the residues at the encoder or the reconstruction at the decoder. Fig. 3 demonstrates an exemplary DCC architecture at the decoder. The calibration process depends on the texture pictures from at least two different views.
In another embodiment, depth reconstruction calibration (DRC) is proposed. In DRC, the reconstruction of a depth block is calibrated before being output or being referred to by following blocks or pictures. Fig. 4 demonstrates an exemplary DRC architecture at the decoder. The calibration process depends on the texture pictures from at least two different views.
The calibration process is designed to modify a depth block depending on the texture pictures from at least two different views. The input to the calibration process can be prediction pixels in a depth block in DCC; or the input can be reconstruction pixels in a depth block in DRC. The output of the calibration process can be modified prediction pixels in DCC; or the output can be modified reconstruction pixels in DRC.
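The following schematic C++ sketch merely shows where the calibration step sits in the two architectures; the Block type and the calibrate hook (which would use the texture pictures of at least two views) are illustrative stand-ins rather than the normative decoding process.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Schematic placement of the calibration step at the decoder: DCC calibrates the
// prediction (Fig. 3), DRC calibrates the reconstruction (Fig. 4).
using Block = std::vector<int>;

Block decodeDepthBlock(const Block& prediction, const Block& residual,
                       const std::function<Block(const Block&)>& calibrate,
                       bool useDCC, bool useDRC) {
    // DCC: calibrate the prediction before it is combined with the residues.
    Block pred = useDCC ? calibrate(prediction) : prediction;

    Block recon(pred.size());
    for (std::size_t i = 0; i < pred.size(); ++i)
        recon[i] = pred[i] + residual[i];                // reconstruction = prediction + residue

    // DRC: calibrate the reconstruction before it is output or referenced later.
    return useDRC ? calibrate(recon) : recon;
}
```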
In one embodiment of the calibration process, the input depth block is divided into several regions. The regions cover the whole input depth block and do not overlap with each other. The regions may possess the same size and shape. For a region RegionA in the input depth block, a depth value T is determined by analyzing texture pictures from at least two different views. All pixels in RegionA are set equal to T.
Fig. 5 demonstrates an example of how to determine the value T. T is chosen from several candidate depth values, denoted as Cand_0, Cand_1, ..., Cand_K-1. For each candidate, noted as Cand_k, a corresponding disparity vector noted as DV_k can be obtained by converting the depth value into a disparity vector. For example, equation (1) can be applied for the conversion. The corresponding region of RegionA in the texture picture at the same time and in the same view as the current depth picture is denoted as RegionB. The terminology 'corresponding' means that RegionA and RegionB possess the same shape and size and are at the same relative position in their pictures. Another region, noted as RegionC, covers RegionB totally; in other words, RegionC contains RegionB or RegionC is equal to RegionB. With DV_k, a disparity aligned region for RegionC can be located in the texture picture at the same time in the base view. The sum of absolute differences (SAD) between RegionC and the disparity aligned region with DV_k is calculated as SAD_k. Suppose the S-th depth candidate generates the minimum value of SAD, or in a formula way S = argmin_k (SAD_k). Then the depth value Cand_S is chosen as the value T.
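A minimal C++ sketch of this per-region procedure (Fig. 5) is given below. It assumes 8-bit samples, an integer-pel disparity, and RegionC equal to RegionB; the Plane structure, the function names and the illustrative integer form of (1) are not from the original text, and the sub-pel interpolation, chroma SAD and early-termination variants of the later embodiments are omitted.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <limits>
#include <vector>

// Plane: a simple row-major 8-bit picture used only for this sketch.
struct Plane {
    int width = 0, height = 0;
    std::vector<uint8_t> pix;
    int at(int x, int y) const { return pix[y * width + x]; }
};

// Illustrative integer form of equation (1); see the conversion sketch above.
static int depthToDisparity(int dep, int w, int b) {
    return (w * dep + b + 128) >> 8;
}

// SAD between RegionC (here taken equal to RegionB) at picture position (x0, y0) in the
// current-view texture and the region shifted horizontally by dvx in the base-view texture.
static int regionSAD(const Plane& texCur, const Plane& texBase,
                     int x0, int y0, int size, int dvx) {
    int sad = 0;
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x) {
            int xb = std::min(std::max(x0 + x - dvx, 0), texBase.width - 1);  // clip to picture
            sad += std::abs(texCur.at(x0 + x, y0 + y) - texBase.at(xb, y0 + y));
        }
    return sad;
}

// Determine T for one MxM RegionA located at picture position (x0, y0) and overwrite
// every pixel of the region with T, as in Fig. 5.
void calibrateRegionA(uint8_t* depth, int depthStride, int x0, int y0, int size,
                      const std::vector<int>& candidates,      // Cand_0 .. Cand_K-1
                      const Plane& texCur, const Plane& texBase,
                      int camW, int camB) {
    int bestSad = std::numeric_limits<int>::max();
    int T = candidates.empty() ? 0 : candidates.front();
    for (int cand : candidates) {
        int dvx = depthToDisparity(cand, camW, camB);           // Cand_k -> DV_k
        int sad = regionSAD(texCur, texBase, x0, y0, size, dvx);
        if (sad < bestSad) { bestSad = sad; T = cand; }         // S = argmin_k SAD_k
    }
    for (int y = 0; y < size; ++y)                              // all pixels of RegionA := T
        for (int x = 0; x < size; ++x)
            depth[(y0 + y) * depthStride + (x0 + x)] = static_cast<uint8_t>(T);
}
```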
In another embodiment, RegionA is an MxN block, where M and N are positive integers. In still another embodiment, RegionA is an MxM block, where M is a positive integer. For example, M is 4 or 8.
In still another embodiment, RegionC is an MxN block, where M and N are positive integers.
In still another embodiment, RegionC is an MxM block, where M is a positive integer. For example, M is 4 or 8.
In still another embodiment, RegionA is a single pixel.
In still another embodiment, RegionA is the whole input block.
In still another embodiment, RegionC is larger than RegionB.
In still another embodiment, RegionC is the same region as RegionB.
In still another embodiment, several depth values in RegionA are chosen as the candidate depth values. Fig. 6 demonstrates an example. RegionA is a 4x4 block. The depth pixel values at the black positions are chosen as candidate depth values.
In still another embodiment, if V is a candidate depth value, then V+offset and V-offset can also be chosen as candidate depth values, where offset is a positive integer.
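A small sketch of how candidates might be gathered from a 4x4 RegionA, combining the two embodiments above, is shown below; the sampled positions are assumptions, since the exact black positions of Fig. 6 are not specified in the text.

```cpp
#include <cstdint>
#include <vector>

// Gather candidate depth values from a 4x4 RegionA: a few fixed sample positions
// (assumed picks, standing in for the black positions of Fig. 6), each optionally
// expanded by V+offset and V-offset.
std::vector<int> gatherCandidates(const uint8_t* regionA, int stride, int offset) {
    static const int pos[][2] = { {0, 0}, {3, 0}, {0, 3}, {3, 3}, {1, 1} };  // assumed positions
    std::vector<int> cands;
    for (const auto& p : pos) {
        int v = regionA[p[1] * stride + p[0]];
        cands.push_back(v);
        if (offset > 0) {                            // optional V+offset / V-offset candidates
            cands.push_back(v + offset);
            cands.push_back(v - offset);
        }
    }
    return cands;
}
```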
In still another embodiment, the number of different candidate depth values used to calculate the SAD cannot exceed a maximum value M. If M different candidates have been checked, the determining process for the value T is stopped. The candidate value currently producing the minimum SAD will be chosen to be T.
In still another embodiment, the determining process for the value T is stopped if a candidate depth value produces a SAD lower than a threshold. The candidate value currently producing the minimum SAD will be chosen to be T.
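Both early-termination variants could be folded into the candidate loop as in the following sketch; the function name and the sadOfCandidate hook are illustrative.

```cpp
#include <functional>
#include <limits>
#include <vector>

// Stop after at most maxCands candidates have been checked, or as soon as one produces a
// SAD below sadThreshold; the candidate with the minimum SAD seen so far becomes T.
// sadOfCandidate is an assumed hook performing the conversion and SAD of the sketch above.
int chooseTWithEarlyExit(const std::vector<int>& candidates, int maxCands, int sadThreshold,
                         const std::function<int(int)>& sadOfCandidate) {
    int bestSad = std::numeric_limits<int>::max();
    int T = candidates.empty() ? 0 : candidates.front();
    int checked = 0;
    for (int cand : candidates) {
        if (checked++ == maxCands) break;            // at most M candidates are checked
        int sad = sadOfCandidate(cand);
        if (sad < bestSad) { bestSad = sad; T = cand; }
        if (sad < sadThreshold) break;               // good enough, stop the search early
    }
    return T;
}
```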
In still another embodiment, the disparity vector converted from a candidate depth value can hold an integer-pixel precision or a sub-pixel precision. If it holds a sub-pixel precision, interpolation filtering is applied to get the disparity aligned region for RegionC. For example, the interpolation filter for luma or chroma used in the motion compensation (MC) process of HEVC can be applied. In another example, a bi-linear interpolation filter can be applied.
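A possible bi-linear fetch at sub-pel precision is sketched below; the quarter-pel precision, the clipping behaviour and the function name are assumptions, and the HEVC MC interpolation filters could be used instead.

```cpp
#include <algorithm>
#include <cstdint>

// Fetch one base-view texture sample at a quarter-pel horizontal position from one row
// of the picture, using a horizontal bi-linear filter with rounding.
static int sampleQuarterPel(const uint8_t* row, int width, int xQuarterPel) {
    int xInt = xQuarterPel >> 2;                     // integer-pel part
    int frac = xQuarterPel & 3;                      // quarter-pel fractional part
    int x0 = std::min(std::max(xInt, 0), width - 1); // clip to the picture
    int x1 = std::min(x0 + 1, width - 1);
    return (row[x0] * (4 - frac) + row[x1] * frac + 2) >> 2;
}
```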
The SAD calculation process can be applied on the luma component; it can also be applied on the chroma components.
In still another embodiment, the SAD on the luma (Y) and two chroma (Cb and Cr) components for the k-th candidate are SAD_Y_k, SAD_Cb_k and SAD_Cr_k respectively; then the final SAD used in the comparison should be calculated as SAD_k = w_Y * SAD_Y_k + w_Cb * SAD_Cb_k + w_Cr * SAD_Cr_k, where w_Y, w_Cb and w_Cr can be any integer or real numbers. For example, w_Y = 1 and w_Cb = w_Cr = 4.
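The weighted combination can be written compactly as below, using the example weights from the text as defaults.

```cpp
// Weighted combination of luma and chroma SADs for candidate k, with the example
// weights w_Y = 1 and w_Cb = w_Cr = 4 from the text as default arguments.
struct ComponentSAD { int y, cb, cr; };

inline int weightedSAD(const ComponentSAD& s, int wY = 1, int wCb = 4, int wCr = 4) {
    // SAD_k = w_Y*SAD_Y_k + w_Cb*SAD_Cb_k + w_Cr*SAD_Cr_k
    return wY * s.y + wCb * s.cb + wCr * s.cr;
}
```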
In still another embodiment, the calibration process can be enabled or disabled adaptively. The encoder can send the information of whether to use the calibration process to the decoder explicitly, or the decoder can derive whether to use the calibration process in the same way as the encoder does.
In still another embodiment, the calibration process can be applied on coding tree unit (CTU), coding unit (CU), prediction unit (PU) or transform unit (TU).
In still another embodiment, the encoder can send the information of whether to use the calibration process to the decoder in the video parameter set (VPS), sequence parameter set (SPS), picture parameter set (PPS), slice header (SH), or at the CTU, CU, PU or TU level.
In still another embodiment, the calibration process can only be applied to CUs with particular sizes. For example, it can only be applied to a CU with a size larger than 8x8. In another example, it can only be applied to a CU with a size smaller than 64x64.
In still another embodiment, the calibration process can only be applied to CUs with particular PU partitions. For example, it can only be applied to a CU with the 2Nx2N partition.
In still another embodiment, the calibration process can be applied for a PU coded in merge mode.
In still another embodiment, the calibration process can be applied for a PU coded in merge mode when a specific merging candidate is selected. For example, the specific merging candidate is the one right after the texture candidate in the merging candidate list.
In still another embodiment, the specific merging candidate shares the same motion information as the texture candidate, and the prediction block input to the calibration process is obtained with this motion information.
In still another embodiment, the specific merging candidate shares the same motion information as the first candidate in the HEVC merging candidate list if the texture candidate is unavailable, and the prediction block input to the calibration process is obtained with this motion information.
The proposed method described above can be used in a video encoder as well as in a video decoder. Embodiments of methods according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program codes integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and in different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software code, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims

1. A method of coding depth component, comprising,
receiving an input of a depth block;
calibrating pixel values of the depth block based on at least two texture pictures from different views; and
outputting the calibrated depth block.
2. The method as claimed in claim 1, wherein a prediction of a depth block is calibrated before it is used to get residues at encoder or get reconstruction at decoder.
3. The method as claimed in claim 1, wherein reconstruction of a depth block is calibrated before being output or being referred to by following blocks or pictures.
4. The method as claimed in claim 1, wherein a depth value generated from a disparity vector is used in prediction or as a predictor in the depth coding.
5. The method as claimed in claim 1, wherein regions cover the whole input depth block and do not overlap with each other, the regions possess the same size and shape, and for a region RegionA in the input depth block, a depth value T is determined by analyzing texture pictures from at least two different views, and all pixels in RegionA are set equal to T.
6. The method as claimed in claim 5, wherein T is chosen from several candidate depth values, denoted as Cand_0, Cand_1, ..., Cand_K-1; for each candidate, noted as Cand_k, a corresponding disparity vector noted as DV_k is obtained by converting the depth value into a disparity vector; another region noted as RegionC totally covers RegionB, which is the corresponding region of RegionA in the texture picture; with DV_k, a disparity aligned region for RegionC is located in the texture picture at the same time in the base view; a sum of absolute differences (SAD) between RegionC and the disparity aligned region with DV_k is calculated as SAD_k; and wherein, supposing the S-th depth candidate generates the minimum value of SAD, or in a formula way S = argmin_k (SAD_k), the depth value Cand_S is chosen as the value T.
7. The method as claimed in claim 5, wherein RegionA is an MxN block, where M and N are positive integers; particularly, RegionA is an MxM block, where M is a positive integer, and RegionA is a single pixel if M is equal to 1.
8. The method as claimed in claim 6, wherein RegionC is an MxN block, where M and N are positive integers; particularly, RegionC is an MxM block, where M is a positive integer, and RegionC is a single pixel if M is equal to 1.
9. The method as claimed in claim 6, wherein several depth values in RegionA are chosen as the candidate depth values; the pixel values at some specific positions can be chosen as the candidate depth values, the specific positions including but not limited to the center, left-above, right-bottom, right-above and left-bottom positions; for example, RegionA is a 4x4 block and the depth pixel values at the black positions are chosen as candidate depth values.
10. The method as claimed in claim 6, wherein if V is a candidate depth value, then V+offset and V-offset can be chosen as a candidate depth value, where offset is a positive integer.
11. The method as claimed in claim 6, wherein the number of different candidate depth values used to calculate the SAD never exceeds a maximum value M; if M different candidates have been checked, the determining process for the value T is stopped, and the candidate value currently producing the minimum SAD is chosen to be T.
12. The method as claimed in claim 6, wherein the determining process for the value T is stopped if a candidate depth value produces a SAD lower than a threshold; the candidate value currently producing the minimum SAD is chosen to be T.
13. The method as claimed in claim 6, wherein the disparity vector converted from a candidate depth value can hold an integer-pixel precision or a sub-pixel precision; if it holds a sub-pixel precision, interpolation filtering is applied to get the disparity aligned region for RegionC.
14. The method as claimed in claim 6, wherein the SAD calculation process is applied on the luma component, the chroma components, or both the luma and chroma components.
15. The method as claimed in claim 6, wherein the SAD on the luma (Y) and two chroma (Cb and Cr) components for the k-th candidate are SAD_Y_k, SAD_Cb_k and SAD_Cr_k respectively, and the final SAD used in the comparison is calculated as SAD_k = w_Y * SAD_Y_k + w_Cb * SAD_Cb_k + w_Cr * SAD_Cr_k, where w_Y, w_Cb and w_Cr can be any integer or real numbers.
16. The method as claimed in claim 1, wherein the calibration process is used or not adaptively; the encoder sends information of whether to use the calibration process to the decoder explicitly, or the decoder derives whether to use the calibration process in the same way as the encoder does.
17. The method as claimed in claim 1 and claim 16, wherein the calibration process is applied on coding tree unit (CTU), coding unit (CU), prediction unit (PU) or transform unit (TU) level.
18. The method as claimed in claim 1, wherein the encoder sends information of whether to use the calibration process to the decoder in video parameter set (VPS), sequence parameter set (SPS), picture parameter set (PPS), slice header (SH), CTU, CU, PU, or TU level.
19. The method as claimed in claim 1, wherein the calibration process is only applied for CU with a predetermined set of sizes.
20. The method as claimed in claim 1, wherein the calibration process is applied for CU with a predetermined set of PU partitions.
21. The method as claimed in claim 1, wherein the calibration process is applied for a PU coded as merge mode.
22. The method as claimed in claim 21, wherein the calibration process is applied for a PU coded as merge mode and a specific merging candidate is selected.
23. The method as claimed in claim 22, wherein the specific merging candidate shares same motion information as the texture candidate, and the prediction block input to the calibration process is obtained by this motion information.
24. The method as claimed in claim 23, wherein the specific merging candidate shares the same motion information as a first candidate in a merging candidate list if the texture candidate is unavailable, and the prediction block input to the calibration process is obtained by this motion information.
PCT/CN2014/080404 2014-06-20 2014-06-20 Texture based depth coding method WO2015192371A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/080404 WO2015192371A1 (en) 2014-06-20 2014-06-20 Texture based depth coding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/080404 WO2015192371A1 (en) 2014-06-20 2014-06-20 Texture based depth coding method

Publications (1)

Publication Number Publication Date
WO2015192371A1

Family

ID=54934724

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/080404 WO2015192371A1 (en) 2014-06-20 2014-06-20 Texture based depth coding method

Country Status (1)

Country Link
WO (1) WO2015192371A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013030456A1 (en) * 2011-08-30 2013-03-07 Nokia Corporation An apparatus, a method and a computer program for video coding and decoding
CN103621093A (en) * 2011-06-15 2014-03-05 联发科技股份有限公司 Method and apparatus of texture image compression in 3D video coding
CN103826135A (en) * 2013-12-24 2014-05-28 浙江大学 Three-dimensional video depth map coding method based on just distinguishable parallax error estimation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14894826

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14894826

Country of ref document: EP

Kind code of ref document: A1