WO2015006924A1 - An additional texture merging candidate
An additional texture merging candidate
- Publication number
- WO2015006924A1 (PCT/CN2013/079472)
- Authority
- WO
- WIPO (PCT)
Classifications
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, including:
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
- H04N19/56—Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
- H04N19/597—Predictive coding specially adapted for multi-view video sequence encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
Abstract
This contribution presents an additional texture merging candidate for 3D-HEVC. The current texture merging candidate inherits motion parameters from the center position of the collocated texture PU. Beyond that candidate, a second texture merging candidate is proposed, which inherits motion parameters from the right bottom position of the collocated texture PU.
Description
AN ADDITIONAL TEXTURE MERGING CANDIDATE
FIELD OF INVENTION
The invention relates generally to Three-Dimensional (3D) video processing. In particular, the present invention relates to a depth coding method.
BACKGROUND OF THE INVENTION
In the current 3D-HEVC, a texture merging candidate is adopted into the merging candidate list at position 0 in depth map coding. The current texture merging candidate inherits motion parameters from the center position of the collocated texture Prediction Unit (PU) as depicted in Fig. 1.
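As an informal illustration (not part of the patent text), the center position used by this first texture merging candidate can be sketched as follows, assuming a PU given by its top-left luma coordinates and size; the exact rounding convention is an assumption here:

```python
def texture_center_position(pu_x, pu_y, pu_w, pu_h):
    """Center sample of the collocated texture PU, from which the
    first texture merging candidate inherits its motion parameters.
    The floor-division rounding is an assumption for illustration."""
    return (pu_x + pu_w // 2, pu_y + pu_h // 2)

# Example: a 16x16 depth PU at (32, 48) looks up the collocated
# texture motion field at the center sample.
print(texture_center_position(32, 48, 16, 16))  # -> (40, 56)
```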
SUMMARY OF THE INVENTION
In light of the previously described problems, an additional texture merging candidate is proposed to improve the efficiency in depth map coding.
Other aspects and features of the invention will become apparent to those with ordinary skill in the art upon review of the following descriptions of specific embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
Fig. 1 is a diagram illustrating the center position in the collocated texture PU;
Fig. 2 is a diagram illustrating the right bottom position in the collocated texture PU;
Fig. 3 is a diagram illustrating exemplary positions in the collocated texture PU.
DETAILED DESCRIPTION
The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
An additional or second texture merging candidate, which inherits motion parameters from the right bottom position of the collocated texture Prediction Unit (PU) as depicted in Fig. 2, is proposed in depth coding and added into the merging candidate list. The proposed additional or second texture merging candidate can be added at any position in the merging candidate list. For example, it can be added before the temporal merging candidate, which is located at position 5 in the merging candidate list.
A pruning process can be applied to remove the proposed additional or second texture merging candidate if it is redundant. In the pruning process, the second texture merging candidate is compared to the original or first texture merging candidate. The proposed additional texture merging candidate is inserted into the merging candidate list at position 5 only if it is not identical to the first merging candidate at position 0. It should be noted that the additional texture merging candidate can be set as invalid if the right bottom position is outside the current largest coding unit (LCU) or LCU row.
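A minimal sketch of this insertion and pruning logic follows; it is illustrative only, and the function names, the fixed LCU size, the candidate-list layout and the right-bottom sample convention are assumptions, not the normative 3D-HEVC process:

```python
LCU_SIZE = 64  # assumed LCU (CTU) size for this sketch

def right_bottom_position(pu_x, pu_y, pu_w, pu_h):
    # Sample just below and to the right of the PU, in the style of
    # HEVC's bottom-right TMVP position (an assumption for this sketch).
    return (pu_x + pu_w, pu_y + pu_h)

def insert_second_texture_candidate(merge_list, second_cand, rb_pos, lcu_y):
    """Insert the second texture candidate before the temporal candidate
    at position 5, after pruning it against the first texture candidate
    at position 0 and applying the LCU-row restriction."""
    _, y = rb_pos
    if y >= lcu_y + LCU_SIZE:
        return merge_list  # right-bottom sample outside the LCU row: invalid
    if merge_list and second_cand == merge_list[0]:
        return merge_list  # identical to the first texture candidate: pruned
    pos = min(5, len(merge_list))
    return merge_list[:pos] + [second_cand] + merge_list[pos:]

# First texture candidate T0 at position 0, spatial candidates,
# then the temporal candidate (TMVP) originally at position 5.
candidates = ["T0", "A1", "B1", "B0", "A0", "TMVP"]
print(insert_second_texture_candidate(candidates, "T1", (16, 16), 0))
# -> ['T0', 'A1', 'B1', 'B0', 'A0', 'T1', 'TMVP']
```

With a redundant candidate (equal to the one at position 0) or a right-bottom sample below the current LCU row, the list is returned unchanged.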
The proposed method described above can be used in a video encoder as well as in a video decoder. Embodiments of methods according to the present invention as described above may be implemented in various hardware, software code, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software or firmware code may be developed in different programming languages and in different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software code, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
Claims
1. A method of coding a depth component, wherein additional merging candidates derived from a collocated texture component are included in a merging candidate list in addition to an existing texture merging candidate in HTM7.0.
2. The method as claimed in claim 1, wherein an additional texture merging candidate, which inherits motion parameters from position X of a collocated texture Prediction Unit (PU), is inserted into the merging candidate list, wherein position X is not a center position.
3. The method as claimed in claim 2, wherein position X is any position other than the center position, including but not limited to: an outside left position, inside left position, outside above position, inside above position, inside top left position, outside top left position, inside right bottom position, outside right bottom position, inside above right position, outside above right position, inside left bottom position and outside left bottom position.
4. The method as claimed in claim 2, wherein one or more texture merging candidates are inserted in the merging candidate list.
5. The method as claimed in claim 2, wherein the additional texture merging candidate is not inserted into the merging candidate list if position X and the collocated texture PU do not belong to the same largest coding unit (LCU).
6. The method as claimed in claim 2, wherein the additional texture merging candidate is not inserted into the merging candidate list if it is identical to a merging candidate already in the merging candidate list.
7. The method as claimed in claim 2, wherein the additional texture merging candidate is inserted into any position of the merging candidate list.
8. The method as claimed in claim 2, wherein the additional texture merging candidate is inserted after the existing texture merging candidate in HTM7.0.
9. The method as claimed in claim 2, wherein the additional texture merging candidate is added before a temporal merging candidate in the merging candidate list.
10. The method as claimed in claim 2, wherein the additional texture merging candidate is added after all spatial merging candidates in the merging candidate list.
11. The method as claimed in claim 2, wherein the additional texture merging candidate is not inserted into the merging candidate list if it is identical to a first merging candidate in the merging candidate list.
12. The method as claimed in claim 2, wherein the additional texture merging candidate is not inserted into the merging candidate list if position X is not inter-coded.
13. The method as claimed in claim 2, wherein the additional texture merging candidate is added into the merging candidate list only when a first or original merging candidate is unavailable, and the additional texture merging candidate is used to replace the first merging candidate when the first merging candidate is unavailable.
14. The method as claimed in claim 2, wherein two texture merging candidates are included in the merging candidate list, and a first texture merging candidate inherits motion parameters from a center position of the collocated texture PU and a second texture merging candidate inherits motion parameters from position X of the collocated texture PU.
15. The method as claimed in claim 14, wherein a pruning process is applied to compare the second texture merging candidate with the first texture merging candidate in order to remove redundancy.
16. The method as claimed in claim 2, wherein the motion parameters comprise motion or disparity vectors and a prediction direction, where the prediction direction is uni-prediction or bi-prediction.
17. The method as claimed in claim 16, wherein the motion parameters further comprise Prediction Unit (PU) or sub-PU partition.
18. The method as claimed in claim 2, wherein a flag is used to control whether the additional texture merging candidate is used.
19. The method as claimed in claim 18, wherein the flag is explicitly signaled at the sequence, view, picture or slice level, e.g. in a sequence parameter set (SPS), video parameter set (VPS), adaptive parameter set (APS) or slice header.
20. The method as claimed in claim 18, wherein the flag is implicitly derived at decoder side.
21. The method as claimed in claim 2, wherein position X is the same as the position used to derive an additional or second inter-view merging candidate in texture coding.
22. The method as claimed in claim 2, wherein a position Y is used to derive the additional texture merging candidate if position X is unavailable.
23. The method as claimed in claim 2, wherein a position Y is used to derive the additional texture merging candidate if the candidate derived from position X is identical to another merging candidate in the candidate list.
24. The method as claimed in claim 2, wherein a position Y is used to derive the additional texture merging candidate if no motion parameters can be obtained from position X.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2013/079472 WO2015006924A1 (en) | 2013-07-16 | 2013-07-16 | An additional texture merging candidate |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015006924A1 true WO2015006924A1 (en) | 2015-01-22 |
Family
ID=52345690
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/079472 WO2015006924A1 (en) | 2013-07-16 | 2013-07-16 | An additional texture merging candidate |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2015006924A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020043206A1 (en) * | 2018-08-30 | 2020-03-05 | 华为技术有限公司 | Method and apparatus for updating historical candidate list |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102257818A (en) * | 2008-10-17 | 2011-11-23 | 诺基亚公司 | Sharing of motion vector in 3d video coding |
WO2012171477A1 (en) * | 2011-06-15 | 2012-12-20 | Mediatek Inc. | Method and apparatus of texture image compression in 3d video coding |
WO2013030456A1 (en) * | 2011-08-30 | 2013-03-07 | Nokia Corporation | An apparatus, a method and a computer program for video coding and decoding |
WO2013053309A1 (en) * | 2011-10-11 | 2013-04-18 | Mediatek Inc. | Method and apparatus of motion and disparity vector derivation for 3d video coding and hevc |
US20130176390A1 (en) * | 2012-01-06 | 2013-07-11 | Qualcomm Incorporated | Multi-hypothesis disparity vector construction in 3d video coding with depth |
- 2013-07-16: WO PCT/CN2013/079472 patent/WO2015006924A1/en active Application Filing
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13889360; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 13889360; Country of ref document: EP; Kind code of ref document: A1 |