WO2015135137A1 - A method of motion information sharing in multi-view and 3d video coding - Google Patents
- Publication number: WO2015135137A1
- Application: PCT/CN2014/073226
- Authority: WIPO (PCT)
Classifications
- H04N19/597 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
- H04N19/52 — Processing of motion vectors by encoding by predictive encoding
Abstract
An improved MV utilization and storage method is proposed. In the proposed method, the MVs used in depth coding need not be stored, because depth pictures share the MVs of their corresponding texture pictures.
Description
A METHOD OF MOTION INFORMATION SHARING IN
MULTI-VIEW AND 3D VIDEO CODING
TECHNICAL FIELD
[0001] The invention relates generally to Three-Dimensional (3D) video processing. In particular, the present invention relates to motion information sharing between different components.
BACKGROUND
[0002] In the current 3D-HEVC, temporal motion vector prediction (TMVP) and inter-view motion compensation (IVMC) are applied to both the texture and depth components. In TMVP, the motion vector predictor (MVP) is derived from the motion information of a collocated block in a collocated picture. In IVMC, the MVP is derived from the motion information of a corresponding block in a base-view reference picture. As a result, the motion vectors (MVs) of previously coded pictures must be stored and buffered so that they can be used by pictures coded later.
[0003] Motion compression is applied to reduce the storage required by MVs. In 3D-HEVC, a 4:1 compression is applied after a picture is encoded/decoded. After all pictures in an access unit (AU) are encoded/decoded, a further 4:1 compression is applied.
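The 4:1 compression above can be pictured as subsampling the motion field. The following sketch assumes MVs are stored on a fine grid and that compression keeps one representative MV (the top-left one) per 2x2 group of grid positions; the grid granularity and the choice of representative are illustrative assumptions, not the normative 3D-HEVC process.

```python
def compress_mv_field_4to1(mv_field):
    """mv_field: 2-D list of (mvx, mvy) tuples on the fine MV grid.
    Returns a field with half the resolution in each dimension,
    keeping the top-left MV of every 2x2 group (a 4:1 reduction)."""
    rows = len(mv_field)
    cols = len(mv_field[0])
    return [[mv_field[r][c] for c in range(0, cols, 2)]
            for r in range(0, rows, 2)]

# A 4x4 toy motion field: each entry encodes its own grid position.
field = [[(x, y) for x in range(4)] for y in range(4)]
compressed = compress_mv_field_4to1(field)  # 2x2 field remains
```

Applying the same operation again after the whole AU is processed yields the combined 16:1 reduction mentioned in the text.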
[0004] In the current design, MVs are stored separately for each coding layer, regardless of whether the layer is a texture component or a depth component, as demonstrated in Fig. 1. The reference picture can be a temporal collocated picture or an inter-view picture. A texture picture corresponds to a depth picture if they are in the same view and at the same time instant.
[0005] However, this may be inefficient, because depth coding does not depend on TMVP and IVMC as heavily as texture coding does, and the MVs in the depth component are not as accurate as those in the texture component.
SUMMARY
[0006] In light of the previously described problems, a method is proposed to reduce the MV storage by allowing a depth component to share the stored MVs from the corresponding texture component.
[0007] Other aspects and features of the invention will become apparent to those with ordinary skill in the art upon review of the following descriptions of specific embodiments.
BRIEF DESCRIPTION OF DRAWINGS
[0008] The invention can be more fully understood by reading the subsequent detailed description and examples, with reference made to the accompanying drawings, wherein:
[0009] Fig. 1 is a diagram illustrating the MV storage method in the current 3D- HEVC.
[0010] Fig. 2 is a diagram illustrating the proposed MV sharing method.
DETAILED DESCRIPTION
[0011] The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
[0012] It is proposed to reduce the MV storage by allowing a depth component to share the stored MVs of the corresponding texture component. In this way, the MVs of the depth component need not be stored, and thus the MV buffer can be reduced.
[0013] In one embodiment, when a depth picture is encoded/decoded, the MVs stored for the texture picture corresponding to the reference picture (which is a depth picture) are utilized in TMVP or IVMC, as depicted in Fig. 2. The reference picture can be a temporal collocated picture or an inter-view picture. The MVs in the corresponding texture picture are located and accessed in the same manner as they would be from a depth reference picture when used in TMVP or IVMC. For example, in TMVP the MV is fetched from the collocated block. In another example, in IVMC the MV is fetched from the corresponding block located with a disparity vector (DV). A texture picture corresponds to a depth picture if they are in the same view and at the same time instant; for instance, a texture picture and its corresponding depth picture should possess the same picture order count (POC) and view index. Since the MVs in a depth picture are not utilized in TMVP or IVMC by the following depth pictures, those MVs need not be stored.
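The redirection described in paragraph [0013] can be sketched as a single lookup routine: a depth lookup is served from the texture MV buffer with the same POC and view index, with the collocated block used for TMVP and a DV-shifted block used for IVMC. The buffer layout, function name, and grid granularity are illustrative assumptions, not the normative derivation process.

```python
def fetch_shared_mv(mv_buffer, poc, view_idx, blk_x, blk_y, dv=(0, 0)):
    """Fetch an MV for depth coding from the corresponding texture picture.
    mv_buffer: dict mapping (poc, view_idx) -> 2-D grid of (mvx, mvy);
               only texture MVs are stored, so depth lookups are redirected.
    dv: (0, 0) selects the collocated block (TMVP-style access); a nonzero
        disparity vector selects the corresponding block (IVMC-style access).
    Coordinates are in MV-grid units."""
    texture_field = mv_buffer[(poc, view_idx)]  # same POC and view index
    x = blk_x + dv[0]
    y = blk_y + dv[1]
    return texture_field[y][x]

# Toy buffer holding one texture picture's 2x2 motion field.
buf = {(0, 1): [[(4, -8), (12, 0)],
                [(0, 0), (-4, 4)]]}
mv_tmvp = fetch_shared_mv(buf, poc=0, view_idx=1, blk_x=0, blk_y=0)
mv_ivmc = fetch_shared_mv(buf, poc=0, view_idx=1, blk_x=0, blk_y=0, dv=(1, 1))
```

Because the depth picture never writes into `buf`, its MVs are simply never stored, which is the buffer saving the embodiment targets.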
[0014] In another embodiment, the MVs in the corresponding texture picture should be scaled or shifted when they are used in TMVP or IVMC for depth coding. For example, an MV (MVx, MVy) in the corresponding texture picture should be shifted to (MVx >> 2, MVy >> 2) when it is used in TMVP or IVMC for depth coding. In another example, an MV (MVx, MVy) in the corresponding texture picture should be shifted to ((MVx + 2) >> 2, (MVy + 2) >> 2) when it is used in TMVP or IVMC for depth coding.
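The two shifting variants of paragraph [0014], written out as arithmetic ("»" in the original text renders ">>", an arithmetic right shift). The second variant adds 2 before shifting so the result is rounded to the nearest integer rather than truncated toward negative infinity; the interpretation of the texture MVs as quarter-sample units is an assumption.

```python
def shift_mv_truncate(mvx, mvy):
    # First example: plain right shift, i.e. truncation.
    return (mvx >> 2, mvy >> 2)

def shift_mv_round(mvx, mvy):
    # Second example: add half the divisor (2) before shifting to round.
    return ((mvx + 2) >> 2, (mvy + 2) >> 2)

# (7, 9) in quarter-sample units: truncation gives (1, 2),
# rounding gives (2, 2) because 7/4 = 1.75 rounds up.
a = shift_mv_truncate(7, 9)
b = shift_mv_round(7, 9)
```

Note that Python's `>>` floors for negative operands as well, matching the behavior of an arithmetic right shift in hardware.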
[0015] In still another embodiment, the MVs in a depth picture are not stored after the depth picture is coded/decoded.
[0016] In still another embodiment, the MVs in a depth picture are not compressed after the depth picture is coded/decoded. For example, the 4:1 MV compression is not applied after a depth picture is coded/decoded.
[0017] In still another embodiment, the MVs in a depth picture are treated differently after the depth picture is coded/decoded, according to the view to which the depth picture belongs. For example, the 4:1 MV compression is not applied after a depth picture is coded/decoded if the picture is not in the base view (with view index 0). In another example, the 16:1 MV compression is not applied after a depth picture is coded/decoded if the picture is not in the base view (with view index 0). In still another example, the MVs in a depth picture are not stored after the depth picture is coded/decoded if it is not in the base view (with view index 0).
[0018] In still another embodiment, the MVs in a depth picture are treated differently after the depth picture is coded/decoded, according to whether IVMC or TMVP is used for depth coding. For example, the 4:1 MV compression is not applied after a depth picture is coded/decoded if IVMC is not used for depth coding. In another example, the 16:1 MV compression is not applied after a depth picture is coded/decoded if TMVP is not used for depth coding. In still another example, the MVs in a depth picture are not stored after a depth picture is coded/decoded if TMVP is not used for depth coding.
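A sketch combining the conditional embodiments of paragraphs [0017] and [0018]: whether to store and/or compress the MVs of a decoded depth picture is decided per picture from its view index and from whether TMVP/IVMC is enabled for depth coding. The three-way return value, the flag names, and the exact combination of conditions are illustrative assumptions, since the text presents the conditions as alternative embodiments rather than one fixed rule.

```python
def depth_mv_storage_policy(view_idx, tmvp_for_depth, ivmc_for_depth):
    """Return (store_mvs, apply_4to1, apply_16to1) for a decoded depth picture."""
    if not tmvp_for_depth and not ivmc_for_depth:
        # MVs will never be referenced by later depth pictures: drop them.
        return (False, False, False)
    is_base_view = (view_idx == 0)
    # Skip the per-picture 4:1 compression outside the base view, and
    # skip it entirely when IVMC is not used for depth coding.
    apply_4to1 = is_base_view and ivmc_for_depth
    # Skip the per-AU 16:1 compression when TMVP is not used for depth coding.
    apply_16to1 = tmvp_for_depth
    return (True, apply_4to1, apply_16to1)
```

An encoder and decoder would need to evaluate the same policy so that both sides agree on which MVs remain available for prediction.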
[0019] The methods described above can be used in a video encoder as well as in a video decoder. Embodiments of the methods according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip, or program codes integrated into video compression software, to perform the processing described herein. An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and in different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles, and languages of software code, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.
[0020] The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
Claims
1. A method of storing and utilizing MVs in three-dimensional video coding (3DVC), wherein motion vectors (MVs) of a picture are treated differently depending on whether the picture is a depth picture or a texture picture.
2. The method as claimed in claim 1, wherein the MVs of a depth picture are not stored after the depth picture is encoded or decoded.
3. The method as claimed in claim 1, wherein the MVs of a depth picture are not compressed after the depth picture is encoded/decoded.
4. The method as claimed in claim 1, wherein the MVs of a depth picture are treated differently depending on the view or coding layer to which the depth picture belongs.
5. The method as claimed in claim 4, wherein the MVs of a depth picture are not stored after the depth picture is encoded/decoded if the depth picture belongs to a base view.
6. The method as claimed in claim 4, wherein the MVs of a depth picture are not compressed after the depth picture is encoded/decoded if the depth picture belongs to a base view.
7. The method as claimed in claim 1, wherein the MVs in a corresponding texture picture T for a reference depth picture D are utilized in TMVP or IVMC, when a depth picture is encoded/decoded.
8. The method as claimed in claim 7, wherein the reference picture D used in TMVP or IVMC is determined in the same way as in the normal TMVP or IVMC process.
9. The method as claimed in claim 7, wherein the corresponding texture picture T and reference depth picture D are at the same time instant and in the same view.
10. The method as claimed in claim 7, wherein the MVs in the corresponding texture picture are located and accessed in the same manner as from a reference depth picture when they are used in TMVP or IVMC.
11. The method as claimed in claim 7, wherein the MVs in the corresponding texture picture are scaled or shifted when the MVs are used in TMVP or IVMC for depth coding.
12. The method as claimed in claim 11, wherein an MV (MVx, MVy) in the corresponding texture picture is shifted to (MVx >> 2, MVy >> 2) when the MV is used in TMVP or IVMC for depth coding.
13. The method as claimed in claim 11, wherein an MV (MVx, MVy) in the corresponding texture picture is shifted to ((MVx + 2) >> 2, (MVy + 2) >> 2) when the MV is used in TMVP or IVMC for depth coding.
14. The method as claimed in claim 3, wherein 4:1 motion compression is not applied to the MVs of a depth picture after the depth picture is encoded/decoded; wherein 16:1 motion compression is still applied to the MVs of a depth picture after a whole access unit (AU) is encoded/decoded.
15. The method as claimed in claim 3, wherein 4:1 motion compression is still applied to the MVs of a depth picture after the depth picture is encoded/decoded; wherein 16:1 motion compression is not applied to the MVs of a depth picture after a whole access unit (AU) is encoded/decoded.
16. The method as claimed in claim 6, wherein 4:1 motion compression is not applied to the MVs of a depth picture after the depth picture is encoded/decoded if the depth picture belongs to a base view; wherein 16:1 motion compression is still applied to the MVs of a depth picture after a whole access unit (AU) is encoded/decoded if the depth picture belongs to a base view.
17. The method as claimed in claim 6, wherein 4:1 motion compression is not applied to the MVs of a depth picture after the depth picture is encoded/decoded if the depth picture does not belong to a base view; wherein 16:1 motion compression is still applied to the MVs of a depth picture after a whole access unit (AU) is encoded/decoded if the depth picture does not belong to a base view.
18. The method as claimed in claim 6, wherein 4:1 motion compression is applied to the MVs of a depth picture after the depth picture is encoded/decoded if the depth picture belongs to a base view; wherein 16:1 motion compression is not applied to the MVs of a depth picture after a whole access unit (AU) is encoded/decoded if the depth picture belongs to a base view.
19. The method as claimed in claim 6, wherein 4:1 motion compression is applied to the MVs of a depth picture after the depth picture is encoded/decoded if the depth picture does not belong to a base view; wherein 16:1 motion compression is not applied to the MVs of a depth picture after a whole access unit (AU) is encoded/decoded if the depth picture does not belong to a base view.
20. A method of MV sharing in multi-view and 3D video coding, wherein a temporal MVP (TMVP candidate) in depth coding is derived from motion information of a corresponding texture component of a collocated picture.
21. A method of MV sharing in multi-view and 3D video coding, wherein an
IVMC candidate in depth coding is derived from motion information of a corresponding texture component of an inter-view reference picture.
22. The method as claimed in claim 20 or 21, wherein a flag is transmitted at a sequence, view, layer, picture, slice, CTU, CTB, CU, or PU level to indicate whether the motion information is accessed from the corresponding texture component or from the current depth component for TMVP and IVMC derivation.
23. A method of MV sharing in multi-view and 3D video coding, wherein a temporal MVP (TMVP candidate) in texture coding is derived from motion information of a corresponding depth component of a collocated picture and an IVMC candidate in texture coding is derived from the motion information of the corresponding depth component of an inter-view reference picture.
24. The method as claimed in claim 23, wherein a flag is transmitted at a sequence, view, layer, picture, slice, CTU, CTB, CU, or PU level to indicate whether the motion information is accessed from the corresponding depth component or from the current texture component for TMVP and IVMC derivation.
Priority Applications (1)
- PCT/CN2014/073226 (WO2015135137A1) — priority date 2014-03-11, filing date 2014-03-11 — A method of motion information sharing in multi-view and 3d video coding
Publications (1)
- WO2015135137A1, published 2015-09-17
Family
ID=54070777
Citations (3)
- CN102257818A (Nokia Corporation; priority 2008-10-17, published 2011-11-23) — Sharing of motion vector in 3D video coding
- US20130038686A1 (Qualcomm Incorporated; priority 2011-08-11, published 2013-02-14) — Three-dimensional video with asymmetric spatial resolution
- CN103621093A (MediaTek Inc.; priority 2011-06-15, published 2014-03-05) — Method and apparatus of texture image compression in 3D video coding
Legal Events
- 121 — EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 14885544; Country of ref document: EP; Kind code of ref document: A1)
- NENP — Non-entry into the national phase (Ref country code: DE)
- 122 — EP: PCT application non-entry in European phase (Ref document number: 14885544; Country of ref document: EP; Kind code of ref document: A1)