WO2015000108A1 - An improved texture merging candidate in 3dvc - Google Patents
An improved texture merging candidate in 3dvc
- Publication number
- WO2015000108A1 (PCT/CN2013/078579)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- texture
- list
- picture
- idx
- merging candidate
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/463—Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
This contribution presents an improved method for the texture merging candidate in 3D-HEVC. In the current 3D-HEVC, the reference indices and motion vectors of a depth block in texture merging mode are inherited directly from the collocated block in the texture picture. Although this design works well under the common test conditions, it may not deal properly with the situation in which the reference lists of the texture component and the depth component are configured in different ways. To tackle this problem, it is proposed to inherit the POCs and ViewIds of the reference pictures instead of the reference indices. In another proposed method, it is proposed to restrict the reference pictures of the depth map and the collocated texture to have the same POC and ViewId. This bug fix does not affect the coding performance under the common test conditions.
Description
AN IMPROVED TEXTURE MERGING CANDIDATE IN 3DVC
TECHNICAL FIELD
[0001] The invention relates generally to Three-Dimensional (3D) video processing. In particular, the present invention relates to improvements on the texture merging candidate.
BACKGROUND
[0002] In the current 3D-HEVC, the texture merging candidate serves for depth merge coding. In texture merging mode, motion parameters such as motion vectors (MV) and reference indices are inherited directly from the collocated block in the texture picture. Fig. 1 shows the derivation of the corresponding texture block. In the working draft, it is described as:
mvLXT[ 0 ] = ( textMvLX[ xRef ][ yRef ][ 0 ] + 2 ) >> 2    (H-143)
mvLXT[ 1 ] = ( textMvLX[ xRef ][ yRef ][ 1 ] + 2 ) >> 2    (H-144)
refIdxLX = textRefIdxLX[ xRef ][ yRef ]    (H-145)
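As a minimal illustration of equations (H-143) to (H-145), the following C sketch shows how the quarter-sample texture motion vector is rounded to the full-sample precision used for the depth block while the reference index is copied unchanged. The MotionInfo struct and the sample values are hypothetical and serve only as an example.

```c
#include <stdio.h>

/* Hypothetical container for a motion vector and its reference index;
 * the names are illustrative, not taken from the working draft. */
typedef struct {
    int mv[2];     /* motion vector (x, y) in quarter-sample units */
    int refIdx;    /* index into reference picture list X          */
} MotionInfo;

/* Direct inheritance as in (H-143)-(H-145): the texture MV is rounded
 * from quarter-sample to full-sample precision for the depth block,
 * and the reference index is copied as-is. */
static MotionInfo inherit_from_texture(const MotionInfo *text)
{
    MotionInfo depth;
    depth.mv[0]  = (text->mv[0] + 2) >> 2;   /* mvLXT[0], eq. (H-143) */
    depth.mv[1]  = (text->mv[1] + 2) >> 2;   /* mvLXT[1], eq. (H-144) */
    depth.refIdx = text->refIdx;             /* refIdxLX, eq. (H-145) */
    return depth;
}

int main(void)
{
    MotionInfo texture = { { 13, 6 }, 2 };   /* collocated texture block */
    MotionInfo depth = inherit_from_texture(&texture);
    printf("depth MV = (%d, %d), refIdx = %d\n",
           depth.mv[0], depth.mv[1], depth.refIdx);
    return 0;
}
```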
[0003] This design works well in the common test condition (CTC), since the reference lists of the texture component and the depth component are configured in the same way.
[0004] However, it is possible for an encoder to configure the reference lists of the texture component and the depth component in different ways, because the two components are coded as separate sequences with their own syntax. If a reference index refers to reference pictures with different POC or ViewId in the texture component and the depth component, the inherited MVs will be inaccurate. More seriously, if a reference index that is valid in the texture component is invalid in the depth component, a crash will occur.
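The following sketch, with invented list contents, illustrates the mismatch: the same reference index can identify different pictures in the two components, and an index valid for the texture list can be out of range for the depth list.

```c
#include <stdio.h>

/* Illustrative reference-picture descriptor; the (POC, ViewId) pair is what
 * actually identifies a picture, the index is only a position in a list. */
typedef struct { int poc; int viewId; } RefPic;

int main(void)
{
    /* Hypothetical list-0 configurations for the texture and depth
     * components of the same access unit (values invented for the example). */
    RefPic textureList0[] = { {8, 0}, {4, 0}, {8, 1} };   /* 3 entries */
    RefPic depthList0[]   = { {8, 0}, {8, 1} };           /* 2 entries */

    int inheritedIdx = 1;  /* refIdxLX copied from the collocated texture block */

    /* Same index, different picture: POC 4 / view 0 in texture versus
     * POC 8 / view 1 in depth, so an inherited MV would point at the
     * wrong reference picture. */
    printf("texture idx %d -> POC %d, view %d\n", inheritedIdx,
           textureList0[inheritedIdx].poc, textureList0[inheritedIdx].viewId);
    printf("depth   idx %d -> POC %d, view %d\n", inheritedIdx,
           depthList0[inheritedIdx].poc, depthList0[inheritedIdx].viewId);

    /* Worse, an index that is valid for the texture list (e.g. 2) would be
     * out of range for the shorter depth list and could crash a decoder. */
    return 0;
}
```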
SUMMARY
[0005] In light of the previously described problems, an improved method for the texture merging candidate in 3DVC is proposed. It is proposed to inherit the POCs and ViewIds of the reference pictures instead of the reference indices.
[0006] Another solution is to restrict that the reference pictures for the depth map and the texture have the same POC and ViewId.
[0007] Other aspects and features of the invention will become apparent to those with ordinary skill in the art upon review of the following descriptions of specific embodiments.
BRIEF DESCRIPTION OF DRAWINGS
The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
[0008] Fig. 1 is a diagram illustrating the derivation of corresponding texture block.
[0009] Fig. 2 is a diagram illustrating the pseudo code of the proposed method.
DETAILED DESCRIPTION
[0010] The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
[0011] In texture merging mode, the inherited reference picture should be the one with the same POC and ViewId as the reference picture of the collocated block in the texture picture. If no reference picture in the reference lists can satisfy this condition, the texture merging candidate is treated as invalid for this block. Fig. 2 demonstrates example pseudo code for the proposed method; there are other ways to realize the proposed idea.
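A minimal sketch of this derivation is given below, assuming a hypothetical RefPic descriptor for each reference picture: the depth reference list is searched for a picture whose POC and ViewId match those of the picture used by the collocated texture block, and the candidate does not use list X when no match exists.

```c
#include <stdio.h>

typedef struct { int poc; int viewId; } RefPic;

/* Instead of copying the reference index, look up the depth reference list
 * for a picture with the same POC and ViewId as the picture referenced by
 * the collocated texture block. Returns the matching index, or -1 when
 * list X cannot be used by the texture merging candidate. */
static int map_texture_ref_to_depth(RefPic textureRef,
                                    const RefPic *depthListX, int numRefLX)
{
    for (int idx = 0; idx < numRefLX; idx++) {
        if (depthListX[idx].poc == textureRef.poc &&
            depthListX[idx].viewId == textureRef.viewId)
            return idx;
    }
    return -1;  /* no match: list X is not used for this candidate */
}

int main(void)
{
    RefPic depthList0[] = { {8, 0}, {8, 1} };
    RefPic textureRef   = { 8, 1 };   /* picture used by the texture block */

    int idx = map_texture_ref_to_depth(textureRef, depthList0, 2);
    if (idx < 0)
        printf("texture merging candidate does not use list 0\n");
    else
        printf("inherit MV with depth refIdx = %d\n", idx);
    return 0;
}
```

In a full codec the lookup would be performed for both lists, so that a P-picture candidate becomes invalid when list 0 has no match and a B-picture candidate becomes invalid when neither list has a match, consistent with the claims below.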
[0012] In another solution, it is proposed to restrict that the reference pictures for the depth map and the texture have the same POC and ViewId, as follows:
[0013] For the current slice of the depth map, the reference picture in List X with reference index equal to idx is denoted as DMRefPOCLX(idx).
[0014] For the collocated slice of the texture, the reference picture in List X with reference index equal to idx is denoted as TxtRefPOCLX(idx).
[0015] It is proposed that it is a requirement of bitstream conformance that the following conditions apply:
[0016] For each X equal to 0 to 1, the number of reference pictures in List X, denoted as numRefLX, should be the same for the depth map slice and the corresponding texture slice.
[0017] For each X equal to 0 to 1, and each idx equal to 0 to numRefLX, the POC of DMRefPOCLX(idx) should be the same as the POC of TxtRefPOCLX(idx), and the ViewId of DMRefPOCLX(idx) should be the same as the ViewId of TxtRefPOCLX(idx).
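The constraint can be expressed as a simple per-list comparison. The sketch below, again using a hypothetical RefPic descriptor, checks one list X; a conformance checker would run it for X equal to 0 and 1.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { int poc; int viewId; } RefPic;

/* Conformance check for one list X: the depth-map slice and the collocated
 * texture slice must have the same number of reference pictures, and the
 * entries must match entry-by-entry in both POC and ViewId. */
static bool lists_are_aligned(const RefPic *dmRefLX, int dmNumRefLX,
                              const RefPic *txtRefLX, int txtNumRefLX)
{
    if (dmNumRefLX != txtNumRefLX)
        return false;
    for (int idx = 0; idx < dmNumRefLX; idx++) {
        if (dmRefLX[idx].poc != txtRefLX[idx].poc ||
            dmRefLX[idx].viewId != txtRefLX[idx].viewId)
            return false;
    }
    return true;
}

int main(void)
{
    /* Invented example lists: they differ in the second entry. */
    RefPic dmList0[]  = { {8, 0}, {8, 1} };
    RefPic txtList0[] = { {8, 0}, {4, 0} };

    printf("list 0 conformant: %s\n",
           lists_are_aligned(dmList0, 2, txtList0, 2) ? "yes" : "no");
    return 0;
}
```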
[0018] The proposed methods described above can be used in a video encoder as well as in a video decoder. Embodiments of methods according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program codes integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles, and languages of software code, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.
[0019] The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims
should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
Claims
1. A method of generating a texture merging candidate, wherein motion parameters of a current block in a depth component are derived from its collocated block in a texture component.
2. The method as claimed in claim 1, wherein RefIdxLX of the current block is different from that of the collocated block in the texture component with X equal to 0 or 1, and RefIdxLX represents a reference index for a reference list X of the texture merging candidate.
3. The method as claimed in claim 1, wherein RefPOCLX(RefIdxLX) is the same as the POC of one reference picture of the collocated block in the texture component; and RefViewIdLX(RefIdxLX) is the same as the ViewId of one reference picture of the collocated block in the texture component; Picture order count (POC) represents a value increasing in output order; RefPOCLX(idx) represents the POC of the reference picture with reference index idx in reference list X; ViewId represents an identifier of a view; RefViewIdLX(idx) represents the ViewId of the reference picture with reference index idx in reference list X.
4. The method as claimed in claim 3, wherein reference list X is not used in the texture merging candidate if no reference picture in the reference list X satisfies the condition.
5. The method as claimed in claim 4, wherein the texture merging candidate is invalid if the current picture is a P-picture and list 0 is not used.
6. The method as claimed in claim 4, wherein the texture merging candidate is invalid if the current picture is a B-picture and neither list 0 nor list 1 is used.
7. The method as claimed in claim 3, wherein the motion vector from reference list X in the texture merging candidate is scaled if no reference picture in the reference list X satisfies the condition.
8. A method to construct a reference picture list, wherein it is a requirement of bitstream conformance that the following conditions apply: for each X equal to 0 to 1, the number of reference pictures in List X, denoted as numRefLX, is the same for a depth map slice and a collocated texture slice;
for each X equal to 0 to 1, and each idx equal to 0 to numRefLX, a Picture Order
Count (POC) of DMRefPOCLX(idx) is the same as the POC of TxtRefPOCLX(idx), and the View ID (ViewId) of DMRefPOCLX(idx) is the same as the ViewId of TxtRefPOCLX(idx).
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2013/078579 WO2015000108A1 (en) | 2013-07-01 | 2013-07-01 | An improved texture merging candidate in 3dvc |
PCT/CN2014/077859 WO2015000339A1 (en) | 2013-07-01 | 2014-05-20 | Method of texture merging candidate derivation in 3d video coding |
AU2014286821A AU2014286821B2 (en) | 2013-07-01 | 2014-05-20 | Method of texture merging candidate derivation in 3D video coding |
US14/779,431 US10306225B2 (en) | 2013-07-01 | 2014-05-20 | Method of texture merging candidate derivation in 3D video coding |
CN201480025206.8A CN105230014B (en) | 2013-07-01 | 2014-05-20 | Method and its device for the depth map encoding of 3 d video encoding system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2013/078579 WO2015000108A1 (en) | 2013-07-01 | 2013-07-01 | An improved texture merging candidate in 3dvc |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015000108A1 true WO2015000108A1 (en) | 2015-01-08 |
Family
ID=52142983
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/078579 WO2015000108A1 (en) | 2013-07-01 | 2013-07-01 | An improved texture merging candidate in 3dvc |
PCT/CN2014/077859 WO2015000339A1 (en) | 2013-07-01 | 2014-05-20 | Method of texture merging candidate derivation in 3d video coding |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/077859 WO2015000339A1 (en) | 2013-07-01 | 2014-05-20 | Method of texture merging candidate derivation in 3d video coding |
Country Status (3)
Country | Link |
---|---|
US (1) | US10306225B2 (en) |
AU (1) | AU2014286821B2 (en) |
WO (2) | WO2015000108A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106340062A (en) * | 2015-07-09 | 2017-01-18 | 长沙维纳斯克信息技术有限公司 | Three-dimensional texture model file generating method and device |
CN107001250A (en) * | 2015-09-23 | 2017-08-01 | 江苏恒瑞医药股份有限公司 | It is a kind of to prepare the method that Ao Dangka replaces intermediate |
CN110059007A (en) * | 2019-04-03 | 2019-07-26 | 北京奇安信科技有限公司 | System vulnerability scan method, device, computer equipment and storage medium |
CN112887633A (en) * | 2021-01-14 | 2021-06-01 | 四川航天神坤科技有限公司 | Video splicing and three-dimensional monitoring display method and system based on camera |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3002716A1 (en) * | 2013-02-26 | 2014-08-29 | France Telecom | DERIVATION OF MOTION VECTOR OF DISPARITY, 3D VIDEO CODING AND DECODING USING SUCH DERIVATION |
EP3797516A1 (en) | 2018-06-29 | 2021-03-31 | Beijing Bytedance Network Technology Co. Ltd. | Interaction between lut and amvp |
WO2020003278A1 (en) | 2018-06-29 | 2020-01-02 | Beijing Bytedance Network Technology Co., Ltd. | Update of look up table: fifo, constrained fifo |
CN110662057B (en) | 2018-06-29 | 2022-06-21 | 北京字节跳动网络技术有限公司 | Video processing method, device and equipment and method for storing bit stream |
KR20210024502A (en) | 2018-06-29 | 2021-03-05 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | Partial/full pruning when adding HMVP candidates to merge/AMVP |
CN110662056B (en) | 2018-06-29 | 2022-06-07 | 北京字节跳动网络技术有限公司 | Which lookup table needs to be updated or not |
EP3791588A1 (en) | 2018-06-29 | 2021-03-17 | Beijing Bytedance Network Technology Co. Ltd. | Checking order of motion candidates in lut |
TWI723444B (en) | 2018-06-29 | 2021-04-01 | 大陸商北京字節跳動網絡技術有限公司 | Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks |
JP7460617B2 (en) | 2018-06-29 | 2024-04-02 | 北京字節跳動網絡技術有限公司 | LUT update conditions |
EP4307679A3 (en) | 2018-07-02 | 2024-06-19 | Beijing Bytedance Network Technology Co., Ltd. | Luts with intra prediction modes and intra mode prediction from non-adjacent blocks |
TW202025760A (en) | 2018-09-12 | 2020-07-01 | 大陸商北京字節跳動網絡技術有限公司 | How many hmvp candidates to be checked |
CN113273186A (en) | 2019-01-10 | 2021-08-17 | 北京字节跳动网络技术有限公司 | Invocation of LUT update |
CN113383554B (en) | 2019-01-13 | 2022-12-16 | 北京字节跳动网络技术有限公司 | Interaction between LUTs and shared Merge lists |
CN113302937B (en) | 2019-01-16 | 2024-08-02 | 北京字节跳动网络技术有限公司 | Motion candidate derivation |
US11055901B2 (en) | 2019-03-07 | 2021-07-06 | Alibaba Group Holding Limited | Method, apparatus, medium, and server for generating multi-angle free-perspective video data |
CN113615193B (en) | 2019-03-22 | 2024-06-25 | 北京字节跳动网络技术有限公司 | Interactions between Merge list build and other tools |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110069760A1 (en) * | 2009-09-22 | 2011-03-24 | Samsung Electronics Co., Ltd. | Apparatus and method for motion estimation of three dimension video |
EP2348732A2 (en) * | 2008-11-10 | 2011-07-27 | LG Electronics Inc. | Method and device for processing a video signal using inter-view prediction |
CN102257818A (en) * | 2008-10-17 | 2011-11-23 | 诺基亚公司 | Sharing of motion vector in 3d video coding |
WO2012059577A1 (en) * | 2010-11-04 | 2012-05-10 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Picture coding supporting block merging and skip mode |
WO2012171477A1 (en) * | 2011-06-15 | 2012-12-20 | Mediatek Inc. | Method and apparatus of texture image compression in 3d video coding |
WO2013030456A1 (en) * | 2011-08-30 | 2013-03-07 | Nokia Corporation | An apparatus, a method and a computer program for video coding and decoding |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
MX357910B (en) * | 2006-07-06 | 2018-07-30 | Thomson Licensing | Method and apparatus for decoupling frame number and/or picture order count (poc) for multi-view video encoding and decoding. |
CN101835056B (en) * | 2010-04-29 | 2011-12-07 | 西安电子科技大学 | Allocation method for optimal code rates of texture video and depth map based on models |
CN102055982B (en) | 2011-01-13 | 2012-06-27 | 浙江大学 | Coding and decoding methods and devices for three-dimensional video |
US20120236934A1 (en) * | 2011-03-18 | 2012-09-20 | Qualcomm Incorporated | Signaling of multiview video plus depth content with a block-level 4-component structure |
EP2717572B1 (en) * | 2011-06-24 | 2018-08-08 | LG Electronics Inc. | Encoding/decoding method and apparatus using a skip mode |
WO2013055148A2 (en) * | 2011-10-12 | 2013-04-18 | 엘지전자 주식회사 | Image encoding method and image decoding method |
US9467694B2 (en) * | 2011-11-21 | 2016-10-11 | Google Technology Holdings LLC | Implicit determination and combined implicit and explicit determination of collocated picture for temporal prediction |
WO2014005280A1 (en) * | 2012-07-03 | 2014-01-09 | Mediatek Singapore Pte. Ltd. | Method and apparatus to improve and simplify inter-view motion vector prediction and disparity vector prediction |
CA2887106A1 (en) * | 2012-10-03 | 2014-04-10 | Mediatek Inc. | Method and apparatus for inter-component motion prediction in three-dimensional video coding |
US20140218473A1 (en) * | 2013-01-07 | 2014-08-07 | Nokia Corporation | Method and apparatus for video coding and decoding |
WO2014166109A1 (en) * | 2013-04-12 | 2014-10-16 | Mediatek Singapore Pte. Ltd. | Methods for disparity vector derivation |
WO2015006951A1 (en) * | 2013-07-18 | 2015-01-22 | Mediatek Singapore Pte. Ltd. | Methods for fast encoder decision |
EP3091741B1 (en) * | 2014-01-02 | 2021-10-27 | Intellectual Discovery Co., Ltd. | Method for decoding multi-view video |
-
2013
- 2013-07-01 WO PCT/CN2013/078579 patent/WO2015000108A1/en active Application Filing
-
2014
- 2014-05-20 AU AU2014286821A patent/AU2014286821B2/en active Active
- 2014-05-20 WO PCT/CN2014/077859 patent/WO2015000339A1/en active Application Filing
- 2014-05-20 US US14/779,431 patent/US10306225B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102257818A (en) * | 2008-10-17 | 2011-11-23 | 诺基亚公司 | Sharing of motion vector in 3d video coding |
EP2348732A2 (en) * | 2008-11-10 | 2011-07-27 | LG Electronics Inc. | Method and device for processing a video signal using inter-view prediction |
US20110069760A1 (en) * | 2009-09-22 | 2011-03-24 | Samsung Electronics Co., Ltd. | Apparatus and method for motion estimation of three dimension video |
WO2012059577A1 (en) * | 2010-11-04 | 2012-05-10 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Picture coding supporting block merging and skip mode |
WO2012171477A1 (en) * | 2011-06-15 | 2012-12-20 | Mediatek Inc. | Method and apparatus of texture image compression in 3d video coding |
WO2013030456A1 (en) * | 2011-08-30 | 2013-03-07 | Nokia Corporation | An apparatus, a method and a computer program for video coding and decoding |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106340062A (en) * | 2015-07-09 | 2017-01-18 | 长沙维纳斯克信息技术有限公司 | Three-dimensional texture model file generating method and device |
CN106340062B (en) * | 2015-07-09 | 2019-05-17 | 长沙维纳斯克信息技术有限公司 | A kind of generation method and device of three-D grain model file |
CN107001250A (en) * | 2015-09-23 | 2017-08-01 | 江苏恒瑞医药股份有限公司 | It is a kind of to prepare the method that Ao Dangka replaces intermediate |
CN110059007A (en) * | 2019-04-03 | 2019-07-26 | 北京奇安信科技有限公司 | System vulnerability scan method, device, computer equipment and storage medium |
CN112887633A (en) * | 2021-01-14 | 2021-06-01 | 四川航天神坤科技有限公司 | Video splicing and three-dimensional monitoring display method and system based on camera |
Also Published As
Publication number | Publication date |
---|---|
US10306225B2 (en) | 2019-05-28 |
US20160050435A1 (en) | 2016-02-18 |
AU2014286821A1 (en) | 2015-10-01 |
WO2015000339A1 (en) | 2015-01-08 |
AU2014286821B2 (en) | 2016-05-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015000108A1 (en) | An improved texture merging candidate in 3dvc | |
WO2016165069A1 (en) | Advanced temporal motion vector prediction in video coding | |
WO2016008161A1 (en) | Temporal derived bi-directional motion vector predictor | |
WO2015100710A1 (en) | Existence of inter-view reference picture and availability of 3dvc coding tools | |
WO2016054979A1 (en) | Method of 3d or multi-view video coding including view synthesis prediction | |
JP2016506185A5 (en) | ||
WO2015006920A1 (en) | An adaptive disparity vector derivation method | |
WO2014166068A1 (en) | Refinement of view synthesis prediction for 3-d video coding | |
WO2014166109A1 (en) | Methods for disparity vector derivation | |
WO2015192372A1 (en) | A simplified method for illumination compensation in multi-view and 3d video coding | |
WO2015006922A1 (en) | Methods for residual prediction | |
WO2015180166A1 (en) | Improved intra prediction mode coding | |
WO2014029086A1 (en) | Methods to improve motion vector inheritance and inter-view motion prediction for depth map | |
WO2015131404A1 (en) | Methods for depth map coding | |
WO2013159326A1 (en) | Inter-view motion prediction in 3d video coding | |
WO2016123749A1 (en) | Deblocking filtering with adaptive motion vector resolution | |
WO2015196364A1 (en) | Methods for inter-view advanced residual prediction | |
WO2014106346A1 (en) | Method of signalling additional collocated picture for 3dvc | |
WO2015006924A1 (en) | An additional texture merging candidate | |
WO2014106327A1 (en) | Method and apparatus for inter-view residual prediction in multiview video coding | |
WO2015006900A1 (en) | A disparity derived depth coding method | |
WO2015143603A1 (en) | An improved method for temporal motion vector prediction in video coding | |
WO2013106988A1 (en) | Methods and apparatuses of residue transform depth representation | |
WO2014166096A1 (en) | Reference view derivation for inter-view motion prediction and inter-view residual prediction | |
WO2015006899A1 (en) | A simplified dv derivation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13888784 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13888784 Country of ref document: EP Kind code of ref document: A1 |