WO2014114168A1 - Depth modeling mode coding and decoding method and video codec - Google Patents


Info

Publication number
WO2014114168A1
Authority
WO
WIPO (PCT)
Prior art keywords
wedgelet
patterns
depth
prediction unit
pattern
Prior art date
Application number
PCT/CN2013/090691
Other languages
French (fr)
Inventor
Hongbin Liu
Jie Jia
Original Assignee
Lg Electronics (China) R&D Center Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Lg Electronics (China) R&D Center Co., Ltd.
Publication of WO2014114168A1


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process


Abstract

The invention discloses a depth modeling mode coding and decoding method as well as a video codec, for the case where a CTLB is inter mode coded, or is intra mode coded but the intra coding mode is 0 or 1. The coding method comprises: constructing a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit; selecting an optimal Wedgelet pattern from the coarse set of Wedgelet patterns; and coding index information of the optimal Wedgelet pattern and performing depth modeling mode coding according to the optimal Wedgelet pattern. The embodiments of the invention greatly decrease decoding complexity, shorten decoding time by 2% and have little influence on compression efficiency.

Description

DEPTH MODELING MODE CODING AND DECODING METHOD AND VIDEO CODEC
The present application claims the right of priority of Chinese Patent Application No. 201310032018.5, entitled "DEPTH MODELING MODE CODING AND DECODING METHOD AND VIDEO CODEC", filed on Jan. 28, 2013 before the State Intellectual Property Office of the P.R.C., the entire contents of which are herein incorporated by reference.
FIELD OF THE INVENTION
The invention relates to the field of image and video coding and decoding, particularly to a depth modeling mode coding and decoding method, and more particularly to a depth modeling mode coding and decoding method as well as a video codec for the case where a CTLB is inter mode coded, or is intra mode coded but the intra coding mode is 0 or 1.

BACKGROUND OF THE INVENTION
A depth image is generally composed of sharp edge objects and smooth non-edge objects. However, a conventional intra prediction method based on a texture image cannot describe edge information well. Therefore, at the 98th MPEG meeting, HHI (Heinrich Hertz Institute) proposed an intra prediction method based on depth modeling to code the depth image (H. Schwarz, K. Wegner, "Test Model under Consideration for HEVC based 3D video coding", ISO/IEC JTC1/SC29/WG11 MPEG, Doc. M12350, Nov. 2011, Geneva, Switzerland). The method includes four intra prediction modes in total, wherein in DMM3 (Depth Modeling Mode 3) each depth prediction unit (PU) is divided into two parts by a straight line for prediction. As shown in Fig. 1, this prediction method is called the Wedgelet method. Each of the two divided regions is predicted by a constant value.
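To make the Wedgelet partition concrete, the following Python sketch (an illustration, not part of the patent or of the 3D-HEVC reference software) builds the binary mask defined by the line between a start point and an end point on the block boundary, and predicts each of the two regions with a constant value; using the region mean as that constant is an assumption made here for illustration.

```python
import numpy as np

def wedgelet_mask(size, start, end):
    """Boolean mask of a size x size PU: True for region R1, False for R2.

    `start` and `end` are (x, y) points on the block boundary; the line
    between them is the Wedgelet partition line."""
    (x0, y0), (x1, y1) = start, end
    xs, ys = np.meshgrid(np.arange(size), np.arange(size))
    # The sign of the cross product tells on which side of the line a pixel lies.
    side = (x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0)
    return side >= 0

def wedgelet_predict(depth_block, mask):
    """Predict each region by a constant value (here, the region mean)."""
    pred = np.empty_like(depth_block, dtype=float)
    for region in (mask, ~mask):
        if region.any():          # degenerate lines leaving an empty region are not handled
            pred[region] = depth_block[region].mean()
    return pred
```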
For the Wedgelet method, each PU size corresponds to a number of prediction modes. The correspondence between PU sizes and the numbers N of prediction modes is shown in Table 1.
Table 1: correspondence between PU sizes and the numbers of DMM3 prediction modes

PU size    N
4 x 4      86
8 x 8      782
16 x 16    1394
32 x 32    1503
64 x 64    None

In order to obtain an optimal Wedgelet pattern, a coder takes the Co-located Texture Luma Block (CTLB) at the same position as a current depth PU as the original depth image block, calculates a rate distortion cost for each Wedgelet pattern of the depth prediction unit based thereon, and selects the pattern with the smallest rate distortion cost as the optimal Wedgelet pattern.
In Depth Modeling Mode 3 (DMM3) of prior-art 3D High Efficiency Video Coding (3D-HEVC), the search for the optimal Wedgelet pattern is performed in two cases:
1. When the CTLB is intra mode coded and the intra coding mode is 2-34, the prediction of the CTLB has directionality. Firstly, intra mode direction information is used to define a relatively small set of Wedgelet patterns, and then the optimal Wedgelet pattern is searched for in this set. This relatively small set is the set of available Wedgelet patterns defined according to the intra mode direction information. If the 4 x 4 block at the top left of the CTLB adopts an intra prediction mode, a Wedgelet pattern is taken as available when the difference between the intra prediction direction to which the Wedgelet pattern is mapped and the intra prediction direction of the texture block is within a predetermined range; otherwise, the Wedgelet pattern is not taken as available. Such a limitation effectively narrows down the search space and decreases the time complexity.
2. When the CTLB is inter mode coded, or is intra mode coded but the intra coding mode is 0 or 1, the mode of the CTLB does not have obvious directionality. In order to decrease the search complexity, G. Tech et al. proposed the concept of a coarse set of Wedgelet patterns (G. Tech, K. Wegner, Y. Chen, S. Yea, "3D-HEVC Test Model 2", ISO/IEC JTC1/SC29/WG11 MPEG, Doc. JCT3V-B1005_d0.DOCX, Oct. 2012, Shanghai, China; Joint Collaborative Team on 3D Video Coding Extension Development of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 2nd Meeting: Shanghai, CN, 13-19 Oct. 2012). In this scheme, the search for an optimal Wedgelet pattern first constructs a coarse set of Wedgelet patterns and searches for the optimal pattern therein, and then searches at most 8 most-adjacent patterns (referred to as a set of fine Wedgelet patterns) around that pattern, thereby obtaining the final optimal Wedgelet pattern. As stated in the two above reference documents and shown in Fig. 1 and Fig. 2, a Wedgelet pattern is defined by a line segment, that is, by a start point and an end point of the line segment. In the set composed of all available Wedgelet patterns, the start points and the end points of the Wedgelet patterns can traverse all positions (within the range allowed by the Wedgelet pattern accuracy). For example, both the hollow points and the solid points in Fig. 2 can be taken as the start point and the end point of a Wedgelet pattern. A coarse set of Wedgelet patterns, however, only comprises a part of the available Wedgelet patterns: their start points can only traverse one point out of every K start points, that is, one point is selected every K points, and likewise their end points can only traverse one point out of every K end points. For example, for the solid points of Fig. 2 with K = 2, every other point is selected as a start point or an end point, so the coarse set of Wedgelet patterns in Fig. 2 is the set of patterns defined by straight lines between the solid points. From Fig. 2, it can be seen that the coarse set of Wedgelet patterns only has approximately 1/K² the size of the set of all available Wedgelet patterns. Conventionally, K is set to 2.
When the CTLB is inter mode coded, or is intra mode coded but the intra coding mode is 0 or 1, such a scheme still requires the decoder to perform a search, even though it introduces the concept of limiting the prediction modes to be searched and defines a smaller set of available prediction modes, which greatly increases the complexity of the decoder and the decoding time.
SUMMARY OF THE INVENTION
An object of the invention is to overcome the shortcoming that a decoder needs to perform a search when a CTLB is inter mode coded, or is intra mode coded but the intra coding mode is 0 or 1, which increases the complexity of the decoder and the decoding time.
In order to achieve the above object, an embodiment of the invention discloses a depth modeling mode coding method, including: constructing a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit; selecting an optimal Wedgelet pattern from the coarse set of Wedgelet patterns; coding index information of the optimal Wedgelet pattern, and performing depth modeling mode coding on depth data according to the optimal Wedgelet pattern, so as to be transmitted to a video decoder.
In order to achieve the above object, an embodiment of the invention discloses a depth modeling mode decoding method, including: constructing a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit; decoding coded index information of an optimal Wedgelet pattern of the depth prediction unit received from a video coder; selecting, according to the index information, the optimal Wedgelet pattern of the depth prediction unit from the coarse set of Wedgelet patterns; and performing depth modeling mode decoding on coded depth data received from the video coder according to the optimal Wedgelet pattern.
In order to achieve the above object, an embodiment of the invention discloses a video coder, including: a coarse Wedgelet pattern set constructing module, configured to construct a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit; an optimal Wedgelet pattern selecting module, configured to select an optimal Wedgelet pattern from the coarse set of Wedgelet patterns; and a coding module, configured to code index information of the optimal Wedgelet pattern, and perform depth modeling mode coding on depth data according to the optimal Wedgelet pattern, so as to be transmitted to a video decoder.
In order to achieve the above object, an embodiment of the invention discloses a video decoder, including: a coarse Wedgelet pattern set constructing module, configured to construct a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit; an index information decoding module, configured to decode coded index information of an optimal Wedgelet pattern of the depth prediction unit received from a video coder; an optimal Wedgelet pattern selecting module, configured to select, according to the index information, the optimal Wedgelet pattern of the depth prediction unit from the coarse set of Wedgelet patterns; and a decoding module, configured to perform depth modeling mode decoding on coded depth data received from the video coder according to the optimal Wedgelet pattern.
It can be realized according to the embodiments of the invention that when a CTLB is inter mode coded, or is intra mode coded but the intra coding mode is 0 or 1, a coarse set of Wedgelet patterns is first determined from the prediction modes of a depth prediction unit, and an optimal Wedgelet pattern is obtained from the coarse set of Wedgelet patterns, which may reduce the number of bits used in coding and decoding and decrease the complexity of coding and decoding; moreover, after the optimal Wedgelet pattern has been obtained, the coding end codes the index information of the optimal Wedgelet pattern to be provided to the decoding end, so that the decoding end does not need to search for the optimal Wedgelet pattern when performing depth modeling mode decoding, but can obtain it for decoding directly from the coarse set of Wedgelet patterns by means of the index information; it is therefore not necessary to search a fine set of Wedgelet patterns, which greatly decreases decoding complexity, shortens decoding time by 2% and has little influence on compression efficiency.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to more clearly explain the technical solutions of the embodiments of the invention and of the prior art, the drawings necessary for the description of the embodiments or of the prior art are briefly introduced below. It is appreciated that the drawings described in the following illustrate only some embodiments of the invention; persons skilled in the art can obtain other drawings from these drawings without creative work.
Fig. 1 is a schematic diagram of division of a depth prediction unit according to the Wedgelet method in the background art of the invention;
Fig.2 is a schematic diagram of a coarse set of Wedgelet patterns in the background art of the invention;
Fig.3 is a flow chart of a depth modeling mode coding method according to an embodiment of the invention;
Fig.4 is a flow chart of a depth modeling mode decoding method according to an embodiment of the invention;
Fig.5 is a structural schematic diagram of a video coder according to an embodiment of the invention; and
Fig.6 is a structural schematic diagram of a video decoder according to an embodiment of the invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Hereinafter, the technical solutions according to the embodiments of the invention will be clearly and fully described in connection with the drawings of the embodiments of the invention. It is appreciated that the described embodiments are only a part of embodiments of the invention, instead of all the embodiments. All the other embodiments obtained by persons skilled in the art based on the described embodiments of the invention without creative work fall within the protection scope of the invention.
An embodiment of the invention provides a simplification scheme for coding and decoding based on the Depth Modeling Mode when a CTLB is inter mode coded, or is intra mode coded but the intra coding mode is 0 or 1, mainly for the third Depth Modeling Mode (DMM3) of 3D High Efficiency Video Coding (3D-HEVC). Fig. 3 is a flow chart of a depth modeling mode coding method. As shown in Fig. 3, the method may include the following steps:
step 301: constructing a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit;
step 302: selecting an optimal Wedgelet pattern from the coarse set of Wedgelet patterns; and
step 303: coding index information of the optimal Wedgelet pattern, and performing depth modeling mode coding on depth data according to the optimal Wedgelet pattern, the coded index information and coded depth data being transmitted to a video decoder. For the depth data and the coding of the depth data, reference can be made to 3D-HEVC.
It can be seen from the flowchart as shown in Fig.3 that according to the embodiment of the invention, in depth modeling inter mode coding or intra mode coding, a coarse set of Wedgelet patterns is constructed from the prediction modes of a depth prediction unit, and then an optimal Wedgelet pattern is selected from the coarse set of Wedgelet patterns, which may make the searching range for the optimal Wedgelet pattern relatively small, reduce the number of bits to be used in coding, and decrease the complexity of coding.
Particularly, it is possible to select available Wedgelet patterns from the prediction modes of the depth prediction unit so as to construct the coarse set of Wedgelet patterns. A specific method for constructing the coarse set of Wedgelet patterns is as follows.
A Wedgelet pattern is defined by a line segment, that is, by a start point and an end point of a line segment. In the set of all available Wedgelet patterns, the start points and the end points of the Wedgelet patterns can traverse all positions within the range allowed by the Wedgelet pattern accuracy. A part of the available Wedgelet patterns are selected to construct the coarse set of Wedgelet patterns, wherein the start points of the selected available Wedgelet patterns can only traverse one point out of every K start points, that is, one point is selected every K points, and the end points of the selected available Wedgelet patterns can likewise only traverse one point out of every K end points. Therefore, the coarse set of Wedgelet patterns only has approximately 1/K² the size of the set of all available Wedgelet patterns. In the present embodiment, K can be set to 2 or to a positive integer greater than 2. However, it will be appreciated by persons skilled in the art that the construction of the coarse set of Wedgelet patterns described herein is only an example; other ways can be employed according to different demands in implementation, so as to obtain the available Wedgelet patterns.
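The following sketch illustrates this construction under simplifying assumptions (full-sample pattern accuracy and no validity checks on start/end point pairs; the normative DMM3 enumeration is not reproduced): boundary positions of the PU serve as candidate start and end points, and keeping every K-th start point together with every K-th end point yields a coarse set of roughly 1/K² the size of the full set.

```python
def boundary_positions(size):
    """All integer positions on the border of a size x size block, clockwise."""
    top = [(x, 0) for x in range(size)]
    right = [(size - 1, y) for y in range(1, size)]
    bottom = [(x, size - 1) for x in range(size - 2, -1, -1)]
    left = [(0, y) for y in range(size - 2, 0, -1)]
    return top + right + bottom + left

def coarse_wedgelet_set(size, K=2):
    """(start, end) pairs of the coarse set: one start/end kept out of every K."""
    points = boundary_positions(size)
    starts = points[::K]
    ends = points[::K]
    return [(s, e) for s in starts for e in ends if s != e]
```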
It can be further seen from the flowchart shown in Fig. 3 that in the depth modeling mode coding of the embodiment of the invention, the optimal Wedgelet pattern is selected from the coarse set of Wedgelet patterns by: calculating, for each Wedgelet pattern of the coarse set of Wedgelet patterns, a mean square error of the original pixel values and the predicted pixel values of the corresponding depth prediction unit to obtain a distortion cost; and then selecting the Wedgelet pattern with the smallest distortion cost from the coarse set of Wedgelet patterns as the optimal Wedgelet pattern of the depth prediction unit. Compared to a prior-art method in which a mean square error of reconstructed pixel values and predicted pixel values of a texture image unit is calculated to obtain a distortion cost, the embodiment of the invention can ensure the acquisition of the optimal prediction mode.
Particularly, the step of calculating, for each Wedgelet pattern of the coarse set of Wedgelet patterns, the mean square error of the original pixel value and the predicted pixel value of the corresponding depth prediction unit to obtain the distortion cost is performed by the following formula:
J = \sum_{i \in R_1} (I_D(i) - I_D(R_1))^2 + \sum_{i \in R_2} (I_D(i) - I_D(R_2))^2

wherein J is the distortion cost of a Wedgelet pattern of the coarse set of Wedgelet patterns, I_D is the depth prediction unit, I_D(i) is the original pixel value at position i in the depth prediction unit, I_D(R_1) is the predicted pixel value of the prediction region R_1 in the depth prediction unit, and I_D(R_2) is the predicted pixel value of the prediction region R_2 in the depth prediction unit.
After the acquisition of the distortion costs of the Wedgelet patterns of the coarse set, a prediction mode with the smallest distortion cost is selected from the coarse set of Wedgelet patterns, as an optimal Wedgelet pattern of the depth prediction unit:
min{J}.
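A minimal sketch of this selection step follows; it reuses wedgelet_mask and coarse_wedgelet_set from the earlier sketches and takes the region mean as the constant predictor I_D(R_1) and I_D(R_2), which is an assumption for illustration rather than the normative predictor.

```python
import numpy as np

def distortion_cost(depth_block, mask):
    """J = sum over R1 of (I_D(i) - I_D(R1))^2 + sum over R2 of (I_D(i) - I_D(R2))^2."""
    j = 0.0
    for region in (mask, ~mask):
        if region.any():
            samples = depth_block[region].astype(float)
            # Constant predictor of the region is its mean (illustrative assumption).
            j += float(np.sum((samples - samples.mean()) ** 2))
    return j

def select_optimal_pattern(depth_block, patterns):
    """Return (index, pattern) of the coarse pattern with the smallest cost J."""
    size = depth_block.shape[0]
    costs = [distortion_cost(depth_block, wedgelet_mask(size, s, e))
             for (s, e) in patterns]
    best = int(np.argmin(costs))
    return best, patterns[best]
```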
It can further be seen from the flowchart shown in Fig. 3 that in the depth modeling mode coding of the embodiment of the invention, after the acquisition of the optimal Wedgelet pattern, the index information of the optimal Wedgelet pattern is coded and provided to the decoding end, so that the decoding end does not need to search a fine set of Wedgelet patterns when performing depth modeling mode decoding, as is done in the prior art. The optimal prediction mode can be obtained for decoding directly from the coarse set of Wedgelet patterns by means of the index information, which greatly decreases the decoding complexity and shortens the decoding time.
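Purely as an illustration of signalling such an index, the sketch below uses a fixed-length binary code over the coarse set; the actual binarization and entropy coding used by 3D-HEVC are not reproduced here.

```python
import math

def index_bits(num_coarse_patterns):
    """Bits needed for a fixed-length index into the coarse set."""
    return max(1, math.ceil(math.log2(num_coarse_patterns)))

def encode_index(index, num_coarse_patterns):
    """Fixed-length bit string identifying the optimal pattern in the coarse set."""
    return format(index, "0{}b".format(index_bits(num_coarse_patterns)))

def decode_index(bits):
    """Recover the pattern index from its fixed-length bit string."""
    return int(bits, 2)
```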
Fig. 4 is a flowchart of a depth modeling mode decoding method according to an embodiment of the invention. As shown in Fig.4, the method may include the following steps: step 401 : constructing a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit;
step 402: decoding coded index information of an optimal Wedgelet pattern of the depth prediction unit received from a video coder;
step 403: selecting, according to the index information, the optimal Wedgelet pattern of the depth prediction unit from the coarse set of Wedgelet patterns; and
step 404: performing depth modeling mode decoding on coded depth data received from the video coder according to the optimal Wedgelet pattern.
It can be seen from the flowchart shown in Fig. 4 that in the depth modeling mode decoding of the embodiment of the invention, a coarse set of Wedgelet patterns is constructed from the prediction modes of a depth prediction unit, and then an optimal Wedgelet pattern is selected from the coarse set of Wedgelet patterns, which may make the search range for the optimal Wedgelet pattern relatively small, reduce the number of bits used in coding, and decrease the coding complexity. In the depth modeling mode decoding of the embodiment of the invention, differently from the prior art in which the decoding complexity is relatively high because the decoding end has to search a fine set of Wedgelet patterns, the index information of the optimal Wedgelet pattern provided by the coding end can be decoded directly, and the optimal Wedgelet pattern can be obtained from the available Wedgelet patterns according to the index information, so that it is not necessary to search for the optimal Wedgelet pattern, which greatly decreases the decoding complexity and the decoding time.
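Putting the decoder-side steps 401 to 404 together, the following sketch reuses the helper functions from the earlier sketches; the function and parameter names are assumptions made for illustration and are not part of the 3D-HEVC decoder.

```python
def decode_dmm3_partition(coded_bits, pu_size, K=2):
    """Rebuild the Wedgelet partition signalled by the coder, with no search."""
    patterns = coarse_wedgelet_set(pu_size, K)   # step 401: same coarse set as the coder
    idx = decode_index(coded_bits)               # step 402: decode the index information
    start, end = patterns[idx]                   # step 403: direct look-up, no distortion search
    return wedgelet_mask(pu_size, start, end)    # step 404: partition used to decode the depth data
```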
Similarly to the processing flow of the depth modeling mode coding method, particularly, it is possible for the decoding end to select the available Wedgelet patterns from the prediction modes of the depth prediction unit so as to construct the coarse set of Wedgelet patterns. A specific method for constructing the coarse set of Wedgelet patterns is as follows.
A Wedgelet pattern is defined by a line segment, that is, by a start point and an end point of a line segment. In the set of all available Wedgelet patterns, the start points and the end points of the Wedgelet patterns can traverse all positions within the range allowed by the Wedgelet pattern accuracy. A part of the available Wedgelet patterns are selected to construct the coarse set of Wedgelet patterns, wherein the start points of the selected available Wedgelet patterns can only traverse one point out of every K start points, that is, one point is selected every K points, and the end points of the selected available Wedgelet patterns can likewise only traverse one point out of every K end points. Therefore, the coarse set of Wedgelet patterns only has approximately 1/K² the size of the set of all available Wedgelet patterns. In the present embodiment, K can be set to 2 or to a positive integer greater than 2. It will be appreciated by persons skilled in the art that the construction of the coarse set of Wedgelet patterns described herein is only an example; other ways can be employed according to different demands in implementation, so as to obtain the available Wedgelet patterns.
Based on the same invention concept, embodiments of the invention provide a video coder and a video decoder as described in the following embodiments.
Fig.5 is a structural schematic diagram of a video coder according to an embodiment of the invention. As shown in Fig.5, the video coder may include:
a coarse Wedgelet pattern set constructing module 501, configured to construct a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit;
an optimal Wedgelet pattern selecting module 502, configured to select an optimal Wedgelet pattern from the coarse set of Wedgelet patterns; and
a coding module 503, configured to code index information of the optimal Wedgelet pattern, and perform depth modeling mode coding on depth data according to the optimal Wedgelet pattern, and the coded index information and the coded depth data being transmitted to a video decoder.
Particularly, the coarse Wedgelet pattern set constructing module 501 may be further configured to:
select the available Wedgelet patterns from the prediction modes of the depth prediction unit so as to construct the coarse set of Wedgelet patterns. A specific method for constructing the coarse set of Wedgelet patterns is as follows.
A Wedgelet pattern is defined by a line segment, that is, by a start point and an end point of a line segment. In the set composed of all available Wedgelet patterns, the start points and the end points of the Wedgelet patterns can traverse all positions within the range allowed by the Wedgelet pattern accuracy. A part of the available Wedgelet patterns are selected to construct the coarse set of Wedgelet patterns, wherein the start points of the selected available Wedgelet patterns can only traverse one point out of every K start points, that is, one point is selected every K points, and the end points of the selected available Wedgelet patterns can likewise only traverse one point out of every K end points. Therefore, the coarse set of Wedgelet patterns only has approximately 1/K² the size of the set of all available Wedgelet patterns. In the present embodiment, K can be set to 2 or to a positive integer greater than 2. It will be appreciated by persons skilled in the art that the construction of the coarse set of Wedgelet patterns described herein is only an example; other ways can be employed according to different demands in implementation, so as to obtain the available Wedgelet patterns.
Particularly, the optimal Wedgelet pattern selecting module 502 may be further configured to: calculate, for each Wedgelet pattern of the coarse set of Wedgelet patterns, a mean square error of the original pixel values and the predicted pixel values of the depth prediction unit to obtain a distortion cost; and select the Wedgelet pattern with the smallest distortion cost from the coarse set of Wedgelet patterns as the optimal Wedgelet pattern of the depth prediction unit.
Particularly, for each Wedgelet pattern of the coarse set of Wedgelet patterns, the mean square error of the original pixel value and the predicted pixel value of the corresponding depth prediction unit is calculated to obtain the distortion cost by the following formula:
J = \sum_{i \in R_1} (I_D(i) - I_D(R_1))^2 + \sum_{i \in R_2} (I_D(i) - I_D(R_2))^2

wherein J is the distortion cost of each Wedgelet pattern of the coarse set of Wedgelet patterns, I_D is the depth prediction unit, I_D(i) is the original pixel value at position i in the depth prediction unit, I_D(R_1) is the predicted pixel value of the prediction region R_1 in the depth prediction unit, and I_D(R_2) is the predicted pixel value of the prediction region R_2 in the depth prediction unit.
After the acquisition of the distortion costs of the Wedgelet patterns of the coarse set, a prediction mode with the smallest distortion cost is selected from the coarse set of Wedgelet patterns, as an optimal Wedgelet pattern of the depth prediction unit:
min{J}.
Fig.6 is a structural schematic diagram of a video decoder according to an embodiment of the invention. As shown in Fig.6, the video decoder may include: a coarse Wedgelet pattern set constructing module 601, configured to construct a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit;
an index information decoding module 602, configured to decode coded index information of an optimal Wedgelet pattern of the depth prediction unit received from a video coder;
an optimal Wedgelet pattern selecting module 603, configured to select, according to the index information, the optimal Wedgelet pattern of the depth prediction unit from the coarse set of Wedgelet patterns; and
a decoding module 604, configured to perform depth modeling mode decoding on coded depth data received from the video coder according to the optimal Wedgelet pattern.
Particularly, the coarse Wedgelet pattern set constructing module 601 may be further configured to: select the available Wedgelet patterns from the prediction modes of the depth prediction unit so as to construct the coarse set of Wedgelet patterns. A specific method for constructing the coarse set of Wedgelet patterns is as follows.
A Wedgelet pattern is defined by a line segment, that is, by a start point and an end point of a line segment. In the set composed of all available Wedgelet patterns, the start points and the end points of the Wedgelet patterns can traverse all positions within the range allowed by the Wedgelet pattern accuracy. A part of the available Wedgelet patterns are selected to construct the coarse set of Wedgelet patterns, wherein the start points of the selected available Wedgelet patterns can only traverse one point out of every K start points, that is, one point is selected every K points, and the end points of the selected available Wedgelet patterns can likewise only traverse one point out of every K end points. Therefore, the coarse set of Wedgelet patterns only has approximately 1/K² the size of the set of all available Wedgelet patterns. In the present embodiment, K can be set to 2 or to a positive integer greater than 2. It will be appreciated by persons skilled in the art that the construction of the coarse set of Wedgelet patterns described herein is only an example; other ways can be employed according to different demands in implementation, so as to obtain the available Wedgelet patterns.
To sum up, it can be realized according to the embodiments of the invention that when a CTLB is inter mode coded, or is intra mode coded but the intra coding mode is 0 or 1, a coarse set of Wedgelet patterns is first determined from the prediction modes of a depth prediction unit, and an optimal Wedgelet pattern is obtained from the coarse set of Wedgelet patterns, which may reduce the number of bits used in coding and decoding and decrease the complexity of coding and decoding; moreover, after the optimal Wedgelet pattern has been obtained, the coding end codes the index information of the optimal Wedgelet pattern to be provided to the decoding end, so that the decoding end does not need to search for the optimal Wedgelet pattern when performing depth modeling mode decoding. The optimal Wedgelet pattern can be obtained for decoding directly from the coarse set of Wedgelet patterns by means of the index information, and it is not necessary to search a fine set of Wedgelet patterns, which greatly decreases the decoding complexity, shortens the decoding time by 2% and has little influence on compression efficiency.
The embodiments of the invention can be applied to 3D video coding and decoding as well as the multi-view video coding and decoding, for example, more specifically, to the inter-frame mode coding and decoding and part of the intra mode coding and decoding of the depth image in 3D-HEVC.
Persons skilled in the art shall understand that the embodiments of the invention can be provided as a method, system, or computer program product. Therefore, the invention can adopt the form of hardware-only embodiment, software-only embodiment, or the software-hardware combined embodiment. Moreover, the invention can adopt the form of computer program product implemented on one or more computer applicable storage media (comprising, but not be limited to, disc storage, CD-ROM and optical memory, etc.) containing the computer applicable program codes therein.
The invention is described with reference to the flow charts and/or block diagrams of the method, apparatus (system) and computer program product according to the embodiments of the invention. It is to be understood that each process and/or block in the flow charts and/or block diagrams, and combinations of processes and/or blocks in the flow charts and/or block diagrams, can be implemented by computer program instructions. The computer program instructions can be provided to processors of general-purpose computers, special-purpose computers, embedded processors or other programmable data processing apparatuses to produce a machine, so that the instructions executed by the processors of the computers or other programmable data processing apparatuses generate a device for implementing the functions specified in one or more flows in the flow charts and/or one or more blocks in the block diagrams.
The computer program instructions can be also stored in the computer readable memory capable of leading the computer or other programmable data processing apparatuses to operate in a specific way, so that the instructions stored in the computer readable memory generate the manufacturing product containing the instruction device which implements the functions specified in one or more flows in the flow charts and/or one or more blocks in the block diagrams.
These computer program instructions can also be loaded onto a computer or another programmable data processing apparatus, such that a series of operation steps are executed on the computer or other programmable apparatus to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flow charts and/or one or more blocks of the block diagrams.
The above embodiments further explain the objects, technical solutions and advantageous effects of the invention in detail. It is to be understood that the above are only specific embodiments of the invention and are not intended to limit the protection scope of the invention. Any amendments, equivalents, improvements and the like made within the spirit and principle of the invention are included within the protection scope of the invention.

Claims

What is claimed is:
1. A depth modeling mode coding method, comprising:
constructing a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit;
selecting an optimal Wedgelet pattern from the coarse set of Wedgelet patterns; and
coding index information of the optimal Wedgelet pattern, and performing depth modeling mode coding on depth data according to the optimal Wedgelet pattern, so as to be transmitted to a video decoder.
2. The coding method according to claim 1, wherein the step of constructing a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit comprises:
selecting available Wedgelet patterns from the prediction modes of the depth prediction unit so as to construct the coarse set of Wedgelet patterns,
wherein a Wedgelet pattern is defined by a start point and an end point of a line segment, and the start points of the line segments of the selected available Wedgelet patterns traverse one point out of every K start points, the end points of the line segments of the selected available Wedgelet patterns traverse one point out of every K end points, wherein K is a positive integer greater than or equal to 2.
3. The coding method according to claim 1, wherein the step of selecting an optimal Wedgelet pattern from the coarse set of Wedgelet patterns comprises:
calculating, for each Wedgelet pattern of the coarse set of Wedgelet patterns, a mean square error between an original pixel value and a predicted pixel value of the respective depth prediction unit to obtain a distortion cost; and
selecting a Wedgelet pattern with the smallest distortion cost from the coarse set of Wedgelet patterns as the optimal Wedgelet pattern for the depth prediction unit.
4. The coding method according to claim 3, wherein for each Wedgelet pattern of the coarse set of Wedgelet patterns, the mean square error between the original pixel value and the predicted pixel value of the corresponding depth prediction unit is calculated to obtain the distortion cost by the following formula:

J = Σ_{i∈R1} (I_D(i) − I_D(R1))² + Σ_{i∈R2} (I_D(i) − I_D(R2))²

wherein J is the distortion cost for the coarse Wedgelet pattern, I_D is the depth prediction unit, I_D(i) is the original pixel value at position i in the depth prediction unit, I_D(R1) is the predicted pixel value of the prediction region R1 in the depth prediction unit, and I_D(R2) is the predicted pixel value of the prediction region R2 in the depth prediction unit.
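A minimal numeric sketch of this distortion cost, assuming the predicted value of each region is its constant mean depth and that a boolean mask marks the pixels a Wedgelet pattern assigns to region R1, might look as follows (the mask construction itself is omitted; the function name distortion_cost is a hypothetical helper):

```python
import numpy as np

def distortion_cost(depth_block, r1_mask):
    """J = sum_{i in R1}(I_D(i) - I_D(R1))^2 + sum_{i in R2}(I_D(i) - I_D(R2))^2,
    with the predicted value of each region taken as its mean (an assumption)."""
    depth = np.asarray(depth_block, dtype=np.float64)
    mask = np.asarray(r1_mask, dtype=bool)
    r1, r2 = depth[mask], depth[~mask]          # pixels on each side of the line
    pred_r1 = r1.mean() if r1.size else 0.0
    pred_r2 = r2.mean() if r2.size else 0.0
    return float(np.sum((r1 - pred_r1) ** 2) + np.sum((r2 - pred_r2) ** 2))
```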
5. A depth modeling mode decoding method, comprising:
constructing a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit;
decoding coded index information of an optimal Wedgelet pattern of the depth prediction unit received from a video coder;
selecting, according to the index information, an optimal Wedgelet pattern of the depth prediction unit from the coarse set of Wedgelet patterns; and
performing depth modeling mode decoding on coded depth data received from the video coder according to the optimal Wedgelet pattern.
6. The decoding method according to claim 5, wherein the step of constructing a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit comprises:
selecting available Wedgelet patterns from the prediction modes of the depth prediction unit so as to construct the coarse set of Wedgelet patterns,
wherein a Wedgelet pattern is defined by a start point and an end point of a line segment, and the start points of the line segments of the selected available Wedgelet patterns traverse one point out of every K start points, the end points of the line segments of the selected available Wedgelet patterns traverse one point out of every K end points, wherein K is a positive integer greater than or equal to 2.
7. A video coder, comprising:
a coarse Wedgelet pattern set constructing module, configured to construct a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit;
an optimal Wedgelet pattern selecting module, configured to select an optimal Wedgelet pattern from the coarse set of Wedgelet patterns; and
a coding module, configured to code index information of the optimal Wedgelet pattern, and perform depth modeling mode coding on depth data according to the optimal Wedgelet pattern, so as to be transmitted to a video decoder.
8. The video coder according to claim 7, wherein the coarse Wedgelet pattern set constructing module is further configured to:
select available Wedgelet patterns from the prediction modes of the depth prediction unit so as to construct the coarse set of Wedgelet patterns,
wherein a Wedgelet pattern is defined by a start point and an end point of a line segment, and the start points of the line segments of the selected available Wedgelet patterns traverse one point out of every K start points, the end points of the line segments of the selected available Wedgelet patterns traverse one point out of every K end points, wherein K is a positive integer greater than or equal to 2.
9. The video coder according to claim 7, wherein the optimal Wedgelet pattern selecting module is further configured to:
calculate, for each Wedgelet pattern of the coarse set of Wedgelet patterns, a mean square error between an original pixel value and a predicted pixel value of the corresponding depth prediction unit to obtain the distortion cost; and
select a Wedgelet pattern with the smallest distortion cost from the coarse set of Wedgelet patterns as the optimal Wedgelet pattern of the depth prediction unit.
10. The video coder according to claim 9, wherein the optimal Wedgelet pattern selecting module is configured to calculate, for each Wedgelet pattern of the coarse set of Wedgelet patterns, the mean square error between the original pixel value and the predicted pixel value of the corresponding depth prediction unit to obtain the distortion cost by the following formula:

J = Σ_{i∈R1} (I_D(i) − I_D(R1))² + Σ_{i∈R2} (I_D(i) − I_D(R2))²

wherein J is the distortion cost of the coarse Wedgelet pattern, I_D is the depth prediction unit, I_D(i) is the original pixel value at position i in the depth prediction unit, I_D(R1) is the predicted pixel value of the prediction region R1 in the depth prediction unit, and I_D(R2) is the predicted pixel value of the prediction region R2 in the depth prediction unit.
11. A video decoder, comprising:
a coarse Wedgelet pattern set constructing module, configured to construct a coarse set of Wedgelet patterns from prediction modes of a depth prediction unit;
an index information decoding module, configured to decode coded index information of an optimal Wedgelet pattern of the depth prediction unit received from a video coder;
an optimal Wedgelet pattern selecting module, configured to select, according to the index information, the optimal Wedgelet pattern of the depth prediction unit from the coarse set of Wedgelet patterns; and
a decoding module, configured to perform depth modeling mode decoding on coded depth data received from the video coder according to the optimal Wedgelet pattern.
12. The video decoder according to claim 11, wherein the coarse Wedgelet pattern set constructing module is further configured to:
select available Wedgelet patterns from the prediction modes of the depth prediction unit so as to construct the coarse set of Wedgelet patterns,
wherein a Wedgelet pattern is defined by a start point and an end point of a line segment, and the start points of the line segments of the selected available Wedgelet patterns traverse one point out of every K start points, the end points of the line segments of the selected available Wedgelet patterns traverse one point out of every K end points, wherein K is a positive integer greater than or equal to 2.
PCT/CN2013/090691 2013-01-28 2013-12-27 Depth modeling mode coding and decoding method and video codec WO2014114168A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310032018.5 2013-01-28
CN201310032018.5A CN103974063A (en) 2013-01-28 2013-01-28 Encoding and decoding method of depth model and video coder decoder

Publications (1)

Publication Number Publication Date
WO2014114168A1

Family

ID=51226908

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/090691 WO2014114168A1 (en) 2013-01-28 2013-12-27 Depth modeling mode coding and decoding method and video codec

Country Status (2)

Country Link
CN (1) CN103974063A (en)
WO (1) WO2014114168A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016056550A1 (en) * 2014-10-08 2016-04-14 Sharp Corporation Image decoding device
CN106331727A (en) * 2016-08-26 2017-01-11 Tianjin University Simplified search method for depth modeling modes
CN109587503A (en) * 2018-12-30 2019-04-05 Beijing University of Technology A kind of 3D-HEVC depth map intra-frame encoding mode high-speed decision method based on edge detection

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9860562B2 (en) 2014-09-30 2018-01-02 Hfi Innovation Inc. Method of lookup table size reduction for depth modelling mode in depth coding
EP3007448A1 (en) * 2014-10-07 2016-04-13 Canon Kabushiki Kaisha Disparity data encoding apparatus and method of controlling the same for
CN104320656B * 2014-10-30 2019-01-11 Shanghai Jiao Tong University Interframe encoding mode fast selecting method in x265 encoder
CN105007494B * 2015-07-20 2018-11-13 Nanjing University of Science and Technology Wedge-shaped Fractionation regimen selection method in a kind of frame of 3D video depths image
CN116506597A * 2016-08-03 2023-07-28 KT Corporation Video decoding method, video encoding method, and video data transmission method
CN114157863B * 2022-02-07 2022-07-22 Zhejiang Smart Video Security Innovation Center Co., Ltd. Video coding method, system and storage medium based on digital retina


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102742277A (en) * 2010-02-02 2012-10-17 Samsung Electronics Co., Ltd. Method and apparatus for encoding video based on scanning order of hierarchical data units, and method and apparatus for decoding video based on scanning order of hierarchical data units
CN102790892A (en) * 2012-07-05 2012-11-21 Tsinghua University Depth map coding method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GERHARD TECH ET AL.: "3D-HEVC Test Model 2", JOINT COLLABORATIVE TEAM ON 3D VIDEO CODING EXTENSION DEVELOPMENT OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 2ND MEETING, 13 October 2012 (2012-10-13), SHANGHAI, CN, pages 30-31 *
HEIKO SCHWARZ ET AL.: "Test Model under Consideration for HEVC based 3D video coding", ISO/IEC JTC1/SC29/WG11 MPEG2011/M12350, November 2011 (2011-11-01), GENEVA, SWITZERLAND *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016056550A1 (en) * 2014-10-08 2016-04-14 Sharp Corporation Image decoding device
CN106331727A (en) * 2016-08-26 2017-01-11 Tianjin University Simplified search method for depth modeling modes
CN106331727B (en) * 2016-08-26 2019-03-08 Tianjin University A kind of simplified searching method of depth modelling mode
CN109587503A (en) * 2018-12-30 2019-04-05 Beijing University of Technology A kind of 3D-HEVC depth map intra-frame encoding mode high-speed decision method based on edge detection
CN109587503B (en) * 2018-12-30 2022-10-18 Beijing University of Technology 3D-HEVC depth map intra-frame coding mode fast decision method based on edge detection

Also Published As

Publication number Publication date
CN103974063A (en) 2014-08-06

Similar Documents

Publication Publication Date Title
WO2014114168A1 (en) Depth modeling mode coding and decoding method and video codec
US11234002B2 (en) Method and apparatus for encoding and decoding a texture block using depth based block partitioning
TWI736906B (en) Mv precision refine
JP7237874B2 (en) Image prediction method and apparatus
WO2014036848A1 (en) Depth picture intra coding /decoding method and video coder/decoder
CN106797464B (en) Method and apparatus for vector coding in video encoding and decoding
KR101846762B1 (en) Method for sub-pu motion information inheritance in 3d video coding
US10218957B2 (en) Method of sub-PU syntax signaling and illumination compensation for 3D and multi-view video coding
WO2019234607A1 (en) Interaction between ibc and affine
KR20160148005A (en) Method of block vector prediction for intra block copy mode coding
US20220385915A1 (en) Transform Bypass Coded Residual Blocks in Digital Video
US20150264356A1 (en) Method of Simplified Depth Based Block Partitioning
CN112204980B (en) Method and apparatus for inter prediction in video coding system
US9986257B2 (en) Method of lookup table size reduction for depth modelling mode in depth coding
US20140184739A1 (en) Foreground extraction method for stereo video
BR112021003946A2 (en) video encoder, video decoder and corresponding methods
US20150358643A1 (en) Method of Depth Coding Compatible with Arbitrary Bit-Depth
Tsang et al. Standard compliant light field lenslet image coding model using enhanced screen content coding framework
CN105637871B (en) Three-dimensional or multi-view coding method
KR20230003061A (en) Entropy Coding for Motion Precise Syntax
Zhang et al. Simplified reference pixel selection for constant partition value coding in 3D-HEVC
US20230276044A1 (en) Constraints on intra block copy using non-adjacent neighboring blocks
WO2015103747A1 (en) Motion parameter hole filling
Jäger Segment-wise prediction in 3D video coding
EP3110156B1 (en) A system and a method for disoccluded region coding in a multiview video data stream

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13872300

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13872300

Country of ref document: EP

Kind code of ref document: A1