WO2016123774A1 - Encoding and Decoding Method and Codec - Google Patents

Encoding and Decoding Method and Codec

Info

Publication number
WO2016123774A1
WO2016123774A1 · PCT/CN2015/072301
Authority
WO
WIPO (PCT)
Prior art keywords
luma
coordinate
block
sampling point
disparity vector
Prior art date
Application number
PCT/CN2015/072301
Other languages
English (en)
French (fr)
Inventor
陈旭
郑萧桢
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201580000242.3A priority Critical patent/CN104995915B/zh
Priority to PCT/CN2015/072301 priority patent/WO2016123774A1/zh
Publication of WO2016123774A1 publication Critical patent/WO2016123774A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/174Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component

Definitions

  • Embodiments of the present invention relate to the field of video image coding and decoding, and more particularly, to a codec method and codec.
  • hybrid coding structures are commonly used for encoding and decoding video sequences.
  • the coding end of the hybrid coding structure generally includes: a prediction module, a transformation module, a quantization module, and an entropy coding module;
  • the decoding end of the hybrid coding structure generally includes: an entropy decoding module, an inverse quantization module, an inverse transform module, and a prediction compensation module.
  • the combination of these encoding and decoding modules can effectively remove redundant information from the video sequence and ensure that the decoding end obtains the coded images of the video sequence.
  • images of a video sequence are typically divided into image blocks for encoding; each image block is encoded and decoded using the above modules.
  • the prediction module is used by the encoding end to obtain prediction block information for an image block of the video-sequence image and thereby obtain the residual of the image block; the prediction compensation module is used by the decoding end to obtain the prediction block information of the current decoded image block and then reconstruct the current decoded image block from the decoded image-block residual.
  • the prediction module usually includes two techniques of intra prediction and inter prediction.
  • the intra prediction technique uses the spatial pixel information of the current image block to remove redundant information of the current image block to obtain a residual;
  • the inter prediction technique uses pixel information of encoded or decoded images adjacent to the current image to remove redundant information of the current image block and obtain the residual.
  • an image adjacent to a current image for inter prediction is referred to as a reference image.
  • a commonly used block division method divides a square image block into two rectangular regions (rectangular partition) in the horizontal or vertical direction, as shown by A and B in FIG. 1, where the square image block is divided horizontally and vertically, respectively, into two rectangular areas.
  • a square image block can also be divided into two non-rectangular partitions, as shown in FIG. 2.
  • the block division technique described above can also be used for the 3D video codec technology.
  • in 3D video codec technology, depth-based segmentation of texture-map blocks is a common method.
  • the principle is to generate a binarized partition template by using the depth value information corresponding to each sample point in the luma coding block, and divide the luma coding block by using the binarization partition template. This method is also called depth-based block partitioning (DBBP).
  • in DBBP, the depth value information corresponding to each sampling point in the current luma coding block is determined and used to divide the current luma coding block; a disparity vector (DV) between the current view and a reference view is used to locate this depth information in the reference view.
  • Embodiments of the present invention provide a codec method and a codec to improve coding efficiency.
  • an encoding method is provided, including: determining a current luma coding block from a texture map; determining the coordinates of the upper left luma sampling point of the luma coding block, the coordinates indicating the position of the upper left luma sampling point of the luma coding block relative to the upper left luma sampling point of the texture map and including an X coordinate and a Y coordinate; acquiring a disparity vector between the current view and a reference view corresponding to the texture map; determining, according to the X coordinate of the upper left luma sampling point of the luma coding block and the disparity vector, the X coordinate of a target sampling point in the depth map corresponding to the reference view, where the target sampling point is the sampling point in the depth map corresponding to the upper left luma sampling point of the luma coding block; determining the Y coordinate of the upper left luma sampling point of the luma coding block as the Y coordinate of the target sampling point of the depth map; determining, according to the X and Y coordinates of the target sampling point in the depth map and the size of the luma coding block, depth value information corresponding to each sampling point in the luma coding block; obtaining a block division of the luma coding block according to the depth value information and dividing the luma coding block; and encoding the divided luma coding block.
  • determining the X coordinate of the target sampling point includes: determining, according to the disparity vector, an offset between the X coordinate of the upper left luma sampling point of the luma coding block and the X coordinate of the target sampling point in the depth map; and determining the X coordinate of the target sampling point in the depth map according to the X coordinate of the upper left luma sampling point of the luma coding block and the offset.
  • determining the offset includes adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding down to obtain the offset.
  • acquiring the disparity vector between the current view and the reference view corresponding to the texture map includes: determining the disparity vector according to a depth refinement flag.
  • determining the disparity vector according to the depth refinement flag includes: when the depth refinement flag is 0, determining the neighboring block disparity vector NBDV as the disparity vector; when the depth refinement flag is 1, determining the depth-based neighboring block disparity vector DoNBDV as the disparity vector.
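The method just summarized can be sketched in Python. This is a sketch, not the normative derivation: the function name is hypothetical, the clip bound follows one of the two clipping definitions given later in the text, and quarter-sample disparity-vector precision is assumed from the rounding rule.

```python
def depth_map_target_coords(x_luma, y_luma, dv_x, pic_width, block_width):
    """Map the upper-left luma sample of the current block to the target
    sample in the reference view's depth map (hypothetical helper)."""
    # Offset: add 2 to the horizontal DV component, divide by 4, round
    # down -- the DV is assumed to be in quarter-sample precision.
    offset = (dv_x + 2) >> 2
    # Clip the X coordinate into the picture, using the first of the two
    # bounds described in the text (picture width minus block width).
    x_depth = min(max(x_luma + offset, 0), pic_width - block_width)
    # The Y coordinate is copied directly; no vertical-DV arithmetic.
    y_depth = y_luma
    return x_depth, y_depth
```

With the values from the later FIG. 7 example, a block at (368, 64) and a horizontal disparity component of -250 map to the depth-map sample (306, 64).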
  • a decoding method is provided, including: determining a current luma decoding block from a texture map; determining the coordinates of the upper left luma sampling point of the luma decoding block, the coordinates indicating the position of the upper left luma sampling point of the luma decoding block relative to the upper left luma sampling point of the texture map and including an X coordinate and a Y coordinate; acquiring a disparity vector between the current view and a reference view corresponding to the texture map; determining, according to the X coordinate of the upper left luma sampling point of the luma decoding block and the disparity vector, the X coordinate of a target sampling point in the depth map corresponding to the reference view, where the target sampling point is the sampling point in the depth map corresponding to the upper left luma sampling point of the luma decoding block; determining the Y coordinate of the upper left luma sampling point of the luma decoding block as the Y coordinate of the target sampling point of the depth map; determining, according to the X and Y coordinates of the target sampling point in the depth map and the size of the luma decoding block, depth value information corresponding to each sampling point in the luma decoding block; obtaining a block division of the luma decoding block according to the depth value information and dividing the luma decoding block; and decoding the divided luma decoding block.
  • determining the X coordinate of the target sampling point includes: determining, according to the disparity vector, an offset between the X coordinate of the upper left luma sampling point of the luma decoding block and the X coordinate of the target sampling point in the depth map; and determining the X coordinate of the target sampling point in the depth map according to the X coordinate of the upper left luma sampling point of the luma decoding block and the offset.
  • determining the offset includes adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding down to obtain the offset.
  • acquiring the disparity vector between the current view and the reference view corresponding to the texture map includes: determining the disparity vector according to a depth refinement flag.
  • determining the disparity vector according to the depth refinement flag includes: when the depth refinement flag is 0, determining the neighboring block disparity vector NBDV as the disparity vector; when the depth refinement flag is 1, determining the depth-based neighboring block disparity vector DoNBDV as the disparity vector.
  • an encoder is provided, including: a first determining unit, configured to determine a current luma coding block from a texture map; a second determining unit, configured to determine the coordinates of the upper left luma sampling point of the luma coding block, the coordinates indicating the position of the upper left luma sampling point of the luma coding block relative to the upper left luma sampling point of the texture map and including an X coordinate and a Y coordinate; an acquiring unit, configured to acquire a disparity vector between the current view and a reference view corresponding to the texture map; a third determining unit, configured to determine, according to the X coordinate of the upper left luma sampling point of the luma coding block and the disparity vector, the X coordinate of a target sampling point in the depth map corresponding to the reference view, where the target sampling point is the sampling point in the depth map corresponding to the upper left luma sampling point of the luma coding block; a fourth determining unit, configured to determine the Y coordinate of the upper left luma sampling point of the luma coding block as the Y coordinate of the target sampling point of the depth map; a fifth determining unit, configured to determine, according to the X and Y coordinates of the target sampling point in the depth map and the size of the luma coding block, depth value information corresponding to each sampling point in the luma coding block; a block dividing unit, configured to obtain a block division of the luma coding block according to the depth value information and divide the luma coding block; and an encoding unit, configured to encode the divided luma coding block.
  • the third determining unit is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the upper left luma sampling point of the luma coding block and the X coordinate of the target sampling point in the depth map, and to determine the X coordinate of the target sampling point in the depth map according to the X coordinate of the upper left luma sampling point and the offset.
  • the third determining unit is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round down to obtain the offset.
  • the acquiring unit is specifically configured to determine the disparity vector according to the depth refinement flag.
  • the acquiring unit is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector NBDV as the disparity vector; when the depth refinement flag is 1, determine the depth-based neighboring block disparity vector DoNBDV as the disparity vector.
  • a decoder is provided, including: a first determining unit, configured to determine a current luma decoding block from a texture map; a second determining unit, configured to determine the coordinates of the upper left luma sampling point of the luma decoding block, the coordinates indicating the position of the upper left luma sampling point of the luma decoding block relative to the upper left luma sampling point of the texture map and including an X coordinate and a Y coordinate; an acquiring unit, configured to acquire a disparity vector between the current view and a reference view corresponding to the texture map; a third determining unit, configured to determine, according to the X coordinate of the upper left luma sampling point of the luma decoding block and the disparity vector, the X coordinate of a target sampling point in the depth map corresponding to the reference view, where the target sampling point is the sampling point in the depth map corresponding to the upper left luma sampling point of the luma decoding block; a fourth determining unit, configured to determine the Y coordinate of the upper left luma sampling point of the luma decoding block as the Y coordinate of the target sampling point of the depth map; a fifth determining unit, configured to determine, according to the X and Y coordinates of the target sampling point in the depth map and the size of the luma decoding block, depth value information corresponding to each sampling point in the luma decoding block; a block dividing unit, configured to obtain a block division of the luma decoding block according to the depth value information and divide the luma decoding block; and a decoding unit, configured to decode the divided luma decoding block.
  • the third determining unit is configured to determine, according to the disparity vector, an offset between the X coordinate of the upper left luma sampling point of the luma decoding block and the X coordinate of the target sampling point in the depth map, and to determine the X coordinate of the target sampling point in the depth map according to the X coordinate of the upper left luma sampling point of the luma decoding block and the offset.
  • the third determining unit is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round down to obtain the offset.
  • the acquiring unit is specifically configured to determine the disparity vector according to the depth refinement flag.
  • the acquiring unit is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector NBDV as the disparity vector; when the depth refinement flag is 1, determine the depth-based neighboring block disparity vector DoNBDV as the disparity vector.
  • the embodiment of the present invention directly determines the Y coordinate of the upper left luma sampling point of the luma coding block as the Y coordinate of the corresponding sampling point in the depth map of the reference view, omitting the computational overhead of calculating that Y coordinate from the Y coordinate of the upper left luma sampling point and the disparity vector, which improves coding efficiency.
  • FIG. 1 is an exemplary diagram of a block division manner.
  • FIG. 2 is an exemplary diagram of a block division manner.
  • FIG. 3 is a schematic diagram of the principle of DBBP.
  • FIG. 4 is a schematic flowchart of an encoding method according to an embodiment of the present invention.
  • FIG. 5 is a diagram showing the coordinates of the upper left sampling point of the luma coding block.
  • FIG. 6 is a schematic flowchart of a decoding method according to an embodiment of the present invention.
  • FIG. 7 is a diagram showing an example of an encoding method of an embodiment of the present invention.
  • FIG. 8 is a diagram showing an example of an encoding method of an embodiment of the present invention.
  • FIG. 9 is a schematic block diagram of an encoder in accordance with an embodiment of the present invention.
  • FIG. 10 is a schematic block diagram of a decoder in accordance with an embodiment of the present invention.
  • FIG. 11 is a schematic block diagram of an encoder in accordance with an embodiment of the present invention.
  • FIG. 12 is a schematic block diagram of a decoder in accordance with an embodiment of the present invention.
  • the texture camera and the depth camera are arranged horizontally.
  • in the embodiment of the present invention, the calculation of the depth value information corresponding to each sampling point in the luma coding block is simplified.
  • the encoding method according to an embodiment of the present invention is described in detail below with reference to the figures.
  • FIG. 4 is a schematic flowchart of an encoding method according to an embodiment of the present invention.
  • the method of Figure 4 includes:
  • the resolution of the texture map is 168 × 1024, and the size of each luma coding block is 32 × 32.
  • the coordinates of the upper left luminance sampling point of block 1 are (0, 0)
  • the coordinates of the upper left luminance sampling point of block 2 are (32, 0)
  • the coordinates of the upper left luminance sampling point of block 3 are (0, 32)
  • the coordinates of the upper left luminance sampling point of block 4 are (32, 32).
  • the disparity vector may be vector information that, using information of other views already coded at the current time, locates the position of the corresponding block in another coded view for the current prediction unit (PU) or coding unit (CU).
  • step 440 may include: determining, according to the disparity vector, an offset between the X coordinate of the upper left luma sampling point of the luma coding block and the X coordinate of the target sampling point in the depth map; and determining the X coordinate of the target sampling point in the depth map according to the X coordinate of the upper left luma sampling point of the luma coding block and the offset.
  • the X coordinate of the target sampling point in the depth map may be clipped according to either of two definitions: the X coordinate is not less than 0 and not greater than the image width of the texture map minus the width of the luma coding block; or the X coordinate is not less than 0 and not greater than the image width of the texture map minus 1.
  • determining, according to the disparity vector, the offset between the X coordinate of the upper left luma sampling point of the luma coding block and the X coordinate of the corresponding sampling point in the depth map may include: adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding down to obtain the offset.
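This "add 2, divide by 4, round down" rule can be written as an arithmetic right shift, which rounds toward negative infinity and therefore matches the rule even for negative disparity components. A sketch; `dv_offset` is a hypothetical name:

```python
def dv_offset(dv_x):
    # floor((dv_x + 2) / 4); Python's >> on a negative int rounds
    # toward negative infinity, matching "round down" for dv_x < 0.
    return (dv_x + 2) >> 2
```

For example, a horizontal component of -250 gives an offset of -62, and -156 gives -39.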
  • the X coordinate and the Y coordinate of the target sampling point of the depth map may be used to indicate the position of the target sampling point relative to the upper left sampling point of the depth map.
  • taking the X and Y coordinates of the target sampling point in the depth map as the upper left corner point, a block area equal in size to the luma coding block is delimited in the depth map, and the depth value information in this block area is determined as the depth information corresponding to the luma coding block; the depth value information here may include the depth value corresponding to each sampling point in the luma coding block.
  • step 470 may include: generating a binarized partitioning template by comparing the depth value corresponding to each sampling point recorded in the depth value information with a depth threshold, and then partitioning the luma coding block according to the binarized partitioning template. For example, the average of the depth values corresponding to the four corner points of the luma coding block is used as the depth threshold; the depth value corresponding to each sampling point in the luma coding block is then compared with this threshold, sampling points whose depth value is greater than the threshold are recorded as 1, and sampling points whose depth value is smaller than the threshold are recorded as 0, generating a binarized partitioning template composed of 0s and 1s. The sampling points recorded as 0 are grouped into one partition and the sampling points recorded as 1 into another, thereby dividing the luma coding block.
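The template generation described above can be sketched as follows, assuming a square block given as a list of depth-value rows. The function name is hypothetical, and ties at the threshold are treated as 0 here, a detail the text leaves open:

```python
def dbbp_partition_template(depth):
    """Binarized partition template: 1 where a sample's depth value
    exceeds the threshold, 0 otherwise."""
    n = len(depth)
    # Threshold: average of the depth values at the four corner points.
    threshold = (depth[0][0] + depth[0][n - 1] +
                 depth[n - 1][0] + depth[n - 1][n - 1]) / 4
    return [[1 if depth[y][x] > threshold else 0 for x in range(n)]
            for y in range(n)]
```

For a 2 × 2 block with depth values 10, 10, 10, 50, the corner average is 20, so only the last sample is marked 1.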
  • the divided luma coding block then undergoes subsequent encoding operations such as motion compensation, filtering, and combining.
  • in the prior art, the X and Y coordinates of the upper left luma sampling point of the luma coding block and the horizontal and vertical components of the disparity vector are each subjected to clipping and bit-shift operations to obtain the X and Y coordinates of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y coordinates of corresponding sampling points in the texture map and depth map collected from different viewpoints are the same; the operation that derives the Y coordinate of the corresponding sampling point in the reference-view depth map from the Y coordinate of the upper left luma sampling point and the vertical component of the disparity vector is therefore redundant. In the embodiment of the present invention, the Y coordinate of the upper left luma sampling point of the luma coding block is directly determined as the Y coordinate of the corresponding sampling point in the reference-view depth map, omitting that calculation and improving coding efficiency.
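The redundancy argument can be made concrete: when the vertical disparity component is zero (horizontally arranged cameras), the prior-art derivation and the direct copy give the same Y coordinate. A sketch with hypothetical helper names:

```python
def y_prior_art(y_luma, dv_y, pic_height, block_height):
    # Prior art: shift and clip the vertical DV component as well.
    return min(max(y_luma + ((dv_y + 2) >> 2), 0),
               pic_height - block_height)

def y_simplified(y_luma):
    # Embodiment: with dv_y == 0 the Y coordinate is copied directly.
    return y_luma
```

With dv_y = 0 both functions return the same value, so the clip-and-shift work on the vertical component can be skipped entirely.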
  • step 430 may include determining the disparity vector according to the depth refinement flag.
  • the disparity vector may be a neighboring block disparity vector (NBDV, Neighboring Block Disparity Vector) or a depth-based neighboring block disparity vector (DoNBDV, Depth oriented NBDV).
  • the NBDV is a disparity vector derived from spatial or temporal neighboring blocks coded using motion compensated prediction (MCP);
  • the DoNBDV is a refined disparity vector obtained by using the NBDV to locate the corresponding depth block in the reference view and deriving the disparity vector from that depth block information. The depth refinement flag indicates whether NBDV or DoNBDV is used for the current coding: when the depth refinement flag is 1, the DoNBDV may be determined as the disparity vector; when the flag is 0, the NBDV may be determined as the disparity vector.
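The flag-based selection reduces to a one-line choice. A sketch; the names are hypothetical and the two candidate vectors are passed in already derived:

```python
def select_disparity_vector(depth_refinement_flag, nbdv, donbdv):
    # Flag 0 -> NBDV; flag 1 -> depth-refined DoNBDV.
    return donbdv if depth_refinement_flag == 1 else nbdv
```

With the values from the later FIG. 7 example (flag 1, NBDV (-156, 0), DoNBDV (-250, 0)), DoNBDV is selected.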
  • the encoding method of the embodiment of the present invention has been described above from the perspective of the encoding end; the decoding method of the embodiment of the present invention is described below from the perspective of the decoding end. It should be understood that the steps and operations of the encoding end and the decoding end correspond to each other; to avoid repetition, details are not repeated here.
  • FIG. 6 is a schematic flowchart of a decoding method according to an embodiment of the present invention.
  • the method of Figure 6 includes:
  • in the prior art, the X and Y coordinates of the upper left luma sampling point of the luma decoding block and the horizontal and vertical components of the disparity vector are each subjected to clipping and bit-shift operations to obtain the X and Y coordinates of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y coordinates of corresponding sampling points in the texture map and depth map collected from different viewpoints are the same; deriving the Y coordinate of the corresponding sampling point in the reference-view depth map from the Y coordinate of the upper left luma sampling point and the vertical component of the disparity vector is therefore redundant. In the embodiment of the present invention, the Y coordinate of the upper left luma sampling point of the luma decoding block is directly determined as the Y coordinate of the corresponding sampling point in the reference-view depth map, omitting that calculation and improving decoding efficiency.
  • determining the X coordinate of the target sampling point in the depth map corresponding to the reference view according to the X coordinate of the upper left luma sampling point of the luma decoding block and the disparity vector may include: determining, according to the disparity vector, an offset between the X coordinate of the upper left luma sampling point of the luma decoding block and the X coordinate of the target sampling point in the depth map; and determining the X coordinate of the target sampling point in the depth map according to the X coordinate of the upper left luma sampling point of the luma decoding block and the offset.
  • determining the offset between the X coordinate of the upper left luma sampling point of the luma decoding block and the X coordinate of the target sampling point in the depth map may include: adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding down to obtain the offset.
  • acquiring the disparity vector between the current view and the reference view corresponding to the texture map may include: determining the disparity vector according to the depth refinement flag.
  • determining the disparity vector according to the depth refinement flag may include: when the depth refinement flag is 0, determining the neighboring block disparity vector NBDV as the disparity vector; when the depth refinement flag is 1, determining the depth-based neighboring block disparity vector DoNBDV as the disparity vector.
  • FIG. 7 and FIG. 8 are provided merely to facilitate understanding of the embodiments of the present invention; the embodiments are not limited to the specific numerical values or examples illustrated. A person skilled in the art can make various modifications or changes based on the examples of FIG. 7 and FIG. 8 that remain within the scope of the embodiments of the present invention.
  • FIG. 7 is a flowchart of determining the depth value information corresponding to each pixel in a 16 × 16 luma coding block in the DBBP process. It is assumed that the coordinates of the upper left luma sampling point of the luma coding block are (368, 64), DepthRefinementFlag is 1, DoNBDV is (-250, 0), and NBDV is (-156, 0).
  • since DepthRefinementFlag is 1, DoNBDV is selected as the disparity vector for the coordinate calculation. The offset is ⌊(-250 + 2) / 4⌋ = -62, so the X coordinate of the target sampling point in the depth map is 368 - 62 = 306; the Y coordinate of the target sampling point is directly 64. The coordinates of the target sampling point in the depth map are therefore (306, 64).
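The arithmetic of this example can be checked directly; clipping is omitted here since the result already lies inside the picture:

```python
x_luma, y_luma = 368, 64
dv_x = -250                 # horizontal component of the selected DoNBDV
offset = (dv_x + 2) >> 2    # floor((-250 + 2) / 4) = -62
x_depth = x_luma + offset   # 368 - 62 = 306
y_depth = y_luma            # copied directly, no vertical computation
```

The target sample therefore lands at (306, 64), matching the example.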
  • a 16 × 16 depth block is delimited in the depth map, with the target sampling point as its upper left corner point; since the depth block is the same size as the luma coding block, the sampling points of the two blocks correspond one to one, and the depth value of each luma sampling point in the luma coding block is the depth value of the corresponding sampling point in the depth block.
  • in the example of FIG. 8, the Y coordinate is 96; that is, the coordinates of the upper left pixel of the depth block are (276, 96).
  • a 32 × 32 depth block is delimited in the depth map, with the target sampling point as its upper left corner point. The depth value information in the depth block is determined as the depth value information corresponding to the luma coding block; since the depth block is the same size as the luma coding block, the sampling points of the two blocks correspond one to one, and the depth value of each luma sampling point in the luma coding block is the depth value of the corresponding sampling point in the depth block.
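Cutting the co-located depth block and reading its per-sample depth values can be sketched as follows; the helper is hypothetical and the depth map is represented as a list of rows:

```python
def colocated_depth_block(depth_map, x0, y0, size):
    """Return the size x size depth block whose upper-left corner is the
    target sampling point; its samples correspond one-to-one to the
    sampling points of the luma coding block."""
    return [row[x0:x0 + size] for row in depth_map[y0:y0 + size]]
```

The depth value of the luma sample at (i, j) inside the block is then simply element [j][i] of the returned block.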
  • FIGS. 7 and 8 are specific embodiments described from the perspective of encoding; the embodiments of FIGS. 7 and 8 apply equally to the decoding side.
  • FIG. 9 is a schematic block diagram of an encoder in accordance with an embodiment of the present invention. It should be understood that the encoder 900 of FIG. 9 can implement the various steps in FIG. 4, and to avoid repetition, it will not be described in detail herein. Encoder 900 includes:
  • a first determining unit 910 configured to determine a current luma coding block from the texture map
  • a second determining unit 920, configured to determine the coordinates of the upper left luma sampling point of the luma coding block, the coordinates indicating the position of the upper left luma sampling point of the luma coding block relative to the upper left luma sampling point of the texture map and including an X coordinate and a Y coordinate;
  • An obtaining unit 930 configured to acquire a disparity vector between a current view and a reference view corresponding to the texture map
  • a third determining unit 940 configured to determine an X coordinate of a target sampling point in a depth map corresponding to the reference view according to an X coordinate of an upper left luma sampling point of the luma coding block and the disparity vector, where the target The sampling point is a sampling point in the depth map corresponding to an upper left luminance sampling point of the luma coding block;
  • a fourth determining unit 950 configured to determine a Y coordinate of an upper left luma sampling point of the luma encoding block as a Y coordinate of a target sampling point of the depth map;
  • a fifth determining unit 960, configured to determine, according to the X and Y coordinates of the target sample point in the depth map and the size of the luma coding block, the depth value information corresponding to each sample point in the luma coding block;
  • the block dividing unit 970 is configured to obtain a block division manner of the luma coding block according to the depth value information, and divide the luma coding block.
  • the encoding unit 980 is configured to encode the divided luma coding block.
  • In the prior art, when determining the depth value information corresponding to each sample point in a luma coding block, the X and Y coordinates of the upper-left luma sample point of the luma coding block and the horizontal and vertical components of the disparity vector are used in Clip, shift, and other operations to obtain the X and Y coordinates of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y coordinates of corresponding sample points in the texture and depth maps captured from different views should be identical; that is, the operation that derives the Y coordinate of the corresponding sample point in the reference view's depth map from the Y coordinate of the upper-left luma sample point of the luma coding block and the vertical component of the disparity vector is redundant. In this embodiment of the present invention, the Y coordinate of the upper-left luma sample point of the luma coding block is directly determined as the Y coordinate of the corresponding sample point in the reference view's depth map, which omits the computational overhead of calculating that Y coordinate from the Y coordinate of the upper-left luma sample point and the disparity vector, thereby improving coding efficiency.
  • the third determining unit 940 is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the upper-left luma sample point of the luma coding block and the X coordinate of the target sample point in the depth map, and to determine the X coordinate of the target sample point in the depth map according to the X coordinate of the upper-left luma sample point of the luma coding block and the offset.
  • the third determining unit 940 is specifically configured to add 2 to the horizontal component of the disparity vector, divide the sum by 4, and round the result down to obtain the offset.
  • the acquiring unit 930 is specifically configured to determine the disparity vector according to the depth refinement identifier bit.
  • the acquiring unit 930 is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
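  • The behavior of the acquiring unit 930 and the third determining unit 940 described above can be sketched as follows. This is a minimal, non-authoritative illustration; the function names are invented for clarity and are not part of the specification:

```python
def acquire_disparity_vector(depth_refinement_flag, nbdv, donbdv):
    """Unit 930: DepthRefinementFlag selects between NBDV and DoNBDV."""
    return donbdv if depth_refinement_flag == 1 else nbdv

def derive_target_x(luma_x, dv_horizontal):
    """Unit 940: the offset is floor((dv_x + 2) / 4).
    The arithmetic right shift ">> 2" gives the floor for negative
    disparity components as well."""
    return luma_x + ((dv_horizontal + 2) >> 2)
```

  • For example, with an upper-left luma X coordinate of 368 and a horizontal disparity component of -250, `derive_target_x` yields 306.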
  • Figure 10 is a schematic block diagram of a decoder in accordance with an embodiment of the present invention. It should be understood that the decoder 1000 of FIG. 10 can implement the various steps in FIG. 6, and to avoid repetition, it will not be described in detail herein.
  • the decoder 1000 can include:
  • a first determining unit 1010 configured to determine a current luma decoding block from the texture map
  • a second determining unit 1020, configured to determine the coordinates of the upper-left luma sample point of the luma decoding block, where the coordinates indicate the position of the upper-left luma sample point of the luma decoding block relative to the upper-left luma sample point of the texture map, and include an X coordinate and a Y coordinate;
  • An obtaining unit 1030 configured to acquire a disparity vector between a current view point and a reference view point corresponding to the texture map
  • a third determining unit 1040, configured to determine, according to the X coordinate of the upper-left luma sample point of the luma decoding block and the disparity vector, the X coordinate of a target sample point in the depth map corresponding to the reference view, where the target sample point is the sample point in the depth map corresponding to the upper-left luma sample point of the luma decoding block;
  • a fourth determining unit 1050 configured to determine a Y coordinate of an upper left luma sampling point of the luma decoding block as a Y coordinate of a target sampling point of the depth map;
  • a fifth determining unit 1060, configured to determine, according to the X and Y coordinates of the target sample point in the depth map and the size of the luma decoding block, the depth value information corresponding to each sample point in the luma decoding block;
  • the block dividing unit 1070 is configured to obtain a block division manner of the luma decoding block according to the depth value information, and divide the luma decoding block.
  • the decoding unit 1080 is configured to decode the divided luma decoding block.
  • In the prior art, when determining the depth value information corresponding to each sample point in a luma decoding block, the X and Y coordinates of the upper-left luma sample point of the luma decoding block and the horizontal and vertical components of the disparity vector are used in Clip, shift, and other operations to obtain the X and Y coordinates of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y coordinates of corresponding sample points in the texture and depth maps captured from different views should be identical; that is, the operation that derives the Y coordinate of the corresponding sample point in the reference view's depth map from the Y coordinate of the upper-left luma sample point of the luma decoding block and the vertical component of the disparity vector is redundant. In this embodiment of the present invention, the Y coordinate of the upper-left luma sample point of the luma decoding block is directly determined as the Y coordinate of the corresponding sample point in the reference view's depth map, which omits the computational overhead of calculating that Y coordinate from the Y coordinate of the upper-left luma sample point and the disparity vector, thereby improving decoding efficiency.
  • the third determining unit 1040 is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the upper-left luma sample point of the luma decoding block and the X coordinate of the target sample point in the depth map, and to determine the X coordinate of the target sample point in the depth map according to the X coordinate of the upper-left luma sample point of the luma decoding block and the offset.
  • the third determining unit 1040 is specifically configured to add 2 to the horizontal component of the disparity vector, divide the sum by 4, and round the result down to obtain the offset.
  • the acquiring unit 1030 is specifically configured to determine the disparity vector according to the depth refinement identifier bit.
  • the acquiring unit 1030 is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
  • FIG. 11 is a schematic block diagram of an encoder in accordance with an embodiment of the present invention. It should be understood that the encoder 1100 of FIG. 11 can implement the various steps in FIG. 4, and to avoid repetition, it will not be described in detail herein. Encoder 1100 includes:
  • a memory 1110, configured to store instructions;
  • a processor 1120, configured to execute the instructions; when the instructions are executed, the processor 1120 is specifically configured to: determine a current luma coding block from a texture map; determine the coordinates of the upper-left luma sample point of the luma coding block, where the coordinates indicate the position of the upper-left luma sample point of the luma coding block relative to the upper-left luma sample point of the texture map and include an X coordinate and a Y coordinate; acquire a disparity vector between the current view corresponding to the texture map and a reference view; determine, according to the X coordinate of the upper-left luma sample point of the luma coding block and the disparity vector, the X coordinate of a target sample point in the depth map corresponding to the reference view, where the target sample point is the sample point in the depth map corresponding to the upper-left luma sample point of the luma coding block; determine the Y coordinate of the upper-left luma sample point of the luma coding block as the Y coordinate of the target sample point of the depth map; determine, according to the X and Y coordinates of the target sample point in the depth map and the size of the luma coding block, the depth value information corresponding to each sample point in the luma coding block; obtain, according to the depth value information, a block partitioning manner of the luma coding block and partition the luma coding block; and encode the partitioned luma coding block.
  • In the prior art, when determining the depth value information corresponding to each sample point in a luma coding block, the X and Y coordinates of the upper-left luma sample point of the luma coding block and the horizontal and vertical components of the disparity vector are used in Clip, shift, and other operations to obtain the X and Y coordinates of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y coordinates of corresponding sample points in the texture and depth maps captured from different views should be identical; that is, the operation that derives the Y coordinate of the corresponding sample point in the reference view's depth map from the Y coordinate of the upper-left luma sample point of the luma coding block and the vertical component of the disparity vector is redundant. In this embodiment of the present invention, the Y coordinate of the upper-left luma sample point of the luma coding block is directly determined as the Y coordinate of the corresponding sample point in the reference view's depth map, which omits the computational overhead of calculating that Y coordinate from the Y coordinate of the upper-left luma sample point and the disparity vector, thereby improving coding efficiency.
  • the processor 1120 is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the upper-left luma sample point of the luma coding block and the X coordinate of the target sample point in the depth map, and to determine the X coordinate of the target sample point in the depth map according to the X coordinate of the upper-left luma sample point of the luma coding block and the offset.
  • the processor 1120 is specifically configured to add 2 to the horizontal component of the disparity vector, divide the sum by 4, and round the result down to obtain the offset.
  • the processor 1120 is specifically configured to determine the disparity vector according to the depth refinement identifier.
  • the processor 1120 is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
  • Figure 12 is a schematic block diagram of a decoder in accordance with an embodiment of the present invention. It should be understood that the decoder 1200 of FIG. 12 can implement the various steps in FIG. 6, and to avoid repetition, it will not be described in detail herein.
  • the decoder 1200 can include:
  • a memory 1210, configured to store instructions;
  • a processor 1220, configured to execute the instructions; when the instructions are executed, the processor 1220 is specifically configured to: determine a current luma decoding block from a texture map; determine the coordinates of the upper-left luma sample point of the luma decoding block, where the coordinates indicate the position of the upper-left luma sample point of the luma decoding block relative to the upper-left luma sample point of the texture map and include an X coordinate and a Y coordinate; acquire a disparity vector between the current view corresponding to the texture map and a reference view; determine, according to the X coordinate of the upper-left luma sample point of the luma decoding block and the disparity vector, the X coordinate of a target sample point in the depth map corresponding to the reference view, where the target sample point is the sample point in the depth map corresponding to the upper-left luma sample point of the luma decoding block; determine the Y coordinate of the upper-left luma sample point of the luma decoding block as the Y coordinate of the target sample point of the depth map; determine, according to the X and Y coordinates of the target sample point in the depth map and the size of the luma decoding block, the depth value information corresponding to each sample point in the luma decoding block; obtain, according to the depth value information, a block partitioning manner of the luma decoding block and partition the luma decoding block; and decode the partitioned luma decoding block.
  • In the prior art, when determining the depth value information corresponding to each sample point in a luma decoding block, the X and Y coordinates of the upper-left luma sample point of the luma decoding block and the horizontal and vertical components of the disparity vector are used in Clip, shift, and other operations to obtain the X and Y coordinates of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y coordinates of corresponding sample points in the texture and depth maps captured from different views should be identical; that is, the operation that derives the Y coordinate of the corresponding sample point in the reference view's depth map from the Y coordinate of the upper-left luma sample point of the luma decoding block and the vertical component of the disparity vector is redundant. In this embodiment of the present invention, the Y coordinate of the upper-left luma sample point of the luma decoding block is directly determined as the Y coordinate of the corresponding sample point in the reference view's depth map, which omits the computational overhead of calculating that Y coordinate from the Y coordinate of the upper-left luma sample point and the disparity vector, thereby improving decoding efficiency.
  • the processor 1220 is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the upper-left luma sample point of the luma decoding block and the X coordinate of the target sample point in the depth map, and to determine the X coordinate of the target sample point in the depth map according to the X coordinate of the upper-left luma sample point of the luma decoding block and the offset.
  • the processor 1220 is specifically configured to add 2 to the horizontal component of the disparity vector, divide the sum by 4, and round the result down to obtain the offset.
  • the processor 1220 is specifically configured to determine the disparity vector according to the depth refinement identifier.
  • the processor 1220 is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
  • In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the device embodiments described above are merely illustrative: the division into units is only a division by logical function, and in actual implementation there may be other manners of division; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms. The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • If the functions are implemented in the form of software functional units and sold or used as standalone products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage media include any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present invention provide an encoding/decoding method and a codec. The encoding method includes: determining a current luma coding block from a texture map; determining the coordinates of the upper-left luma sample point of the luma coding block, where the coordinates indicate the position of the upper-left luma sample point of the luma coding block relative to the upper-left luma sample point of the texture map; acquiring a disparity vector between the current view corresponding to the texture map and a reference view; determining, according to the X coordinate of the upper-left luma sample point of the luma coding block and the disparity vector, the X coordinate of a target sample point in the depth map corresponding to the reference view; determining the Y coordinate of the upper-left luma sample point of the luma coding block as the Y coordinate of the target sample point of the depth map; determining, according to the X and Y coordinates of the target sample point in the depth map and the size of the luma coding block, the depth value information corresponding to each sample point in the luma coding block; obtaining, according to the depth value information, a block partitioning manner of the luma coding block and partitioning the luma coding block; and encoding the partitioned luma coding block. The encoding method of the embodiments of the present invention can improve encoding efficiency.

Description

Encoding/Decoding Method and Codec
Technical Field
Embodiments of the present invention relate to the field of video image encoding and decoding, and more specifically, to an encoding/decoding method and a codec.
Background
In video encoding and decoding frameworks, a hybrid coding structure is commonly used for encoding and decoding video sequences. The encoding side of the hybrid coding structure typically includes a prediction module, a transform module, a quantization module, and an entropy coding module; the decoding side typically includes an entropy decoding module, an inverse quantization module, an inverse transform module, and a prediction compensation module. The combination of these encoding and decoding modules can effectively remove redundant information from a video sequence and ensure that the coded pictures of the video sequence are obtained at the decoding side.
In video encoding and decoding frameworks, the pictures of a video sequence are usually divided into image blocks for coding. A picture is divided into a number of image blocks, which are encoded and decoded using the modules described above.
Among these modules, the prediction module is used at the encoding side to obtain the prediction block information of an image block of a coded picture of the video sequence and thus the residual of the image block; the prediction compensation module is used at the decoding side to obtain the prediction block information of the current decoded image block and then obtain the current decoded image block from the decoded image-block residual. The prediction module usually involves two techniques: intra prediction and inter prediction. Intra prediction uses the spatial pixel information of the current image block to remove its redundant information and obtain a residual; inter prediction uses the pixel information of encoded or decoded pictures neighboring the current picture to remove the redundant information of the current image block and obtain a residual. In inter prediction, a picture neighboring the current picture that is used for inter prediction is called a reference picture.
Both intra prediction and inter prediction involve block partitioning, that is, dividing an image block into more than one partition and then performing intra or inter prediction per partition. Common block partitioning methods include dividing a square image block horizontally or vertically into two rectangular partitions, as shown in A and B of Figure 1, where the square image block is divided into two rectangular partitions along the horizontal and the vertical direction, respectively. In addition, a square image block may also be divided into two non-rectangular partitions, as shown in Figure 2.
Three-dimensional video coding can also use the above block partitioning techniques. In texture-map coding for three-dimensional video, depth-based block partitioning is a commonly used method. Its principle is to generate a binary partitioning template from the depth value information corresponding to each sample point of a luma coding block, and to partition the luma coding block using this binary template. This method is also known as depth-based block partitioning (DBBP).
In the prior art, to partition the current luma coding block using the depth value information corresponding to each of its sample points, that depth value information must first be determined. However, because depth coding for the current view has not yet started, the depth value information corresponding to each sample point of the current luma coding block cannot be obtained directly from the depth map corresponding to the current view's texture map. Therefore, a disparity vector (DV) is needed to obtain the depth value information corresponding to each sample point of the current luma coding block from the depth map corresponding to an already-coded reference view (as shown in Figure 3). Owing to the disparity between views, the process of finding, in the reference view's depth map, the depth value information of each sample point of the current view's luma coding block requires a large number of operations such as Clip and shift operations, which reduces coding efficiency.
Summary
Embodiments of the present invention provide an encoding/decoding method and a codec to improve coding efficiency.
According to a first aspect, an encoding method is provided, including: determining a current luma coding block from a texture map; determining the coordinates of the upper-left luma sample point of the luma coding block, where the coordinates indicate the position of the upper-left luma sample point of the luma coding block relative to the upper-left luma sample point of the texture map, and include an X coordinate and a Y coordinate; acquiring a disparity vector between the current view corresponding to the texture map and a reference view; determining, according to the X coordinate of the upper-left luma sample point of the luma coding block and the disparity vector, the X coordinate of a target sample point in the depth map corresponding to the reference view, where the target sample point is the sample point in the depth map corresponding to the upper-left luma sample point of the luma coding block; determining the Y coordinate of the upper-left luma sample point of the luma coding block as the Y coordinate of the target sample point of the depth map; determining, according to the X and Y coordinates of the target sample point in the depth map and the size of the luma coding block, the depth value information corresponding to each sample point in the luma coding block; obtaining, according to the depth value information, a block partitioning manner of the luma coding block, and partitioning the luma coding block; and encoding the partitioned luma coding block.
With reference to the first aspect, in an implementation of the first aspect, determining the X coordinate of the target sample point in the depth map corresponding to the reference view according to the X coordinate of the upper-left luma sample point of the luma coding block and the disparity vector includes: determining, according to the disparity vector, an offset between the X coordinate of the upper-left luma sample point of the luma coding block and the X coordinate of the target sample point in the depth map; and determining the X coordinate of the target sample point in the depth map according to the X coordinate of the upper-left luma sample point of the luma coding block and the offset.
With reference to the first aspect or any of its foregoing implementations, in another implementation of the first aspect, determining, according to the disparity vector, the offset between the X coordinate of the upper-left luma sample point of the luma coding block and the X coordinate of the target sample point in the depth map includes: adding 2 to the horizontal component of the disparity vector, dividing the sum by 4, and rounding the result down to obtain the offset.
With reference to the first aspect or any of its foregoing implementations, in another implementation of the first aspect, acquiring the disparity vector between the current view corresponding to the texture map and the reference view includes: determining the disparity vector according to a depth refinement flag.
With reference to the first aspect or any of its foregoing implementations, in another implementation of the first aspect, determining the disparity vector according to the depth refinement flag includes: when the depth refinement flag is 0, determining the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determining the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
According to a second aspect, a decoding method is provided, including: determining a current luma decoding block from a texture map; determining the coordinates of the upper-left luma sample point of the luma decoding block, where the coordinates indicate the position of the upper-left luma sample point of the luma decoding block relative to the upper-left luma sample point of the texture map, and include an X coordinate and a Y coordinate; acquiring a disparity vector between the current view corresponding to the texture map and a reference view; determining, according to the X coordinate of the upper-left luma sample point of the luma decoding block and the disparity vector, the X coordinate of a target sample point in the depth map corresponding to the reference view, where the target sample point is the sample point in the depth map corresponding to the upper-left luma sample point of the luma decoding block; determining the Y coordinate of the upper-left luma sample point of the luma decoding block as the Y coordinate of the target sample point of the depth map; determining, according to the X and Y coordinates of the target sample point in the depth map and the size of the luma decoding block, the depth value information corresponding to each sample point in the luma decoding block; obtaining, according to the depth value information, a block partitioning manner of the luma decoding block, and partitioning the luma decoding block; and decoding the partitioned luma decoding block.
With reference to the second aspect, in an implementation of the second aspect, determining the X coordinate of the target sample point in the depth map corresponding to the reference view according to the X coordinate of the upper-left luma sample point of the luma decoding block and the disparity vector includes: determining, according to the disparity vector, an offset between the X coordinate of the upper-left luma sample point of the luma decoding block and the X coordinate of the target sample point in the depth map; and determining the X coordinate of the target sample point in the depth map according to the X coordinate of the upper-left luma sample point of the luma decoding block and the offset.
With reference to the second aspect or any of its foregoing implementations, in another implementation of the second aspect, determining, according to the disparity vector, the offset between the X coordinate of the upper-left luma sample point of the luma decoding block and the X coordinate of the target sample point in the depth map includes: adding 2 to the horizontal component of the disparity vector, dividing the sum by 4, and rounding the result down to obtain the offset.
With reference to the second aspect or any of its foregoing implementations, in another implementation of the second aspect, acquiring the disparity vector between the current view corresponding to the texture map and the reference view includes: determining the disparity vector according to a depth refinement flag.
With reference to the second aspect or any of its foregoing implementations, in another implementation of the second aspect, determining the disparity vector according to the depth refinement flag includes: when the depth refinement flag is 0, determining the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determining the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
According to a third aspect, an encoder is provided, including: a first determining unit, configured to determine a current luma coding block from a texture map; a second determining unit, configured to determine the coordinates of the upper-left luma sample point of the luma coding block, where the coordinates indicate the position of the upper-left luma sample point of the luma coding block relative to the upper-left luma sample point of the texture map and include an X coordinate and a Y coordinate; an acquiring unit, configured to acquire a disparity vector between the current view corresponding to the texture map and a reference view; a third determining unit, configured to determine, according to the X coordinate of the upper-left luma sample point of the luma coding block and the disparity vector, the X coordinate of a target sample point in the depth map corresponding to the reference view, where the target sample point is the sample point in the depth map corresponding to the upper-left luma sample point of the luma coding block; a fourth determining unit, configured to determine the Y coordinate of the upper-left luma sample point of the luma coding block as the Y coordinate of the target sample point of the depth map; a fifth determining unit, configured to determine, according to the X and Y coordinates of the target sample point in the depth map and the size of the luma coding block, the depth value information corresponding to each sample point in the luma coding block; a block partitioning unit, configured to obtain, according to the depth value information, a block partitioning manner of the luma coding block and to partition the luma coding block; and an encoding unit, configured to encode the partitioned luma coding block.
With reference to the third aspect, in an implementation of the third aspect, the third determining unit is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the upper-left luma sample point of the luma coding block and the X coordinate of the target sample point in the depth map, and to determine the X coordinate of the target sample point in the depth map according to the X coordinate of the upper-left luma sample point of the luma coding block and the offset.
With reference to the third aspect or any of its foregoing implementations, in another implementation of the third aspect, the third determining unit is specifically configured to add 2 to the horizontal component of the disparity vector, divide the sum by 4, and round the result down to obtain the offset.
With reference to the third aspect or any of its foregoing implementations, in another implementation of the third aspect, the acquiring unit is specifically configured to determine the disparity vector according to a depth refinement flag.
With reference to the third aspect or any of its foregoing implementations, in another implementation of the third aspect, the acquiring unit is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
According to a fourth aspect, a decoder is provided, including: a first determining unit, configured to determine a current luma decoding block from a texture map; a second determining unit, configured to determine the coordinates of the upper-left luma sample point of the luma decoding block, where the coordinates indicate the position of the upper-left luma sample point of the luma decoding block relative to the upper-left luma sample point of the texture map and include an X coordinate and a Y coordinate; an acquiring unit, configured to acquire a disparity vector between the current view corresponding to the texture map and a reference view; a third determining unit, configured to determine, according to the X coordinate of the upper-left luma sample point of the luma decoding block and the disparity vector, the X coordinate of a target sample point in the depth map corresponding to the reference view, where the target sample point is the sample point in the depth map corresponding to the upper-left luma sample point of the luma decoding block; a fourth determining unit, configured to determine the Y coordinate of the upper-left luma sample point of the luma decoding block as the Y coordinate of the target sample point of the depth map; a fifth determining unit, configured to determine, according to the X and Y coordinates of the target sample point in the depth map and the size of the luma decoding block, the depth value information corresponding to each sample point in the luma decoding block; a block partitioning unit, configured to obtain, according to the depth value information, a block partitioning manner of the luma decoding block and to partition the luma decoding block; and a decoding unit, configured to decode the partitioned luma decoding block.
With reference to the fourth aspect, in an implementation of the fourth aspect, the third determining unit is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the upper-left luma sample point of the luma decoding block and the X coordinate of the target sample point in the depth map, and to determine the X coordinate of the target sample point in the depth map according to the X coordinate of the upper-left luma sample point of the luma decoding block and the offset.
With reference to the fourth aspect or any of its foregoing implementations, in another implementation of the fourth aspect, the third determining unit is specifically configured to add 2 to the horizontal component of the disparity vector, divide the sum by 4, and round the result down to obtain the offset.
With reference to the fourth aspect or any of its foregoing implementations, in another implementation of the fourth aspect, the acquiring unit is specifically configured to determine the disparity vector according to a depth refinement flag.
With reference to the fourth aspect or any of its foregoing implementations, in another implementation of the fourth aspect, the acquiring unit is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
The embodiments of the present invention directly determine the Y coordinate of the upper-left luma sample point of the luma coding block as the Y coordinate of the corresponding sample point in the reference view's depth map, omitting the computational overhead of calculating the Y coordinate of the corresponding pixel in the depth map from the Y coordinate of the upper-left luma sample point of the luma coding block and the disparity vector, thereby improving coding efficiency.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required by the embodiments are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Figure 1 is an example diagram of a block partitioning manner.
Figure 2 is an example diagram of a block partitioning manner.
Figure 3 is a schematic diagram of the principle of DBBP.
Figure 4 is a schematic flowchart of an encoding method according to an embodiment of the present invention.
Figure 5 is a schematic diagram of the coordinates of the upper-left sample point of a luma coding block.
Figure 6 is a schematic flowchart of a decoding method according to an embodiment of the present invention.
Figure 7 is an example diagram of an encoding method according to an embodiment of the present invention.
Figure 8 is an example diagram of an encoding method according to an embodiment of the present invention.
Figure 9 is a schematic block diagram of an encoder according to an embodiment of the present invention.
Figure 10 is a schematic block diagram of a decoder according to an embodiment of the present invention.
Figure 11 is a schematic block diagram of an encoder according to an embodiment of the present invention.
Figure 12 is a schematic block diagram of a decoder according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Evidently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Currently, in three-dimensional High Efficiency Video Coding (3D-HEVC), the texture camera and the depth camera are arranged horizontally. Exploiting this property, the embodiments of the present invention simplify the calculation of the depth value information corresponding to each sample point in a luma coding block. The encoding method according to an embodiment of the present invention is described in detail below with reference to Figure 4.
Figure 4 is a schematic flowchart of an encoding method according to an embodiment of the present invention. The method of Figure 4 includes:
410. Determine a current luma coding block from a texture map.
420. Determine the coordinates of the upper-left luma sample point of the luma coding block, where the coordinates indicate the position of the upper-left luma sample point of the luma coding block relative to the upper-left luma sample point of the texture map, and include an X coordinate and a Y coordinate.
Taking Figure 5 as an example, the resolution of the texture map is 168×1024, and the size of each luma coding block is 32×32. In Figure 5, the coordinates of the upper-left luma sample point of block 1 are (0, 0), those of block 2 are (32, 0), those of block 3 are (0, 32), and those of block 4 are (32, 32).
430. Acquire a disparity vector between the current view corresponding to the texture map and a reference view.
It should be noted that the depth map corresponding to the reference view has already been coded. The disparity vector may be vector information that uses information already coded at the current moment in other views to locate, for the current prediction unit (PU) or coding unit (CU), the position of the corresponding block in another coded view.
440. Determine, according to the X coordinate of the upper-left luma sample point of the luma coding block and the disparity vector, the X coordinate of a target sample point in the depth map corresponding to the reference view, where the target sample point is the sample point in the depth map corresponding to the upper-left luma sample point of the luma coding block.
Specifically, step 440 may include: determining, according to the disparity vector, an offset between the X coordinate of the upper-left luma sample point of the luma coding block and the X coordinate of the target sample point in the depth map; and determining the X coordinate of the target sample point in the depth map according to the X coordinate of the upper-left luma sample point of the luma coding block and the offset. The X coordinate of the target sample point in the depth map may be constrained in either of the following two ways: it is no less than 0 and no greater than the image width of the texture map minus the width of the luma coding block; or it is no less than 0 and no greater than the image width of the texture map minus 1.
Determining, according to the disparity vector, the offset between the X coordinate of the upper-left luma sample point of the luma coding block and the X coordinate of the corresponding sample point in the depth map may include: adding 2 to the horizontal component of the disparity vector, dividing the sum by 4, and rounding the result down to obtain the offset.
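The coordinate mapping described above can be sketched as follows. This is a minimal, non-authoritative illustration of steps 440 and 450; the function name, the picture-width argument, and the `clip_to_block` switch are illustrative choices, not part of the specification:

```python
def depth_target_coords(x, y, dv_x, pic_width, block_width, clip_to_block=True):
    """Map the upper-left luma sample (x, y) of the current block to the
    target sample in the reference view's depth map.

    The X offset is floor((dv_x + 2) / 4), where dv_x is the horizontal
    disparity component; the Y coordinate is passed through unchanged
    because the texture and depth cameras are arranged horizontally."""
    offset = (dv_x + 2) >> 2           # arithmetic shift == floor division by 4
    target_x = x + offset
    # Either clamping variant from the text: the first keeps the whole
    # block inside the picture, the second only the sample itself.
    upper = pic_width - block_width if clip_to_block else pic_width - 1
    target_x = max(0, min(target_x, upper))
    return target_x, y                 # Y is reused directly (step 450)
```

For instance, assuming a hypothetical picture width of 1680, an upper-left luma sample at (368, 64) with a horizontal disparity of -250 and a 16-wide block maps to the depth-map sample (306, 64).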
450. Determine the Y coordinate of the upper-left luma sample point of the luma coding block as the Y coordinate of the target sample point of the depth map.
Specifically, the X and Y coordinates of the target sample point of the depth map may indicate the position of the target sample point relative to the upper-left sample point of the depth map.
460. Determine, according to the X and Y coordinates of the target sample point in the depth map and the size of the luma coding block, the depth value information corresponding to each sample point in the luma coding block.
Specifically, taking the X and Y coordinates of the target sample point in the depth map as the upper-left corner point, a block region of the same size as the luma coding block may be delimited in the depth map, and the depth value information in this block region may be determined as the depth information corresponding to the luma coding block.
470. Obtain, according to the depth value information, a block partitioning manner of the luma coding block, and partition the luma coding block.
The depth value information here may include the depth value corresponding to each sample point in the luma coding block. Step 470 may include: generating a binary partitioning template by comparing the depth value of each sample point recorded in the depth value information against a depth threshold, and then partitioning the luma coding block according to the binary template. For example, the average of the depth values corresponding to the four corner points of the luma coding block is first taken as the depth threshold; then the relation between the depth value of each sample point in the luma coding block and the threshold is determined: a sample point whose depth value is greater than the threshold is marked 1, and a sample point whose depth value is less than the threshold is marked 0, producing a binary partitioning template of 0s and 1s. The sample points of the luma coding block marked 0 are then assigned to one partition and those marked 1 to the other, thereby partitioning the luma coding block. Reference may be made to the prior art for the details.
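The binary partitioning template of step 470 can be sketched as follows. This is a minimal sketch under the stated example (corner-average threshold, strict greater-than test); the function name and the plain list-of-lists representation are illustrative, and tie-breaking for samples exactly equal to the threshold is not specified by the text:

```python
def dbbp_mask(depth):
    """Build the binary partitioning template for a square luma block.

    `depth` is a square 2-D list of the depth values co-located with the
    luma coding block. The threshold is the mean of the four corner depth
    values; samples strictly above it are marked 1, the rest 0."""
    n = len(depth)
    thr = (depth[0][0] + depth[0][n - 1]
           + depth[n - 1][0] + depth[n - 1][n - 1]) / 4.0
    return [[1 if depth[r][c] > thr else 0 for c in range(n)]
            for r in range(n)]
```

For a toy 2×2 block with depths [[10, 10], [90, 90]], the corner average is 50, so the template is [[0, 0], [1, 1]], splitting the block into an upper and a lower partition.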
480. Encode the partitioned luma coding block.
For example, perform subsequent coding operations such as motion compensation and filtering/merging on the partitioned luma coding block.
In the prior art, when determining the depth value information corresponding to each sample point in a luma coding block, the X and Y coordinates of the upper-left luma sample point of the luma coding block and the horizontal and vertical components of the disparity vector are used in Clip, shift, and other operations to obtain the X and Y coordinates of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y coordinates of corresponding sample points in the texture and depth maps captured from different views should be identical; that is, the operation that derives the Y coordinate of the corresponding sample point in the reference view's depth map from the Y coordinate of the upper-left luma sample point of the luma coding block and the vertical component of the disparity vector is redundant. The embodiments of the present invention directly determine the Y coordinate of the upper-left luma sample point of the luma coding block as the Y coordinate of the corresponding sample point in the reference view's depth map, omitting the computational overhead of that derivation and improving coding efficiency.
Optionally, as an embodiment, step 430 may include: determining the disparity vector according to a depth refinement flag.
It should be noted that the disparity vector may be of two kinds: the neighboring block disparity vector (NBDV) and the depth-oriented neighboring block disparity vector (DoNBDV). Specifically, the NBDV is a disparity vector computed from spatially or temporally neighboring blocks or blocks of motion-compensated prediction (MCP); the DoNBDV is a disparity vector obtained by using the NBDV to fetch the corresponding depth block information of the reference view and converting that depth block information. The depth refinement flag, DepthRefinementFlag, indicates whether NBDV or DoNBDV is used in the current coding.
Specifically, when the depth refinement flag is 1, the DoNBDV may be determined as the disparity vector; when the depth refinement flag is 0, the NBDV may be determined as the disparity vector.
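The flag-driven choice can be sketched in a few lines. This is an illustrative helper, not part of the specification:

```python
def select_disparity_vector(depth_refinement_flag, nbdv, donbdv):
    """DepthRefinementFlag == 1 selects the depth-oriented DoNBDV;
    otherwise the plain NBDV is used as the disparity vector."""
    return donbdv if depth_refinement_flag == 1 else nbdv
```

With the values of the Figure 7 example (NBDV (-156, 0), DoNBDV (-250, 0), flag 1), the DoNBDV (-250, 0) is selected.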
The encoding method of the embodiment of the present invention has been described in detail above from the encoding side with reference to Figure 4; the decoding method of the embodiment of the present invention is described in detail below from the decoding side with reference to Figure 6. It should be understood that the steps and operations at the encoding side and the decoding side correspond to each other; to avoid repetition, details are not repeated here.
Figure 6 is a schematic flowchart of a decoding method according to an embodiment of the present invention. The method of Figure 6 includes:
610. Determine a current luma decoding block from a texture map.
620. Determine the coordinates of the upper-left luma sample point of the luma decoding block, where the coordinates indicate the position of the upper-left luma sample point of the luma decoding block relative to the upper-left luma sample point of the texture map, and include an X coordinate and a Y coordinate.
630. Acquire a disparity vector between the current view corresponding to the texture map and a reference view.
640. Determine, according to the X coordinate of the upper-left luma sample point of the luma decoding block and the disparity vector, the X coordinate of a target sample point in the depth map corresponding to the reference view, where the target sample point is the sample point in the depth map corresponding to the upper-left luma sample point of the luma decoding block.
650. Determine the Y coordinate of the upper-left luma sample point of the luma decoding block as the Y coordinate of the target sample point of the depth map.
660. Determine, according to the X and Y coordinates of the target sample point in the depth map and the size of the luma decoding block, the depth value information corresponding to each sample point in the luma decoding block.
670. Obtain, according to the depth value information, a block partitioning manner of the luma decoding block, and partition the luma decoding block.
680. Decode the partitioned luma decoding block.
In the prior art, when determining the depth value information corresponding to each sample point in a luma decoding block, the X and Y coordinates of the upper-left luma sample point of the luma decoding block and the horizontal and vertical components of the disparity vector are used in Clip, shift, and other operations to obtain the X and Y coordinates of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y coordinates of corresponding sample points in the texture and depth maps captured from different views should be identical; that is, the operation that derives the Y coordinate of the corresponding sample point in the reference view's depth map from the Y coordinate of the upper-left luma sample point of the luma decoding block and the vertical component of the disparity vector is redundant. The embodiments of the present invention directly determine the Y coordinate of the upper-left luma sample point of the luma decoding block as the Y coordinate of the corresponding sample point in the reference view's depth map, omitting the computational overhead of that derivation and improving decoding efficiency.
Optionally, as an embodiment, determining the X coordinate of the target sample point in the depth map corresponding to the reference view according to the X coordinate of the upper-left luma sample point of the luma decoding block and the disparity vector may include: determining, according to the disparity vector, an offset between the X coordinate of the upper-left luma sample point of the luma decoding block and the X coordinate of the target sample point in the depth map; and determining the X coordinate of the target sample point in the depth map according to the X coordinate of the upper-left luma sample point of the luma decoding block and the offset.
Optionally, as an embodiment, determining the offset between the X coordinate of the upper-left luma sample point of the luma decoding block and the X coordinate of the target sample point in the depth map may include: adding 2 to the horizontal component of the disparity vector, dividing the sum by 4, and rounding the result down to obtain the offset.
Optionally, as an embodiment, acquiring the disparity vector between the current view corresponding to the texture map and the reference view may include: determining the disparity vector according to a depth refinement flag.
Optionally, as an embodiment, determining the disparity vector according to the depth refinement flag may include: when the depth refinement flag is 0, determining the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determining the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
Embodiments of the present invention are described in more detail below with reference to specific examples. It should be noted that the examples of Figures 7 and 8 are merely intended to help a person skilled in the art understand the embodiments of the present invention, not to limit them to the specific values or scenarios illustrated. A person skilled in the art can evidently make various equivalent modifications or variations based on the examples of Figures 7 and 8, and such modifications or variations also fall within the scope of the embodiments of the present invention.
Figure 7 describes the flow of determining, in the course of DBBP, the depth value information corresponding to each pixel of a 16×16 luma coding block. Suppose the coordinates of the upper-left luma sample point of the luma coding block are (368, 64), DepthRefinementFlag is 1, the DoNBDV is (-250, 0), and the NBDV is (-156, 0). First, according to DepthRefinementFlag, the DoNBDV is selected as the disparity vector for the coordinate calculation, giving the X coordinate of the target sample point in the depth map corresponding to the reference view as 368 + ((-250 + 2) >> 2) = 306 (>> 2 denotes a right shift by 2 bits, equivalent to division by 4). The Y coordinate of the target sample point in the depth map is 64, so the coordinates of the target sample point in the depth map are (306, 64). Based on the coordinates of the target sample point in the depth map and the size of the luma coding block, a 16×16 depth block is delimited in the depth map with the target sample point as its upper-left corner point. The depth value information in this depth block is determined as the depth value information corresponding to the luma coding block; that is, since the depth block is the same size as the luma coding block, the sample points of the two blocks are in one-to-one correspondence, and the depth value of a luma sample point in the luma coding block is the depth value of the corresponding sample point in the depth block.
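The arithmetic of the Figure 7 example can be checked directly. This sketch only reproduces the stated numbers; note that the arithmetic right shift `>> 2` is well defined (as floor division by 4) for negative disparities:

```python
# Figure 7 example: DoNBDV = (-250, 0), upper-left luma sample at (368, 64).
dv_x = -250
offset = (dv_x + 2) >> 2    # (-248) >> 2 == -62
target_x = 368 + offset     # 306
target_y = 64               # Y is reused unchanged
print(target_x, target_y)   # prints: 306 64
```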
Figure 8 describes the flow of determining, in the course of DBBP, the depth value information corresponding to each pixel of a 32×32 luma coding block. Suppose the coordinates of the upper-left luma sample point of the luma coding block are (320, 96), DepthRefinementFlag is 0, the DoNBDV is (-248, 0), and the NBDV is (-179, 4). First, according to DepthRefinementFlag, the NBDV is selected as the disparity vector for the coordinate calculation, giving the X coordinate of the corresponding sample point in the depth map as 320 + ((-179 + 2) >> 2) = 276, and the Y coordinate of the upper-left pixel of the depth block as 96; that is, the coordinates of the upper-left pixel of the depth block are (276, 96). Based on the coordinates of the corresponding sample point in the depth map and the size of the luma coding block, a 32×32 depth block is delimited in the depth map with the target sample point as its upper-left corner point. The depth value information in this depth block is determined as the depth value information corresponding to the luma coding block; that is, since the depth block is the same size as the luma coding block, the sample points of the two blocks are in one-to-one correspondence, and the depth value of a luma sample point in the luma coding block is the depth value of the corresponding sample point in the depth block.
The encoding and decoding methods of the embodiments of the present invention have been described in detail above with reference to Figures 1 to 8; the codecs of the embodiments of the present invention are described in detail below with reference to Figures 9 to 12.
图9是本发明实施例的编码器的示意性框图。应理解,图9的编码器900能够实现图4中的各个步骤,为避免重复,此处不再详述。编码器900包括:
第一确定单元910,用于从纹理图中确定当前的亮度编码块;
第二确定单元920,用于确定所述亮度编码块的左上亮度采样点的坐标,所述坐标用于指示所述亮度编码块的左上亮度采样点相对于所述纹理图的左上亮度采样点的位置,所述坐标包括X坐标和Y坐标;
获取单元930,用于获取所述纹理图对应的当前视点与参考视点之间的视差矢量;
第三确定单元940,用于根据所述亮度编码块的左上亮度采样点的X坐标和所述视差矢量,确定所述参考视点对应的深度图中的目标采样点的X坐标,其中所述目标采样点为所述深度图中的与所述亮度编码块的左上亮度采样点对应的采样点;
第四确定单元950,用于将所述亮度编码块的左上亮度采样点的Y坐标确定为所述深度图的目标采样点的Y坐标;
第五确定单元960,用于根据所述深度图中的目标采样点的X坐标和Y坐标,以及所述亮度编码块的大小,确定所述亮度编码块中各采样点所对应的深度值信息;
块划分单元970,用于根据所述深度值信息,得到所述亮度编码块的块划分方式,并对所述亮度编码块进行划分;
编码单元980,用于对划分后的所述亮度编码块进行编码。
现有技术中,在确定亮度编码块中各采样点对应的深度值信息时,要分别利用亮度编码块的左上亮度采样点的X坐标和Y坐标与视差矢量的水平和垂直分量进行Clip和移位等运算操作,以得到参考视点对应的深度图中的目标像素点的X坐标和Y坐标。但是,由于纹理摄像机和深度摄像机按照水平方式排列,不同视点采集到的纹理图和深度图中对应采样点的Y坐标应该是相同的,也就是说,利用亮度编码块的左上亮度采样点的Y坐标和视差矢量的垂直分量进行运算,得到参考视点的深度图中的对应采样点的Y坐标 的操作是冗余的,本发明实施例直接将亮度编码块的左上亮度采样点的Y坐标确定为参考视点的深度图中的对应采样点的Y坐标,省略了根据亮度编码块的左上亮度采样点的Y坐标和视差矢量计算深度图中对应像素点的Y坐标这一过程的计算开销,提高了编码的效率。
可选地,作为一个实施例,所述第三确定单元940具体用于根据所述视差矢量,确定所述亮度编码块的左上亮度采样点的X坐标与所述深度图中的目标采样点的X坐标之间的偏移量;根据所述亮度编码块的左上亮度采样点的X坐标和所述偏移量,确定所述深度图中的目标采样点的X坐标。
可选地,作为一个实施例,所述第三确定单元940具体用于将所述视差矢量的水平分量加2后除以4的结果向下取整,得到所述偏移量。
可选地,作为一个实施例,所述获取单元930具体用于根据深度精细化标识位,确定所述视差矢量。
可选地,作为一个实施例,所述获取单元930具体用于当所述深度精细化标识位为0时,将相邻块视差矢量NBDV确定为所述视差矢量;当所述深度精细化标识位为1时,将基于深度的邻块视差矢量DoNBDV确定为所述视差矢量。
图10是本发明实施例的解码器的示意性框图。应理解,图10的解码器1000能够实现图6中的各个步骤,为避免重复,此处不再详述。解码器1000可包括:
第一确定单元1010,用于从纹理图中确定当前的亮度解码块;
第二确定单元1020,用于确定所述亮度解码块的左上亮度采样点的坐标,所述坐标用于指示所述亮度解码块的左上亮度采样点相对于所述纹理图的左上亮度采样点的位置,所述坐标包括X坐标和Y坐标;
获取单元1030,用于获取所述纹理图对应的当前视点与参考视点之间的视差矢量;
第三确定单元1040,用于根据所述亮度解码块的左上亮度采样点的X坐标和所述视差矢量,确定所述参考视点对应的深度图中的目标采样点的X坐标,其中所述目标采样点为所述深度图中的与所述亮度解码块的左上亮度采样点对应的采样点;
第四确定单元1050,用于将所述亮度解码块的左上亮度采样点的Y坐标确定为所述深度图的目标采样点的Y坐标;
第五确定单元1060,用于根据所述深度图中的目标采样点的X坐标和Y坐标,以及所述亮度解码块的大小,确定所述亮度解码块中各采样点所对应的深度值信息;
块划分单元1070,用于根据所述深度值信息,得到所述亮度解码块的块划分方式,并对所述亮度解码块进行划分;
解码单元1080,用于对划分后的所述亮度解码块进行解码。
现有技术中,在确定亮度解码块中各采样点对应的深度值信息时,要分别利用亮度解码块的左上亮度采样点的X坐标和Y坐标与视差矢量的水平和垂直分量进行Clip和移位等运算操作,以得到参考视点对应的深度图中的目标像素点的X坐标和Y坐标。但是,由于纹理摄像机和深度摄像机按照水平方式排列,不同视点采集到的纹理图和深度图中对应采样点的Y坐标应该是相同的,也就是说,利用亮度解码块的左上亮度采样点的Y坐标和视差矢量的垂直分量进行运算,得到参考视点的深度图中的对应采样点的Y坐标的操作是冗余的,本发明实施例直接将亮度解码块的左上亮度采样点的Y坐标确定为参考视点的深度图中的对应采样点的Y坐标,省略了根据亮度解码块的左上亮度采样点的Y坐标和视差矢量计算深度图中对应像素点的Y坐标这一过程的计算开销,提高了解码的效率。
可选地,作为一个实施例,所述第三确定单元1040具体用于根据所述视差矢量,确定所述亮度解码块的左上亮度采样点的X坐标与所述深度图中的目标采样点的X坐标之间的偏移量;根据所述亮度解码块的左上亮度采样点的X坐标和所述偏移量,确定所述深度图中的目标采样点的X坐标。
可选地,作为一个实施例,所述第三确定单元1040具体用于将所述视差矢量的水平分量加2后除以4的结果向下取整,得到所述偏移量。
可选地,作为一个实施例,所述获取单元1030具体用于根据深度精细化标识位,确定所述视差矢量。
可选地,作为一个实施例,所述获取单元1030具体用于当所述深度精细化标识位为0时,将相邻块视差矢量NBDV确定为所述视差矢量;当所述深度精细化标识位为1时,将基于深度的邻块视差矢量DoNBDV确定为所述视差矢量。
图11是本发明实施例的编码器的示意性框图。应理解,图11的编码器1100能够实现图4中的各个步骤,为避免重复,此处不再详述。编码器1100 包括:
存储器1110,用于存储指令;
处理器1120,用于执行指令,当所述指令被执行时,所述处理器1120具体用于从纹理图中确定当前的亮度编码块;确定所述亮度编码块的左上亮度采样点的坐标,所述坐标用于指示所述亮度编码块的左上亮度采样点相对于所述纹理图的左上亮度采样点的位置,所述坐标包括X坐标和Y坐标;获取所述纹理图对应的当前视点与参考视点之间的视差矢量
根据所述亮度编码块的左上亮度采样点的X坐标和所述视差矢量,确定所述参考视点对应的深度图中的目标采样点的X坐标,其中所述目标采样点为所述深度图中的与所述亮度编码块的左上亮度采样点对应的采样点;将所述亮度编码块的左上亮度采样点的Y坐标确定为所述深度图的目标采样点的Y坐标;根据所述深度图中的目标采样点的X坐标和Y坐标,以及所述亮度编码块的大小,确定所述亮度编码块中各采样点所对应的深度值信息;根据所述深度值信息,得到所述亮度编码块的块划分方式,并对所述亮度编码块进行划分;对划分后的所述亮度编码块进行编码。
现有技术中,在确定亮度编码块中各采样点对应的深度值信息时,要分别利用亮度编码块的左上亮度采样点的X坐标和Y坐标与视差矢量的水平和垂直分量进行Clip和移位等运算操作,以得到参考视点对应的深度图中的目标像素点的X坐标和Y坐标。但是,由于纹理摄像机和深度摄像机按照水平方式排列,不同视点采集到的纹理图和深度图中对应采样点的Y坐标应该是相同的,也就是说,利用亮度编码块的左上亮度采样点的Y坐标和视差矢量的垂直分量进行运算,得到参考视点的深度图中的对应采样点的Y坐标的操作是冗余的,本发明实施例直接将亮度编码块的左上亮度采样点的Y坐标确定为参考视点的深度图中的对应采样点的Y坐标,省略了根据亮度编码块的左上亮度采样点的Y坐标和视差矢量计算深度图中对应像素点的Y坐标这一过程的计算开销,提高了编码的效率。
可选地,作为一个实施例,所述处理器1120具体用于根据所述视差矢量,确定所述亮度编码块的左上亮度采样点的X坐标与所述深度图中的目标采样点的X坐标之间的偏移量;根据所述亮度编码块的左上亮度采样点的X坐标和所述偏移量,确定所述深度图中的目标采样点的X坐标。
可选地,作为一个实施例,所述处理器1120具体用于将所述视差矢量 的水平分量加2后除以4的结果向下取整,得到所述偏移量。
可选地,作为一个实施例,所述处理器1120具体用于根据深度精细化标识位,确定所述视差矢量。
可选地,作为一个实施例,所述处理器1120具体用于当所述深度精细化标识位为0时,将相邻块视差矢量NBDV确定为所述视差矢量;当所述深度精细化标识位为1时,将基于深度的邻块视差矢量DoNBDV确定为所述视差矢量。
图12是本发明实施例的解码器的示意性框图。应理解,图12的解码器1200能够实现图6中的各个步骤,为避免重复,此处不再详述。解码器1200可包括:
存储器1210,用于存储指令;
处理器1220,用于执行指令,当所述指令被执行时,所述处理器1220具体用于从纹理图中确定当前的亮度解码块;确定所述亮度解码块的左上亮度采样点的坐标,所述坐标用于指示所述亮度解码块的左上亮度采样点相对于所述纹理图的左上亮度采样点的位置,所述坐标包括X坐标和Y坐标
获取所述纹理图对应的当前视点与参考视点之间的视差矢量;根据所述亮度解码块的左上亮度采样点的X坐标和所述视差矢量,确定所述参考视点对应的深度图中的目标采样点的X坐标,其中所述目标采样点为所述深度图中的与所述亮度解码块的左上亮度采样点对应的采样点;将所述亮度解码块的左上亮度采样点的Y坐标确定为所述深度图的目标采样点的Y坐标;根据所述深度图中的目标采样点的X坐标和Y坐标,以及所述亮度解码块的大小,确定所述亮度解码块中各采样点所对应的深度值信息;根据所述深度值信息,得到所述亮度解码块的块划分方式,并对所述亮度解码块进行划分;对划分后的所述亮度解码块进行解码。
In the prior art, when the depth value information corresponding to each sampling point in a luma decoding block is determined, the X and Y coordinates of the top-left luma sampling point of the luma decoding block are separately combined with the horizontal and vertical components of the disparity vector through operations such as clipping and shifting, to obtain the X and Y coordinates of the target pixel in the depth map corresponding to the reference view. However, because the texture cameras and depth cameras are arranged horizontally, the Y coordinates of corresponding sampling points in texture pictures and depth maps captured from different views should be the same. That is, computing the Y coordinate of the corresponding sampling point in the reference view's depth map from the Y coordinate of the top-left luma sampling point of the luma decoding block and the vertical component of the disparity vector is redundant. In this embodiment of the present invention, the Y coordinate of the top-left luma sampling point of the luma decoding block is directly determined as the Y coordinate of the corresponding sampling point in the reference view's depth map, which eliminates the computational overhead of deriving that Y coordinate from the Y coordinate of the top-left luma sampling point and the disparity vector, thereby improving decoding efficiency.
Optionally, in an embodiment, the processor 1220 is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the top-left luma sampling point of the luma decoding block and the X coordinate of the target sampling point in the depth map, and to determine the X coordinate of the target sampling point in the depth map according to the X coordinate of the top-left luma sampling point of the luma decoding block and the offset.

Optionally, in an embodiment, the processor 1220 is specifically configured to obtain the offset by adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding down.

Optionally, in an embodiment, the processor 1220 is specifically configured to determine the disparity vector according to a depth refinement flag.

Optionally, in an embodiment, the processor 1220 is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector (NBDV) as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring block disparity vector (DoNBDV) as the disparity vector.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.

It can be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described again here.

In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the described apparatus embodiments are merely illustrative. For example, the division into units is merely a division of logical functions; in actual implementation there may be other divisions: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces; the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.

When the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The foregoing descriptions are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (20)

  1. An encoding method, comprising:
    determining a current luma encoding block from a texture picture;
    determining the coordinates of the top-left luma sampling point of the luma encoding block, wherein the coordinates indicate the position of the top-left luma sampling point of the luma encoding block relative to the top-left luma sampling point of the texture picture, and the coordinates comprise an X coordinate and a Y coordinate;
    obtaining a disparity vector between the current view corresponding to the texture picture and a reference view;
    determining, according to the X coordinate of the top-left luma sampling point of the luma encoding block and the disparity vector, the X coordinate of a target sampling point in a depth map corresponding to the reference view, wherein the target sampling point is the sampling point in the depth map corresponding to the top-left luma sampling point of the luma encoding block;
    determining the Y coordinate of the top-left luma sampling point of the luma encoding block as the Y coordinate of the target sampling point in the depth map;
    determining, according to the X and Y coordinates of the target sampling point in the depth map and the size of the luma encoding block, the depth value information corresponding to each sampling point in the luma encoding block;
    obtaining, according to the depth value information, the block partitioning mode of the luma encoding block, and partitioning the luma encoding block; and
    encoding the partitioned luma encoding block.
  2. The method according to claim 1, wherein the determining, according to the X coordinate of the top-left luma sampling point of the luma encoding block and the disparity vector, the X coordinate of the target sampling point in the depth map corresponding to the reference view comprises:
    determining, according to the disparity vector, an offset between the X coordinate of the top-left luma sampling point of the luma encoding block and the X coordinate of the target sampling point in the depth map; and
    determining the X coordinate of the target sampling point in the depth map according to the X coordinate of the top-left luma sampling point of the luma encoding block and the offset.
  3. The method according to claim 2, wherein the determining, according to the disparity vector, the offset between the X coordinate of the top-left luma sampling point of the luma encoding block and the X coordinate of the target sampling point in the depth map comprises:
    adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding down, to obtain the offset.
  4. The method according to any one of claims 1 to 3, wherein the obtaining the disparity vector between the current view corresponding to the texture picture and the reference view comprises:
    determining the disparity vector according to a depth refinement flag.
  5. The method according to claim 4, wherein the determining the disparity vector according to the depth refinement flag comprises:
    when the depth refinement flag is 0, determining a neighboring block disparity vector (NBDV) as the disparity vector; and
    when the depth refinement flag is 1, determining a depth-oriented neighboring block disparity vector (DoNBDV) as the disparity vector.
  6. A decoding method, comprising:
    determining a current luma decoding block from a texture picture;
    determining the coordinates of the top-left luma sampling point of the luma decoding block, wherein the coordinates indicate the position of the top-left luma sampling point of the luma decoding block relative to the top-left luma sampling point of the texture picture, and the coordinates comprise an X coordinate and a Y coordinate;
    obtaining a disparity vector between the current view corresponding to the texture picture and a reference view;
    determining, according to the X coordinate of the top-left luma sampling point of the luma decoding block and the disparity vector, the X coordinate of a target sampling point in a depth map corresponding to the reference view, wherein the target sampling point is the sampling point in the depth map corresponding to the top-left luma sampling point of the luma decoding block;
    determining the Y coordinate of the top-left luma sampling point of the luma decoding block as the Y coordinate of the target sampling point in the depth map;
    determining, according to the X and Y coordinates of the target sampling point in the depth map and the size of the luma decoding block, the depth value information corresponding to each sampling point in the luma decoding block;
    obtaining, according to the depth value information, the block partitioning mode of the luma decoding block, and partitioning the luma decoding block; and
    decoding the partitioned luma decoding block.
  7. The method according to claim 6, wherein the determining, according to the X coordinate of the top-left luma sampling point of the luma decoding block and the disparity vector, the X coordinate of the target sampling point in the depth map corresponding to the reference view comprises:
    determining, according to the disparity vector, an offset between the X coordinate of the top-left luma sampling point of the luma decoding block and the X coordinate of the target sampling point in the depth map; and
    determining the X coordinate of the target sampling point in the depth map according to the X coordinate of the top-left luma sampling point of the luma decoding block and the offset.
  8. The method according to claim 7, wherein the determining, according to the disparity vector, the offset between the X coordinate of the top-left luma sampling point of the luma decoding block and the X coordinate of the target sampling point in the depth map comprises:
    adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding down, to obtain the offset.
  9. The method according to any one of claims 6 to 8, wherein the obtaining the disparity vector between the current view corresponding to the texture picture and the reference view comprises:
    determining the disparity vector according to a depth refinement flag.
  10. The method according to claim 9, wherein the determining the disparity vector according to the depth refinement flag comprises:
    when the depth refinement flag is 0, determining a neighboring block disparity vector (NBDV) as the disparity vector; and
    when the depth refinement flag is 1, determining a depth-oriented neighboring block disparity vector (DoNBDV) as the disparity vector.
  11. An encoder, comprising:
    a first determining unit, configured to determine a current luma encoding block from a texture picture;
    a second determining unit, configured to determine the coordinates of the top-left luma sampling point of the luma encoding block, wherein the coordinates indicate the position of the top-left luma sampling point of the luma encoding block relative to the top-left luma sampling point of the texture picture, and the coordinates comprise an X coordinate and a Y coordinate;
    an obtaining unit, configured to obtain a disparity vector between the current view corresponding to the texture picture and a reference view;
    a third determining unit, configured to determine, according to the X coordinate of the top-left luma sampling point of the luma encoding block and the disparity vector, the X coordinate of a target sampling point in a depth map corresponding to the reference view, wherein the target sampling point is the sampling point in the depth map corresponding to the top-left luma sampling point of the luma encoding block;
    a fourth determining unit, configured to determine the Y coordinate of the top-left luma sampling point of the luma encoding block as the Y coordinate of the target sampling point in the depth map;
    a fifth determining unit, configured to determine, according to the X and Y coordinates of the target sampling point in the depth map and the size of the luma encoding block, the depth value information corresponding to each sampling point in the luma encoding block;
    a block partitioning unit, configured to obtain, according to the depth value information, the block partitioning mode of the luma encoding block, and to partition the luma encoding block; and
    an encoding unit, configured to encode the partitioned luma encoding block.
  12. The encoder according to claim 11, wherein the third determining unit is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the top-left luma sampling point of the luma encoding block and the X coordinate of the target sampling point in the depth map, and to determine the X coordinate of the target sampling point in the depth map according to the X coordinate of the top-left luma sampling point of the luma encoding block and the offset.
  13. The encoder according to claim 12, wherein the third determining unit is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round down, to obtain the offset.
  14. The encoder according to any one of claims 11 to 13, wherein the obtaining unit is specifically configured to determine the disparity vector according to a depth refinement flag.
  15. The encoder according to claim 14, wherein the obtaining unit is specifically configured to: when the depth refinement flag is 0, determine a neighboring block disparity vector (NBDV) as the disparity vector; and when the depth refinement flag is 1, determine a depth-oriented neighboring block disparity vector (DoNBDV) as the disparity vector.
  16. A decoder, comprising:
    a first determining unit, configured to determine a current luma decoding block from a texture picture;
    a second determining unit, configured to determine the coordinates of the top-left luma sampling point of the luma decoding block, wherein the coordinates indicate the position of the top-left luma sampling point of the luma decoding block relative to the top-left luma sampling point of the texture picture, and the coordinates comprise an X coordinate and a Y coordinate;
    an obtaining unit, configured to obtain a disparity vector between the current view corresponding to the texture picture and a reference view;
    a third determining unit, configured to determine, according to the X coordinate of the top-left luma sampling point of the luma decoding block and the disparity vector, the X coordinate of a target sampling point in a depth map corresponding to the reference view, wherein the target sampling point is the sampling point in the depth map corresponding to the top-left luma sampling point of the luma decoding block;
    a fourth determining unit, configured to determine the Y coordinate of the top-left luma sampling point of the luma decoding block as the Y coordinate of the target sampling point in the depth map;
    a fifth determining unit, configured to determine, according to the X and Y coordinates of the target sampling point in the depth map and the size of the luma decoding block, the depth value information corresponding to each sampling point in the luma decoding block;
    a block partitioning unit, configured to obtain, according to the depth value information, the block partitioning mode of the luma decoding block, and to partition the luma decoding block; and
    a decoding unit, configured to decode the partitioned luma decoding block.
  17. The decoder according to claim 16, wherein the third determining unit is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the top-left luma sampling point of the luma decoding block and the X coordinate of the target sampling point in the depth map, and to determine the X coordinate of the target sampling point in the depth map according to the X coordinate of the top-left luma sampling point of the luma decoding block and the offset.
  18. The decoder according to claim 17, wherein the third determining unit is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round down, to obtain the offset.
  19. The decoder according to any one of claims 16 to 18, wherein the obtaining unit is specifically configured to determine the disparity vector according to a depth refinement flag.
  20. The decoder according to claim 19, wherein the obtaining unit is specifically configured to: when the depth refinement flag is 0, determine a neighboring block disparity vector (NBDV) as the disparity vector; and when the depth refinement flag is 1, determine a depth-oriented neighboring block disparity vector (DoNBDV) as the disparity vector.
PCT/CN2015/072301 2015-02-05 2015-02-05 Encoding/decoding method and codec WO2016123774A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201580000242.3A CN104995915B (zh) 2015-02-05 2015-02-05 Encoding/decoding method and codec
PCT/CN2015/072301 WO2016123774A1 (zh) 2015-02-05 2015-02-05 Encoding/decoding method and codec

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/072301 WO2016123774A1 (zh) 2015-02-05 2015-02-05 Encoding/decoding method and codec

Publications (1)

Publication Number Publication Date
WO2016123774A1 true WO2016123774A1 (zh) 2016-08-11

Family

ID=54306447

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/072301 WO2016123774A1 (zh) 2015-02-05 2015-02-05 编解码方法和编解码器

Country Status (2)

Country Link
CN (1) CN104995915B (zh)
WO (1) WO2016123774A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665902B (zh) 2017-03-31 2020-12-01 Huawei Technologies Co., Ltd. Encoding/decoding method and codec for multi-channel signals

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101483770A (zh) * 2008-01-08 2009-07-15 Huawei Technologies Co., Ltd. Encoding and decoding method and device
US20130342644A1 (en) * 2012-06-20 2013-12-26 Nokia Corporation Apparatus, a method and a computer program for video coding and decoding
CN103916652A (zh) * 2013-01-09 2014-07-09 Zhejiang University Disparity vector generation method and device
CN104104933A (zh) * 2013-04-12 2014-10-15 Zhejiang University Disparity vector generation method and device


Also Published As

Publication number Publication date
CN104995915B (zh) 2018-11-30
CN104995915A (zh) 2015-10-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15880730

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15880730

Country of ref document: EP

Kind code of ref document: A1