WO2020200052A1 - Video encoding method, video decoding method, and related device - Google Patents

Video encoding method, video decoding method, and related device

Info

Publication number
WO2020200052A1
WO2020200052A1 (application PCT/CN2020/081486 / CN2020081486W)
Authority
WO
WIPO (PCT)
Prior art keywords: block, sub, side length, length, coding unit
Prior art date
Application number
PCT/CN2020/081486
Other languages
English (en)
French (fr)
Inventor
余全合
郑建铧
魏紫威
王力强
牛犇犇
何芸
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
清华大学 (Tsinghua University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. and Tsinghua University
Publication of WO2020200052A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/124: Quantisation
    • H04N19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/176: the coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/186: the coding unit being a colour or a chrominance component
    • H04N19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/91: Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N19/96: Tree coding, e.g. quad-tree coding

Definitions

  • This application relates to the field of multimedia, and in particular to a video encoding method, video decoding method and related equipment.
  • Video encoding and decoding are common operations for processing video data.
  • Video coding and decoding are usually performed in units of coding units (CUs).
  • A CU is obtained by dividing an image in the video into blocks.
  • Before a frame of image is encoded and decoded, it may be divided into a plurality of contiguous, non-overlapping largest coding units (LCUs) according to the video coding standard.
  • The video codec standard specifies that an LCU is a 128*128 pixel area. Since the number of horizontal and/or vertical pixels of a frame may not be an integer multiple of 128, the LCUs in the last row and/or the rightmost column (also called boundary image blocks in the art) each contain both a pixel area and a blank area. As shown in Figure 1, the shaded part of a boundary image block indicates the pixel area and the unshaded part indicates the blank area. A boundary image block therefore needs to be further divided to obtain CUs.
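As a rough illustration of the boundary-block geometry described above, the sketch below (not from the patent; the function name and frame sizes are illustrative) computes the pixel-area extent inside the boundary LCUs:

```python
# Illustrative sketch: for a frame whose dimensions are not integer
# multiples of the 128-pixel LCU size, the boundary LCUs contain a pixel
# area of (frame dimension mod 128) along the corresponding direction.
LCU = 128

def boundary_pixel_area(frame_w: int, frame_h: int):
    """Return (w, h): pixel-area width inside the rightmost-column LCUs
    and pixel-area height inside the last-row LCUs (0 means that frame
    dimension is an exact multiple of 128, so no boundary LCU there)."""
    return frame_w % LCU, frame_h % LCU

# A 1920x1080 frame: 1920 = 15 * 128, so no right-column boundary LCUs,
# while 1080 mod 128 = 56, so each last-row LCU holds a 128x56 pixel area.
```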
  • Existing image-block division methods include the quadtree (QT) division method and the binary tree (BT) division method. When the QT method, the BT method, or a combination of the two is used, dividing a boundary image block to obtain CUs requires a relatively large number of divisions, which makes the division algorithm relatively complex.
  • The present application provides a video encoding method, a video decoding method, and related equipment, which can solve the problem of a large number of divisions in the process of dividing a boundary image block to obtain a CU.
  • The present application provides a video decoding method. The method includes: detecting whether the ratio of the side length of the first sub-side of the current boundary image block of the current video frame to the side length of the first side is less than or equal to a first threshold, where the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area in the current boundary image block, both the first side and the first sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first threshold is a value greater than 0 and less than 1; when the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block, the first block including the pixel area; and, when the area of the first block is equal to the area of the pixel area, using the first block as a coding unit and obtaining the reconstruction block of the coding unit according to the coding information of the coding unit, or continuing to divide the first block to obtain at least two coding units and obtaining the reconstruction blocks of the at least two coding units according to their coding information.
  • The current boundary image block is divided in a direction perpendicular to the first side to obtain a first block and a second block, where the first block includes the pixel area.
  • the first side is the side of the current boundary image block
  • the first sub-side is the side of the pixel area in the current boundary image block
  • both the first side and the first sub side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • In this way, the pixel area in the boundary image block is divided into a single sub-block, which reduces the number of divisions in the process of dividing the boundary image block to obtain CUs and, in turn, reduces the complexity of the division algorithm.
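The decision described above can be sketched as follows (a minimal illustration assuming integer side lengths; the function name and return convention are not from the patent, and the split is placed exactly at the pixel-area edge, which corresponds to the case where the first block's area equals the pixel area):

```python
# Hedged sketch of the threshold check: 'sub_len' is the first sub-side
# length (pixel area), 'side_len' the first side length (boundary block),
# both measured perpendicular to the frame boundary.

def split_boundary_block(sub_len: int, side_len: int, threshold: float):
    """If sub_len/side_len <= threshold, split once perpendicular to the
    first side so the first block covers exactly the pixel area."""
    ratio = sub_len / side_len
    if ratio <= threshold:
        first_block = sub_len              # contains the whole pixel area
        second_block = side_len - sub_len  # blank area
        return first_block, second_block
    return None  # ratio above threshold: handled by the other branch

# e.g. a 128-long side with a 32-long pixel area and threshold 0.25:
# one split yields a 32 block (all pixels) and a 96 blank block.
```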
  • The method further includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block, where the first block is a non-boundary image block and the second block is a boundary image block that includes a sub-pixel area, the sub-pixel area being a partial area of the pixel area; and continuing to divide the second block to obtain a coding unit and obtaining the reconstruction block of the coding unit according to the coding information of the coding unit.
  • In this case, the pixel area of the boundary image block is distributed across the resulting blocks: the decoder can divide the current boundary image block in the direction perpendicular to the first side to obtain the first block and the second block.
  • the first side is the side of the current boundary image block
  • the first sub-side is the side of the pixel area in the current boundary image block
  • both the first side and the first sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • Optionally, continuing to divide the first block includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold, dividing the first block in a direction perpendicular to the first side to obtain a first sub-block and a second sub-block, where the first sub-block is a non-boundary image block and the second sub-block includes a sub-pixel area, the sub-pixel area being a partial area of the pixel area.
  • In this way, the decoder can divide the boundary image block according to the relationship between the side length of the pixel area and the side length of the boundary image block in which the pixel area is located, so that the number of divisions in the process of dividing the LCU to obtain CUs is relatively small, and the complexity of the division algorithm can be reduced.
  • The side length described in this embodiment is, among the sides of the pixel area and of the boundary image block, the length of the side that is perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • Optionally, continuing to divide the first block includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the second threshold, performing binary tree (BT) division on the first block in a direction perpendicular to the first side, or performing quadtree (QT) division on the first block.
  • In this way, the related equipment can maintain multiple derived-tree (DT) division modes, so that when dividing a boundary image block or the lower-right-corner image block, a division mode can be selected from the multiple DT division modes; as a result, the number of divisions in the process of obtaining CUs from the boundary image block and/or the lower-right-corner image block is relatively small.
  • Optionally, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold and less than or equal to the first threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block.
  • Optionally, the first threshold is 0.25 and the second threshold is zero. In this case, when the ratio of the side length of the first sub-side to the side length of the first side is greater than zero and less than or equal to 0.25, the current boundary image block is divided in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 1:3, and both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • Optionally, the first threshold is 0.5 and the second threshold is 0.25. In this case, when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.25 and less than or equal to 0.5, the current boundary image block is divided in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 1:1, and both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • Optionally, the first threshold is 0.75 and the second threshold is 0.5. In this case, when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.5 and less than or equal to 0.75, the current boundary image block is divided in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 3:1, and both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
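The three threshold pairs above can be summarized in one sketch (thresholds and split ratios as stated; the function name is illustrative, and side lengths are assumed to be multiples of 4):

```python
# Hedged sketch: map the pixel-area ratio to the DT split of a boundary
# block into two blocks whose side lengths follow 1:3, 1:1 or 3:1.

def dt_split(sub_len: int, side_len: int):
    """Return (first, second) side lengths of the two blocks, with the
    first block containing the whole pixel area, or None when another
    branch of the method applies."""
    r = sub_len / side_len
    if 0 < r <= 0.25:
        first = side_len // 4        # 1:3 split
    elif 0.25 < r <= 0.5:
        first = side_len // 2        # 1:1 split
    elif 0.5 < r <= 0.75:
        first = 3 * side_len // 4    # 3:1 split
    else:
        return None                  # other branches apply
    return first, side_len - first

# e.g. side_len = 128:
# sub_len = 20 -> (32, 96), sub_len = 60 -> (64, 64), sub_len = 90 -> (96, 32)
```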
  • Optionally, when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold and less than or equal to a third threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block.
  • Optionally, the first threshold is 0.75 and the third threshold is 1. In this case, when the ratio of the side length of the first sub-side to the side length of the first side is greater than or equal to 0.75 and less than 1, the current boundary image block is divided in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 3:1, and both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • In this way, the decoder can maintain multiple DT division modes, so that when dividing a boundary image block or the lower-right-corner image block, a division mode can be selected from the multiple DT division modes, and the boundary image block and/or the lower-right-corner image block is divided relatively few times until CUs are obtained.
  • The present application further provides a video decoding method. The method includes: detecting whether the ratio of the side length of the first sub-side of the current boundary image block of the current video frame to the side length of the first side lies in a preset interval, where the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area in the current boundary image block, and both the first side and the first sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located; when the ratio of the side length of the first sub-side to the side length of the first side lies in the preset interval, dividing the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block; and using whichever of the first block and the second block is a non-boundary block as the coding unit and obtaining the reconstruction block of the coding unit according to the coding information of the coding unit, or continuing to divide the first block or the second block to obtain coding units.
  • the first block may include all pixel areas in the current boundary image block, while the second block does not include any pixel areas.
  • the decoding device can perform subsequent operations in the manner described in the first aspect.
  • Alternatively, the first block may be a non-boundary image block and the second block a boundary image block; the pixel area included in the second block is then a part of the pixel area of the current boundary image block.
  • The decoding device may use the first block as a coding unit and obtain the reconstruction block of the coding unit according to the coding information of the coding unit, or continue to divide the first block to obtain at least two coding units and obtain the reconstruction blocks of the at least two coding units according to their coding information.
  • The decoding device may continue to divide the second block to obtain coding units.
  • the first side is the side of the current boundary image block
  • the first sub-side is the side of the pixel area in the current boundary image block
  • both the first side and the first sub side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • When the video decoding device performs block division, it is not limited to the existing BT and/or QT division methods, so that the number of divisions can be reduced in the process of dividing the boundary image block to obtain coding units, which in turn reduces the complexity of the division algorithm.
  • Optionally, the preset interval is the range greater than the second threshold and less than the first threshold.
  • Optionally, the first threshold is 0.25 and the second threshold is zero. When the ratio of the side length of the first sub-side to the side length of the first side lies in the preset interval, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than zero and less than 0.25, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 1:3, both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first block includes the pixel area.
  • Optionally, the first threshold is 0.5 and the second threshold is 0.25. When the ratio of the side length of the first sub-side to the side length of the first side lies in the preset interval, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.25 and less than 0.5, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 1:1, both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first block includes the pixel area.
  • Optionally, continuing to divide the first block or the second block includes: performing binary tree division on the first block in a direction perpendicular to the first side, or performing quadtree division on the first block.
  • Optionally, the first threshold is 0.75 and the second threshold is 0.5. When the ratio of the side length of the first sub-side to the side length of the first side lies in the preset interval, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.5 and less than 0.75, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block. Continuing to divide the first block or the second block then includes: dividing the first block in a direction perpendicular to the first side to obtain a first sub-block and a second sub-block, where the side length of the second sub-side of the first sub-block and the side length of the third sub-side of the second sub-block satisfy 2:1, both the second sub-side and the third sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first sub-block is a non-boundary image block.
  • Optionally, continuing to divide the first block or the second block includes: performing binary tree division on the first block in a direction perpendicular to the first side, or performing quadtree division on the first block.
  • Optionally, the first threshold is 1 and the second threshold is 0.75. When the ratio of the side length of the first sub-side to the side length of the first side lies in the preset interval, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.75 and less than 1, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 3:1, both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first block is a non-boundary block.
  • Optionally, the first threshold is 1 and the second threshold is 0.5. When the ratio of the side length of the first sub-side to the side length of the first side lies in the preset interval, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.5 and less than 1, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 3:1, and both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
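A combined sketch of the preset-interval cases that state an outer split ratio (open-interval bounds as worded above; names are illustrative; the 0.5 to 0.75 case, which continues with a 2:1 sub-division of the first block rather than a stated outer split, is deliberately omitted):

```python
# Hedged sketch: interval membership check for the preset-interval aspect,
# returning the first block's side length for the cases with stated splits.
from fractions import Fraction

SPLITS = [  # (lo, hi, first-block share of the side length)
    (Fraction(0), Fraction(1, 4), Fraction(1, 4)),    # 1:3 split
    (Fraction(1, 4), Fraction(1, 2), Fraction(1, 2)), # 1:1 split
    (Fraction(3, 4), Fraction(1), Fraction(3, 4)),    # 3:1 split
]

def first_block_len(sub_len: int, side_len: int):
    """Strictly open intervals, matching the 'greater than ... and less
    than ...' wording; None when no stated case applies."""
    r = Fraction(sub_len, side_len)
    for lo, hi, share in SPLITS:
        if lo < r < hi:
            return int(side_len * share)
    return None
```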
  • The present application further provides a video decoding method. The method includes: determining that the ratio of the side length of the first sub-side of the lower-right-corner image block of the current video frame to the side length of the first side is less than or equal to a preset threshold, and that the ratio of the side length of the second sub-side of the lower-right-corner image block to the side length of the second side is greater than the preset threshold, where the first side includes the first sub-side, the second side includes the second sub-side, the first side is perpendicular to the second side, and the first sub-side and the second sub-side are sides of the pixel area in the lower-right-corner image block; dividing the lower-right-corner image block with a QT-derived division mode to obtain a first block, a second block, and a third block, where the first block includes the first sub-pixel area of the pixel area and is located in the upper-left corner of the lower-right-corner image block, the second block includes the second sub-pixel area of the pixel area, the area of the first block and the area of the second block are each one quarter of the area of the lower-right-corner image block, the area of the third block is one half of the area of the lower-right-corner image block, and the first sub-pixel area and the second sub-pixel area form the pixel area; continuing to divide the second block to obtain the coding unit corresponding to the second block and obtaining the reconstruction block of that coding unit according to its coding information; and, when the area of the first block is equal to the area of the first sub-pixel area, using the first block as a coding unit and obtaining the reconstruction block of the coding unit according to the coding information of the coding unit, or continuing to divide the first block to obtain coding units.
  • the video decoding device can divide the corresponding image block according to the DT division method, the BT division method, or the QT division method, based on the relationship between the side length of the pixel area in the lower right corner image block and the side length of the image block, thereby reducing the number of divisions in the process of obtaining CUs from boundary image blocks and, further, reducing the complexity of the division algorithm.
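The area relations of the QT-derived corner division above can be sketched as follows; the exact placement of the half-area block (here, on the right) is an illustrative assumption, since only the area relations and the position of the first block are fixed by the description:

```python
def qt_derived_corner_division(width, height):
    """QT-derived division of a lower-right-corner image block.

    Returns three (x, y, w, h) blocks: the first (upper-left) block and
    the second block each cover one quarter of the block's area, and
    the third block covers the remaining half.
    """
    half_w, half_h = width // 2, height // 2
    first = (0, 0, half_w, half_h)        # upper-left quarter
    second = (0, half_h, half_w, half_h)  # quarter below it
    third = (half_w, 0, half_w, height)   # remaining half
    return first, second, third
```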
  • the preset threshold is 0.5.
  • continuing to divide the first block includes: detecting whether the ratio of the side length of the third sub-side of the first block to the side length of the third side is less than or equal to the first threshold, where the third sub-side is a side of the first sub-pixel area and the third side is the side of the first block perpendicular to the boundary of the current video frame corresponding to the first block; when the ratio of the side length of the third sub-side to the side length of the third side is less than or equal to the first threshold, dividing the first block in a direction perpendicular to the third side to obtain a first sub-block and a second sub-block, the first sub-block including the first sub-pixel area; and, when the area of the first sub-block is equal to the area of the first sub-pixel area, using the first sub-block as a coding unit and obtaining the reconstruction block of the coding unit according to the coding information of the coding unit.
  • the present application also provides a video encoding method, the method comprising: detecting whether the ratio of the side length of the first sub-side of the current boundary image block of the current video frame to the side length of the first side is less than or equal to a first threshold, wherein the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area, the pixel area is the pixel area in the current boundary image block, the first side and the first sub-side are both perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first threshold is a value greater than 0 and less than 1; when the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block, the first block including the pixel area; when the area of the first block is equal to the area of the pixel area, using the first block as a coding unit and obtaining the coding information of the coding unit according to the image information of the coding unit, or continuing to divide the first block to obtain the coding unit and obtaining the coding information of the coding unit according to the image information of the coding unit; or, when the area of the first block is greater than the area of the pixel area, continuing to divide the first block to obtain a coding unit and obtaining the coding information of the coding unit according to the image information of the coding unit.
  • the current boundary image block is divided in the direction perpendicular to the first side to obtain a first block and a second block, and the first block includes the pixel area, where the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area in the current boundary image block, and both the first side and the first sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • the pixel area in the boundary image block is divided into one sub-block, thereby reducing the number of divisions in the process of obtaining CUs from the boundary image block and, further, reducing the complexity of the division algorithm.
  • the method further includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block, the first block being a non-boundary image block and the second block being a boundary image block that includes a first sub-pixel area, where the first sub-pixel area is a partial area of the pixel area; and continuing to divide the second block to obtain a coding unit and obtaining the coding information of the coding unit according to the image information of the coding unit.
  • the encoder can maintain multiple DT division modes, so that when dividing a boundary image block or the lower right corner image block, a division mode can be selected from the multiple DT division modes, allowing the boundary image block and/or the lower right corner image block to be divided relatively few times until the CUs are obtained.
  • the present application also provides a video encoding method, the method comprising: detecting whether the ratio of the side length of the first sub-side of the current boundary image block of the current video frame to the side length of the first side is within a preset interval, wherein the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area in the current boundary image block, and the first side and the first sub-side are both perpendicular to the boundary of the current video frame where the current boundary image block is located; when the ratio of the side length of the first sub-side to the side length of the first side is in the preset interval, dividing the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block; and using whichever of the first block and the second block is a non-boundary block as a coding unit and obtaining the coding information of the coding unit according to the image information of the coding unit, or continuing to divide the block that is a boundary block.
  • this application also provides a video encoding method, the method comprising: determining that the ratio of the side length of the first sub-side of the lower right corner image block of the current video frame to the side length of the first side is less than or equal to a preset threshold, and that the ratio of the side length of the second sub-side of the lower right corner image block to the side length of the second side is greater than the preset threshold, where the first side includes the first sub-side, the second side includes the second sub-side, the first side is perpendicular to the second side, the first sub-side and the second sub-side are sides of a pixel area, and the pixel area is the pixel area in the lower right corner image block; dividing the lower right corner image block in a QT-derived division mode to obtain a first block, a second block, and a third block; continuing to divide the second block to obtain the coding unit corresponding to the second block, and obtaining, according to the image information, the coding information of the coding unit corresponding to the second block; and, when the area of the first block is equal to the area of the first sub-pixel area, using the first block as a coding unit.
  • the video encoding device can divide the corresponding image block according to the DT division method, the BT division method, or the QT division method, based on the relationship between the side length of the pixel area in the lower right corner image block and the side length of the image block, thereby reducing the number of divisions in the process of obtaining CUs from boundary image blocks and, further, reducing the complexity of the division algorithm.
  • the preset threshold is 0.5.
  • continuing to divide the first block includes: detecting whether the ratio of the side length of the third sub-side of the first block to the side length of the third side is less than or equal to the first threshold, where the third sub-side is a side of the first sub-pixel area and the third side is the side of the first block perpendicular to the boundary of the current video frame corresponding to the first block; when the ratio of the side length of the third sub-side to the side length of the third side is less than or equal to the first threshold, dividing the first block in a direction perpendicular to the third side to obtain a first sub-block and a second sub-block, the first sub-block including the first sub-pixel area; and, when the area of the first sub-block is equal to the area of the first sub-pixel area, using the first sub-block as a coding unit and obtaining the coding information of the first sub-block according to the image information of the first sub-block.
  • the present application provides a video decoding device that has a function of realizing the behavior of the video decoding device in the foregoing method.
  • the function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the structure of the video decoding device includes a processor and a transceiver, where the transceiver is configured to receive image data from and send image data to the video encoding device, and the processor is configured to support the video decoding device in executing the corresponding functions in the above method.
  • the video decoding device may further include a memory, which is configured to be coupled with the processor and stores necessary program instructions and data of the video decoding device.
  • this application provides a video encoding device, which has a function of realizing the behavior of the video encoding device in the foregoing method.
  • the function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the structure of the video encoding device includes a processor and a transceiver, where the transceiver is configured to receive image data from and send image data to the video decoding device, and the processor is configured to support the video encoding device in performing the corresponding functions in the above method.
  • the video encoding device may further include a memory, which is configured to be coupled with the processor and stores the program instructions and data necessary for the video encoding device.
  • the present application also provides a chip, the chip includes a processor and an interface, the interface is coupled with the processor, and the interface is used to communicate with modules other than the chip.
  • the processor is used to execute computer programs or instructions to implement the video decoding method in the first aspect, the second aspect, the third aspect, or any possible design of the first, second, or third aspect.
  • the present application also provides a chip, the chip includes a processor and an interface, the interface is coupled with the processor, and the interface is used to communicate with modules other than the chip.
  • the processor is used to execute computer programs or instructions to implement the video encoding method in the third aspect, the fourth aspect, the fifth aspect, or any possible design of the third or fourth aspect.
  • this application provides a computer-readable storage medium with instructions stored therein which, when run on a computer, cause the computer to execute the method in the first aspect, the second aspect, the third aspect, the fourth aspect, the fifth aspect, the sixth aspect, or any possible design of the first, second, third, fourth, or sixth aspect.
  • the technical solution of the present application can detect whether the ratio of the side length of the first sub-side of the current boundary image block of the current video frame to the side length of the first side is less than or equal to the first threshold; when the ratio is less than or equal to the first threshold, the current boundary image block is divided in a direction perpendicular to the first side to obtain the first block and the second block, where the first block includes the pixel area. Further, when the area of the first block is equal to the area of the pixel area, the first block is used as the coding unit, or the first block is further divided to obtain the coding unit.
  • the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area in the current boundary image block, and both the first side and the first sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • FIG. 1 is a schematic diagram of the LCU provided by this application.
  • FIG. 2 is a schematic diagram of the division corresponding to the QT division method provided in this application;
  • FIG. 3A is a schematic diagram of a division of an implementation corresponding to the BT division method provided by the present application.
  • FIG. 3B is a schematic diagram of another embodiment corresponding to the BT division method provided by the present application.
  • FIG. 4A is an exemplary schematic diagram of dividing boundary image blocks using a QT division method provided by the present application.
  • FIG. 4B is an exemplary schematic diagram of dividing boundary image blocks using the BT division method provided by the present application.
  • FIG. 5A is a schematic diagram of an exemplary structure of a video encoding and decoding system 10 for implementing the encoding method and decoding method of the present application;
  • FIG. 5B is a schematic diagram of an exemplary structure of a video decoding system 40 for implementing the encoding method and decoding method of the present application;
  • FIG. 5C is a schematic diagram of an exemplary structure of an encoder 20 for implementing the encoding method of the present application.
  • FIG. 5D is a schematic diagram of an exemplary structure of a decoder 30 for implementing the decoding method of the present application
  • FIG. 6A is a first exemplary schematic diagram of a boundary pixel block provided by the present application.
  • FIG. 6B is a second exemplary schematic diagram of the boundary pixel block provided by the present application.
  • FIG. 6C is a schematic diagram of the lower right pixel block provided by this application.
  • FIG. 7A is an exemplary method flowchart of the video decoding method 100 provided by the present application.
  • FIG. 7B is an exemplary method flowchart of a video decoding method 200 provided by the present application.
  • FIG. 7C is an exemplary method flowchart of the video decoding method 300 provided by the present application.
  • FIG. 8A is an exemplary method flowchart of a video encoding method 400 provided by this application.
  • FIG. 8B is an exemplary method flowchart of a video encoding method 500 provided in this application.
  • FIG. 8C is an exemplary method flowchart of a video encoding method 600 provided in this application.
  • FIG. 9 is an exemplary block diagram of the division mode provided by the present application.
  • FIG. 10 is a schematic diagram of an exemplary division mode of the DT division method provided by the present application.
  • FIG. 11A-1 is a schematic diagram of the boundary image block 111 provided by this application.
  • FIG. 11A-2 is a schematic diagram of a boundary image block 1111 provided by this application.
  • FIG. 11B is a schematic diagram of the boundary image block 112 provided by the present application.
  • FIG. 11C is a schematic diagram of the boundary image block 113 provided by the present application.
  • FIG. 11D is a schematic diagram of the boundary image block 114 provided by the present application.
  • FIG. 12 is a schematic diagram of the boundary image block 121 provided by the present application.
  • FIG. 13A-1 is a schematic diagram of a first implementation manner of the image block 131 in the lower right corner provided by this application;
  • FIG. 13A-2 is a schematic diagram of a second implementation manner of the image block 131 in the lower right corner provided by this application;
  • FIG. 13B is a schematic diagram of the image block 132 in the lower right corner provided by this application.
  • FIG. 14A is a schematic structural diagram of a video decoding device 1400 provided by the present application.
  • FIG. 14B is a schematic structural diagram of a video decoding device 1410 provided by this application.
  • FIG. 15A is a schematic structural diagram of a video decoding device 1500 provided by the present application.
  • FIG. 15B is a schematic structural diagram of a video decoding device 1510 provided by this application.
  • in describing the division of an image block, the terms "first", "second", and the like should not be limiting; these terms are only used to distinguish multiple different blocks.
  • similarly, "first", "second", etc. may be used in the same way to describe other types of objects, which is not repeated here.
  • plural means two or more.
  • Video can be understood as several frames of images (in the art, a frame may also be described as an image) played in a certain order and at a certain frame rate.
  • Video data contains a lot of redundant information such as spatial redundancy, temporal redundancy, visual redundancy, information entropy redundancy, structural redundancy, knowledge redundancy, and importance redundancy.
  • Video encoding is essentially the process of performing encoding operations on each frame of image in the video to obtain the encoding information of each frame of image.
  • Video encoding is performed on the source side.
  • Video decoding is the process of reconstructing each frame of image according to the encoding information of each frame of image.
  • Video decoding is performed on the destination side.
  • the combination of the encoding part and the decoding part is also called codec (encoding and decoding).
  • Video coding and decoding can operate according to a video coding and decoding standard (for example, the high efficiency video coding and decoding H.265 standard), and can comply with the high efficiency video coding standard (HEVC) test model.
  • the video codec can operate according to other proprietary or industry standards, including ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), including its scalable video codec and multi-view video codec extensions. It should be understood that the technology of this application is not limited to any specific codec standard or technology.
  • Both encoding and decoding use coding unit (CU) as a unit.
  • the encoding may be to divide the image into a CU, and then encode pixel data in the CU to obtain the encoding information of the CU.
  • Decoding may be to divide the image to obtain a CU, and then reconstruct the CU according to the coding information corresponding to the CU to obtain a reconstructed block of the CU.
  • CU-related technologies are described below.
  • the image can be divided into a grid of coding tree blocks.
  • the coding tree block may be referred to as a "tree block", a "largest coding unit" (LCU), or a "coding tree unit".
  • the coding tree block can also be continuously divided into multiple CUs, and each CU can also be continuously divided into smaller CUs.
  • the video encoder may recursively perform quadtree (QT) division or binary tree (BT) division on the pixel area associated with the coding tree block. It is understandable that QT division and BT division are division methods applicable to any image block, and the use of the QT division method and the BT division method is not limited to the division of CUs.
  • the QT division method and the BT division method are introduced below in conjunction with the drawings.
  • the solid line block 01 illustrated in FIG. 2 can be regarded as the image block 01.
  • the quadtree division means that the image block 01 is divided into four blocks of the same size at a time.
  • the same size means that the length and width are the same, and both are half of the size before division.
  • the four blocks are shown in Fig. 2 as block 011, block 012, block 013 and block 014.
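The QT division described above can be sketched as a simple coordinate computation; representing a block as an (x, y, w, h) tuple is an illustrative assumption:

```python
def qt_split(block):
    """One quadtree division: split (x, y, w, h) into four sub-blocks
    of equal size, each with half the width and half the height."""
    x, y, w, h = block
    hw, hh = w // 2, h // 2
    return [(x, y, hw, hh), (x + hw, y, hw, hh),
            (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]

# Splitting a 64x64 image block yields four 32x32 sub-blocks
# (corresponding to blocks 011-014 in FIG. 2).
print(qt_split((0, 0, 64, 64)))
```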
  • the binary tree division method is to divide an image block into two blocks of the same size at a time.
  • the video encoder may horizontally divide the image block 02 into two blocks of the same size, one above the other, at a time.
  • the two blocks are, for example, block 021 and block 022 shown in FIG. 3A.
  • the video encoder may vertically divide the image block 02 into two blocks with the same size on the left and right at a time.
  • the two blocks are, for example, block 023 and block 024 shown in FIG. 3B.
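Both BT orientations can likewise be sketched, using the same illustrative (x, y, w, h) block representation:

```python
def bt_split(block, direction):
    """One binary-tree division of (x, y, w, h) into two equal halves.

    'horizontal' gives an upper and a lower block (as in FIG. 3A);
    'vertical' gives a left and a right block (as in FIG. 3B).
    """
    x, y, w, h = block
    if direction == 'horizontal':
        hh = h // 2
        return [(x, y, w, hh), (x, y + hh, w, hh)]
    hw = w // 2
    return [(x, y, hw, h), (x + hw, y, hw, h)]
```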
  • FIG. 4A illustrates an example of dividing boundary image blocks using the QT division method.
  • the boundary image block 40 is subjected to QT division for the first time to obtain a block 41, a block 42, a block 43, and a block 44.
  • the block 41 and the block 43 still contain both pixel areas and blank areas, so they can still be regarded as boundary image blocks, and QT division can be continued on them.
  • block 41 is divided to obtain block 411, block 412, block 413, and block 414.
  • the block 411 does not contain a blank area and can be used as CU 411 to continue the coding and decoding operations.
  • the block 413 does not contain a blank area and can be used as CU 413 to continue the coding and decoding operations.
  • the block 412 and the block 414 do not contain pixel areas and can be discarded. In other embodiments, if the block 411 or the block 413 still contains a blank area, or the block 412 or the block 414 still contains a pixel area, the corresponding block still needs to continue QT division.
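The recursive boundary division illustrated in FIG. 4A can be sketched as follows; the frame size, the minimum block size, and the discard rule here are illustrative assumptions:

```python
def partition_boundary_qt(block, frame_w, frame_h, min_size=8):
    """Recursively QT-divide a boundary block (x, y, w, h).

    Blocks with no pixels inside the frame are discarded; blocks fully
    inside the frame are kept as CUs; other blocks are split again.
    """
    x, y, w, h = block
    if x >= frame_w or y >= frame_h:
        return []                                 # blank area only: discard
    if x + w <= frame_w and y + h <= frame_h:
        return [block]                            # pixel area only: a CU
    if w <= min_size and h <= min_size:
        return [block]                            # cannot split further
    hw, hh = w // 2, h // 2
    cus = []
    for sub in [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]:
        cus.extend(partition_boundary_qt(sub, frame_w, frame_h, min_size))
    return cus

# A 64x64 block whose right 16 columns fall outside a 48-pixel-wide frame
# is resolved into two 32x32 CUs and four 16x16 CUs.
print(partition_boundary_qt((0, 0, 64, 64), frame_w=48, frame_h=64))
```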
  • FIG. 4B illustrates an example of dividing boundary image blocks using the BT division method.
  • the boundary image block 40 is divided into blocks 45 and 46 by performing BT division.
  • the block 45 still contains both the pixel area and the blank area. Therefore, the block 45 continues to be BT divided to obtain the block 451 and the block 452.
  • the block 451 does not contain a blank area and can be used as CU 451 to continue the coding and decoding operations.
  • the block 452 does not contain a pixel area and can be discarded.
  • in other embodiments, if the block 451 still contains a blank area, or the block 452 still contains a pixel area, the corresponding block still needs to continue BT division.
  • the QT division method and the BT division method each offer only a single way of dividing a block. Using the QT division method and/or the BT division method to divide a boundary image block until CUs are obtained requires multiple divisions, resulting in relatively high complexity of the division algorithm.
  • to this end, this application provides a video encoding method, a video decoding method, and related devices, in which, when the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, the current boundary image block is divided in a direction perpendicular to the first side to obtain a first block and a second block, and the first block includes the pixel area, where the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area in the current boundary image block, and both the first side and the first sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • the technical solution of the present application divides the pixel area in the boundary image block into one sub-block according to the relationship between the side length of the pixel area and the side length of the boundary image block, thereby reducing the number of divisions in the process of obtaining CUs from boundary image blocks and, further, reducing the complexity of the division algorithm.
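A minimal sketch of this decision follows. Only the 0.5 threshold and the 3:1 split are fixed by the text above; the exact mode table below is an illustrative assumption:

```python
def first_block_side(block_side, pixel_side):
    """Choose the side length (perpendicular to the frame boundary) of
    the first block so that it covers the pixel area in one division.

    block_side is the first side of the boundary block; pixel_side is
    the first sub-side of its pixel area.
    """
    ratio = pixel_side / block_side
    if not 0 < ratio < 1:
        raise ValueError("expected a genuine boundary block")
    if ratio <= 0.25:
        return block_side // 4       # 3:1 split, keep the shorter part
    if ratio <= 0.5:
        return block_side // 2       # BT-style halving
    return 3 * block_side // 4       # 3:1 split, keep the longer part

# For a 64-pixel first side, pixel areas of 16, 32, and 48 pixels are
# each covered by a single division.
print(first_block_side(64, 16), first_block_side(64, 32), first_block_side(64, 48))
```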
  • FIG. 5A exemplarily shows a schematic block diagram of the video encoding and decoding system 10 applied in this application.
  • the video encoding and decoding system 10 may include a source device 12 and a destination device 14.
  • the source device 12 generates encoded video data. Therefore, the source device 12 may be referred to as a video encoding device.
  • the destination device 14 can decode the encoded video data generated by the source device 12, and therefore, the destination device 14 can be referred to as a video decoding device.
  • Various implementations of source device 12, destination device 14, or both may include one or more processors and memory coupled to the one or more processors.
  • the memory may include, but is not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures accessible by a computer, as described herein.
  • the source device 12 and the destination device 14 may include various devices, including desktop computers, mobile computing devices, notebook (for example, laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, on-board computers, wireless communication equipment, or the like.
  • the source device 12 and the destination device 14 may communicate with each other via a link 13, and the destination device 14 may receive encoded video data from the source device 12 via the link 13.
  • Link 13 may include one or more media or devices capable of moving encoded video data from source device 12 to destination device 14.
  • link 13 may include one or more communication media that enable source device 12 to transmit encoded video data directly to destination device 14 in real time.
  • the source device 12 may modulate the encoded video data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated video data to the destination device 14.
  • the one or more communication media may include wireless and/or wired communication media, such as a radio frequency (RF) spectrum or one or more physical transmission lines.
  • the one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (e.g., the Internet).
  • the one or more communication media may include routers, switches, base stations, or other devices that facilitate communication from source device 12 to destination device 14.
  • the source device 12 includes an encoder 20, and optionally, the source device 12 may further include an image source 16, an image preprocessor 18, and a communication interface 22.
  • the encoder 20, the image source 16, the image preprocessor 18, and the communication interface 22 may be hardware components in the source device 12, or may be software programs in the source device 12. They are described as follows:
  • the image source 16 may include or may be any type of image capture device, for example for capturing real-world images, and/or any type of device for generating images or comments (for screen content encoding, some text on the screen is also considered part of the image to be encoded), for example a computer graphics processor for generating computer animation images, or any type of device for acquiring and/or providing real-world images or computer animation images (for example, screen content or virtual reality (VR) images), and/or any combination thereof (for example, augmented reality (AR) images).
  • the image source 16 may be a camera for capturing images or a memory for storing images, and the image source 16 may also include any type (internal or external) interface for storing previously captured or generated images and/or acquiring or receiving images.
  • when the image source 16 is a camera, the image source 16 can be, for example, a local or integrated camera integrated in the source device; when the image source 16 is a memory, the image source 16 can be, for example, a local or integrated memory integrated in the source device.
  • the interface may be, for example, an external interface that receives images from an external video source.
  • the external video source is, for example, an external image capture device, such as a camera, an external memory, or an external image generation device, and the external image generation device is, for example, It is an external computer graphics processor, computer or server.
  • the interface can be any type of interface according to any proprietary or standardized interface protocol, such as a wired or wireless interface, and an optical interface.
  • the image can be regarded as a two-dimensional array or matrix of picture elements.
  • the pixel points in the array can also be called sampling points.
  • the number of sampling points of the array or image in the horizontal and vertical directions (or axis) defines the size and/or resolution of the image.
  • to represent color, three color components are usually used; that is, an image can be represented as, or contain, three sample arrays.
  • in the RGB format, the image includes corresponding red, green, and blue sample arrays.
  • each pixel is usually expressed in a luminance/chrominance format or color space.
  • for example, an image in the YUV format includes the luminance component indicated by Y (which may also be indicated by L) and the two chrominance components indicated by U and V.
  • the luma component Y represents brightness or gray level intensity (for example, the two are the same in a grayscale image), and the two chroma components U and V represent chroma or color information components.
  • an image in the YUV format includes a luminance sample array of luminance sample values (Y), and two chrominance sample arrays of chrominance values (U and V).
  • an image in the RGB format can be converted or transformed to the YUV format, and vice versa; this process is also called color conversion or color transformation. If the image is black and white, the image may only include the luminance sample array.
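The color conversion mentioned above can be sketched as follows; the BT.601 analog matrix used here is one common variant and an illustrative assumption, since this application does not fix a particular conversion matrix:

```python
def rgb_to_yuv(r, g, b):
    """Convert one full-range RGB sample to YUV using the BT.601
    analog conversion matrix (codecs may use others, e.g. BT.709)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

# A gray pixel (r == g == b) has both chrominance components near zero.
print(rgb_to_yuv(128, 128, 128))
```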
  • the image transmitted from the image source 16 to the image preprocessor 18 may also be referred to as original image data 17.
  • the image preprocessor 18 is configured to receive the original image data 17 and perform preprocessing on the original image data 17 to obtain the preprocessed image 19 or the preprocessed image data 19.
  • the preprocessing performed by the image preprocessor 18 may include trimming, color format conversion (for example, conversion from RGB format to YUV format), toning, or denoising.
  • the encoder 20 (or video encoder 20) is used to receive the preprocessed image data 19 and process it using the prediction mode, thereby providing the encoded image data 21 (the structural details of the encoder 20 are further described below based on FIG. 5C).
  • the encoder 20 may be used to execute the embodiments of the various video encoding methods described below to realize the application of the boundary image block division and the lower right corner image block division described in this application on the encoding side.
  • the communication interface 22 can be used to receive the encoded image data 21, and can transmit the encoded image data 21 to the destination device 14 or any other device (such as a memory) via the link 13 for storage or direct reconstruction, so The other device can be any device used for decoding or storage.
  • the communication interface 22 may be used, for example, to encapsulate the encoded image data 21 into a suitable format, such as a data packet, for transmission on the link 13.
  • the destination device 14 includes a decoder 30, and optionally, the destination device 14 may also include a communication interface 28, an image post processor 32, and a display device 34. They are described as follows:
  • the communication interface 28 can be used to receive the encoded image data 21 from the source device 12 or any other source, for example, a storage device, and the storage device is, for example, an encoded image data storage device.
  • the communication interface 28 can be used to transmit or receive the encoded image data 21 via the link 13 between the source device 12 and the destination device 14 or via any type of network.
  • the link 13 is, for example, a direct wired or wireless connection.
  • the type of network is, for example, a wired or wireless network or any combination thereof, or any type of private network and public network, or any combination thereof.
  • the communication interface 28 may be used, for example, to decapsulate the data packet transmitted by the communication interface 22 to obtain the encoded image data 21.
  • Both the communication interface 28 and the communication interface 22 can be configured as one-way or two-way communication interfaces, and can be used, for example, to send and receive messages to establish a connection, and to acknowledge and exchange any other information related to the communication link and/or to data transmission such as the transmission of the encoded image data.
  • the decoder 30 (or video decoder 30) is used to receive the encoded image data 21 and provide decoded image data 31 or a decoded image 31 (the structural details of the decoder 30 will be further described below based on FIG. 5D).
  • the decoder 30 may be used to execute the embodiments of the various video decoding methods described below to implement the boundary image block division and the lower right corner image block division described in this application on the decoding side.
  • the image post-processor 32 is configured to perform post-processing on the decoded image data 31 (also referred to as reconstructed image data) to obtain post-processed image data 33.
  • the post-processing performed by the image post-processor 32 may include color format conversion (for example, conversion from YUV format to RGB format), toning, trimming, resampling, or any other processing; the image post-processor 32 may also be used to transmit the post-processed image data 33 to the display device 34.
  • the display device 34 is used for receiving the post-processed image data 33 to display the image to, for example, a user or a viewer.
  • the display device 34 may be or may include any type of display for presenting reconstructed images, for example, an integrated or external display or monitor.
  • the display may include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, a liquid crystal on silicon (LCoS), Digital light processor (digital light processor, DLP) or any other type of display.
  • although FIG. 5A shows the source device 12 and the destination device 14 as separate devices, a device embodiment may also include both the source device 12 and the destination device 14 or the functionality of both, that is, the source device 12 or its corresponding functionality and the destination device 14 or its corresponding functionality.
  • the same hardware and/or software, separate hardware and/or software, or any combination thereof may be used to implement the source device 12 or its corresponding functionality and the destination device 14 or its corresponding functionality.
  • the source device 12 and the destination device 14 may include any of a variety of devices, including any type of handheld or stationary device, for example, a notebook or laptop computer, mobile phone, smartphone, tablet computer, video camera, desktop computer, set-top box, television, camera, in-vehicle device, display device, digital media player, video game console, video streaming device (such as a content service server or content distribution server), broadcast receiver device, broadcast transmitter device, and so on, and may use no operating system or any type of operating system.
  • Both the encoder 20 and the decoder 30 can be implemented as any of various suitable circuits, for example, one or more microprocessors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), discrete logic, hardware, or any combination thereof.
  • the device can store the software instructions in a suitable non-transitory computer-readable storage medium, and can use one or more processors to execute the instructions in hardware to execute the technology of the present disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) can be regarded as one or more processors.
  • the video encoding and decoding system 10 shown in FIG. 5A is only an example.
  • the technology of the present application can be applied to video coding settings (for example, video encoding or video decoding) that do not necessarily include any data communication between the encoding and decoding devices.
  • the data can be retrieved from local storage, streamed on the network, etc.
  • the video encoding device can encode data and store the data to the memory, and/or the video decoding device can retrieve the data from the memory and decode the data.
  • encoding and decoding are performed by devices that do not communicate with each other but only encode data to the memory and/or retrieve data from the memory and decode the data.
  • FIG. 5B is an explanatory diagram of an example of a video coding system 40 including the encoder 20 of FIG. 5C and/or the decoder 30 of FIG. 5D according to an exemplary embodiment.
  • the video coding system 40 can implement a combination of various technologies of the present application.
  • the video coding system 40 may include an imaging device 41, an encoder 20, a decoder 30 (and/or a video encoder/decoder implemented by the processing unit 46), an antenna 42, one or more processors 43, one or more memories 44, and/or a display device 45.
  • the imaging device 41, the antenna 42, the processing unit 46, the encoder 20, the decoder 30, the processor 43, the memory 44, and/or the display device 45 can communicate with each other.
  • although the encoder 20 and the decoder 30 are used to illustrate the video coding system 40, in different examples the video coding system 40 may include only the encoder 20 or only the decoder 30.
  • antenna 42 may be used to transmit or receive an encoded bitstream of video data.
  • the display device 45 may be used to present video data.
  • the processing unit 46 may include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
  • the video coding system 40 may also include an optional processor 43, and the optional processor 43 may similarly include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
  • the processing unit 46 may be implemented by hardware, such as dedicated video encoding hardware, and the processor 43 may be implemented by general software, an operating system, and the like.
  • the memory 44 may be any type of memory, such as volatile memory (for example, static random access memory (SRAM), dynamic random access memory (DRAM), etc.) or non-volatile memory (for example, flash memory, etc.).
  • the memory 44 may be implemented by cache memory.
  • the processing unit 46 may access the memory 44 (eg, to implement an image buffer).
  • the processing unit 46 may include memory (e.g., cache, etc.) for implementing image buffers and the like.
  • the encoder 20 implemented by logic circuits may include an image buffer (e.g., implemented by the processing unit 46 or the memory 44) and a graphics processing unit (e.g., implemented by the processing unit 46).
  • the graphics processing unit may be communicatively coupled to the image buffer.
  • the graphics processing unit may include the encoder 20 implemented by the processing unit 46 to implement the various modules discussed with reference to FIG. 5C and/or any other encoder systems or subsystems described herein.
  • Logic circuits can be used to perform the various operations discussed herein.
  • the decoder 30 may be implemented by the processing unit 46 in a similar manner to implement the various modules discussed with reference to the decoder 30 of FIG. 5D and/or any other decoder systems or subsystems described herein.
  • the decoder 30 implemented by logic circuits may include an image buffer (implemented, for example, by the processing unit 46 or the memory 44) and a graphics processing unit (implemented, for example, by the processing unit 46).
  • the graphics processing unit may be communicatively coupled to the image buffer.
  • the graphics processing unit may include the decoder 30 implemented by the processing unit 46 to implement the various modules discussed with reference to FIG. 5D and/or any other decoder systems or subsystems described herein.
  • antenna 42 may be used to receive an encoded bitstream of video data.
  • the encoded bitstream may include data, indicators, index values, mode selection data, etc., related to the encoded video frames discussed herein, such as data related to coding partitions (for example, transform coefficients or quantized transform coefficients, (as discussed) optional indicators, and/or data defining the coding partitions).
  • the video coding system 40 may also include a decoder 30 coupled to the antenna 42 and used to decode the encoded bitstream.
  • the display device 45 is used to present video frames.
  • the decoder 30 may be used to perform the reverse process.
  • the decoder 30 can be used to receive and parse such syntax elements, and decode related video data accordingly.
  • the encoder 20 may entropy encode the syntax elements into an encoded video bitstream. In such instances, the decoder 30 can parse such syntax elements and decode related video data accordingly.
  • the decoding method described in this application is mainly used for the decoding process, and this process exists in both the encoder 20 and the decoder 30.
  • FIG. 5C shows a schematic/conceptual block diagram of an example for implementing the encoder 20 of the present application.
  • the encoder 20 includes a residual calculation unit 201, a transform processing unit 202, a quantization unit 203, an inverse quantization unit 204, an inverse transform processing unit 205, a reconstruction unit 206, a buffer 207, a loop filter unit 208, a decoded picture buffer (DPB) 209, a prediction processing unit 210, and an entropy coding unit 211.
  • the prediction processing unit 210 may include an inter prediction unit 2101, an intra prediction unit 2102, and a mode selection unit 2103.
  • the inter prediction unit 2101 may include a motion estimation unit and a motion compensation unit (not shown).
  • the encoder 20 shown in FIG. 5C may also be referred to as a hybrid video encoder or a video encoder according to a hybrid video codec.
  • the residual calculation unit 201, the transform processing unit 202, the quantization unit 203, the prediction processing unit 210, and the entropy encoding unit 211 form the forward signal path of the encoder 20, while, for example, the inverse quantization unit 204, the inverse transform processing unit 205, the reconstruction unit 206, the buffer 207, the loop filter 208, the decoded picture buffer (DPB) 209, and the prediction processing unit 210 form the backward signal path of the encoder, where the backward signal path of the encoder corresponds to the signal path of the decoder (see decoder 30 in FIG. 5D).
  • the encoder 20 receives, for example through an input, an image or an image block of an image, for example, an image in an image sequence that forms a video or a video sequence.
  • the image block can also be called the current image block or the image block to be encoded, and the image can be called the current image or the image to be encoded (especially when distinguishing the current image from other images in video encoding, the other images being, for example, previously encoded and/or decoded images in the same video sequence, i.e. the video sequence that also includes the current image).
  • the embodiment of the encoder 20 may include a segmentation unit (not shown in FIG. 5C) for segmenting the image into a plurality of blocks such as image blocks, usually into a plurality of non-overlapping blocks.
  • the segmentation unit can be used to use the same block size, and the corresponding grid defining the block size, for all images of the video sequence, or to change the block size between images or subsets or groups of images, and to divide each image into the corresponding blocks.
  • the prediction processing unit 210 of the encoder 20 may be used to perform any combination of the aforementioned segmentation techniques.
  • an image block is also or can be regarded as a two-dimensional array or matrix of sampling points with sample values, although its size is smaller than that of the image.
  • the image block may include, for example, one sample array (for example, a luminance array in the case of a black-and-white image), three sample arrays (for example, a luminance array and two chrominance arrays in the case of a color image), or any other number and/or kind of arrays according to the applied color format.
  • the number of sampling points in the horizontal and vertical directions (or axis) of the image block defines the size of the image block.
  • the encoder 20 shown in FIG. 5C is used to encode an image block by block, for example, to perform encoding and prediction on each image block.
  • the residual calculation unit 201 is used to calculate a residual block based on the image block and a prediction block (further details of the prediction block are provided below), for example, by subtracting the sample values of the prediction block from the sample values of the image block sample by sample (pixel by pixel), to obtain the residual block in the sample domain.
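The sample-by-sample subtraction performed by the residual calculation unit 201 can be sketched as follows (a minimal illustration, not the claimed implementation; blocks are represented here as plain lists of rows):

```python
def residual_block(current, prediction):
    """Residual = current block minus prediction block, sample by sample (pixel by pixel)."""
    return [
        [c - p for c, p in zip(cur_row, pred_row)]
        for cur_row, pred_row in zip(current, prediction)
    ]
```

The reconstruction unit 206 later inverts this step by adding the (inverse transformed) residual back to the prediction block.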
  • the transform processing unit 202 is configured to apply a transform such as discrete cosine transform (DCT) or discrete sine transform (DST) on the sample values of the residual block to obtain transform coefficients 207 in the transform domain.
  • the transform coefficient 207 may also be referred to as a transform residual coefficient, and represents a residual block in the transform domain.
  • the transform processing unit 202 may be used to apply an integer approximation of DCT/DST, such as the transforms specified by AVS, AVS2, and AVS3. Compared with the orthogonal DCT transform, such an integer approximation is usually scaled by a factor. In order to maintain the norm of the residual block processed by the forward and inverse transforms, an additional scaling factor is applied as part of the transform process.
  • the scaling factor is usually selected based on certain constraints. For example, the scaling factor is a trade-off between the power of 2 used for the shift operation, the bit depth of the transform coefficient, accuracy, and implementation cost.
  • a specific scaling factor is specified for the inverse transform by, for example, the inverse transform processing unit 205, and correspondingly, a corresponding scaling factor is specified for the forward transform on the encoder 20 side by the transform processing unit 202.
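For illustration, the float reference below implements an orthonormal 1-D DCT-II, the norm-preserving form that the scaling factors discussed above aim to approximate; this function is an assumption for illustration, not the integer transform specified by AVS/AVS2/AVS3.

```python
import math

def dct_ii(x):
    """Orthonormal 1-D DCT-II: with these scale factors the transform preserves the norm."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n)) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out
```

Because this form is orthonormal, the energy (norm) of the residual is preserved, which is exactly the property the additional scaling factors restore for scaled integer approximations.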
  • the quantization unit 203 is used to quantize the transform coefficient 207 by applying scalar quantization or vector quantization, for example, to obtain the quantized transform coefficient 209.
  • the quantized transform coefficient 209 may also be referred to as a quantized residual coefficient 209.
  • the quantization process can reduce the bit depth associated with some or all of the transform coefficients 207. For example, n-bit transform coefficients can be rounded to m-bit transform coefficients during quantization, where n is greater than m.
  • the degree of quantization can be modified by adjusting the quantization parameter (QP). For example, for scalar quantization, different scales can be applied to achieve finer or coarser quantization.
  • a smaller quantization step size corresponds to a finer quantization
  • a larger quantization step size corresponds to a coarser quantization.
  • the appropriate quantization step size can be indicated by a quantization parameter (QP).
  • the quantization parameter may be an index of a predefined set of suitable quantization steps.
  • a smaller quantization parameter can correspond to fine quantization (smaller quantization step size)
  • a larger quantization parameter can correspond to coarse quantization (larger quantization step size)
  • Quantization may include division by the quantization step size, and the corresponding inverse quantization, performed for example by the inverse quantization unit 204, may include multiplication by the quantization step size.
  • Embodiments according to some standards may use quantization parameters to determine the quantization step size.
  • the quantization step size can be calculated based on the quantization parameter using a fixed-point approximation of an equation including division. Additional scaling factors can be introduced for quantization and inverse quantization to restore the norm of the residual block that may be modified due to the scale used in the fixed-point approximation of the equations for the quantization step size and the quantization parameter.
  • the scales of inverse transform and inverse quantization may be combined.
  • a custom quantization table can be used and signaled from the encoder to the decoder, for example, in a bitstream. Quantization is a lossy operation, where the larger the quantization step size, the greater the loss.
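As a sketch of the QP-to-step-size relationship described above, the snippet below uses an HEVC-style rule in which the step size doubles for every increase of 6 in QP; this particular formula is an assumption for illustration and not the scheme claimed here.

```python
def quant_step(qp):
    """Quantization step size doubles for every increase of 6 in QP (HEVC-style rule)."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff, qp):
    """Scalar quantization: division by the step size, then rounding (the lossy part)."""
    return int(round(coeff / quant_step(qp)))

def dequantize(level, qp):
    """Inverse quantization: multiplication by the same step size."""
    return level * quant_step(qp)
```

A larger QP gives a larger step and hence coarser quantization; the original coefficient is recovered only up to the rounding error.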
  • the inverse quantization unit 204 is configured to apply the inverse quantization of the quantization unit 203 to the quantized coefficients to obtain inverse quantized coefficients 211, for example, by applying, based on or using the same quantization step size as the quantization unit 203, the inverse of the quantization scheme applied by the quantization unit 203.
  • the inversely quantized coefficients 211 may also be referred to as inversely quantized residual coefficients 211, which correspond to the transform coefficients 207, although, due to the loss caused by quantization, they are usually not identical to the transform coefficients.
  • the inverse transform processing unit 205 is used to apply the inverse transform of the transform applied by the transform processing unit 202, for example, an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST), to obtain an inverse transform block in the sample domain.
  • the inversely transformed block may also be referred to as an inversely transformed inversely quantized block or an inversely transformed residual block.
  • the reconstruction unit 206 (for example, a summer) is used to add the inverse transform block (i.e., the reconstructed residual block) to the prediction block to obtain a reconstructed block in the sample domain, for example, by adding the sample values of the reconstructed residual block to the sample values of the prediction block.
  • a buffer unit such as the line buffer 207 is used to buffer or store the reconstructed block and the corresponding sample value, for example, for intra prediction.
  • the encoder can be used to use the unfiltered reconstructed block and/or the corresponding sample value stored in the buffer unit to perform any type of estimation and/or prediction, such as intra prediction .
  • an embodiment of the encoder 20 may be configured such that the buffer unit is used not only for storing reconstructed blocks for intra prediction but also for the loop filter unit 208 (not shown in FIG. 5C), and/or such that, for example, the buffer unit and the decoded image buffer unit form one buffer.
  • Other embodiments may be used to use filtered blocks and/or blocks or samples from the decoded image buffer 209 (neither shown in Figure 5C) as input or basis for intra prediction.
  • the loop filter unit 208 (or "loop filter" for short) is used to filter the reconstructed block to obtain a filtered block, so as to smooth pixel transitions or otherwise improve the video quality.
  • the loop filter unit 208 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters, such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter.
  • the loop filter unit 208 is shown as an in-loop filter in FIG. 5C, in other configurations, the loop filter unit 208 may be implemented as a post-loop filter.
  • the filtered block may also be referred to as a filtered reconstructed block.
  • the decoded image buffer 209 may store the reconstructed coded block after the loop filter unit 208 performs a filtering operation on the reconstructed coded block.
  • an embodiment of the encoder 20 may be used to output loop filter parameters (e.g., sample adaptive offset information), for example, output directly or output after entropy encoding by the entropy encoding unit 211 or any other entropy coding unit, so that, for example, the decoder 30 can receive and apply the same loop filter parameters for decoding.
  • the decoded picture buffer (DPB) 209 may be a reference picture memory that stores reference picture data for the encoder 20 to encode video data.
  • the DPB can be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), and resistive RAM (RRAM), or other types of memory devices.
  • the DPB and buffer 207 can be provided by the same memory device or by separate memory devices.
  • a decoded picture buffer (DPB) 209 is used to store filtered blocks.
  • the decoded image buffer 209 may be further used to store other previously filtered blocks of the same current image or of different images, such as previously reconstructed images, for example previously reconstructed and filtered blocks, and may provide complete previously reconstructed, i.e. decoded, images (and corresponding reference blocks and samples) and/or a partially reconstructed current image (and corresponding reference blocks and samples), for example for inter prediction.
  • a decoded picture buffer (DPB) 209 is used to store the reconstructed block.
  • the prediction processing unit 210, also called the block prediction processing unit 210, is used to receive or obtain an image block (the current image block of the current image) and reconstructed image data, such as reference samples of the same (current) image from the buffer 207 and/or reference image data of one or more previously decoded images from the decoded image buffer 209, and to process such data for prediction, that is, to provide a prediction block, which may be an inter prediction block or an intra prediction block.
  • the mode selection unit 2103 may be used to select a prediction mode (for example, an intra or inter prediction mode) and/or a corresponding prediction block used as a prediction block to calculate a residual block and reconstruct a reconstructed block.
  • an embodiment of the mode selection unit 2103 can be used to select a prediction mode (for example, from those supported by the prediction processing unit 210) that provides the best match or the minimum residual (minimum residual means better compression in transmission or storage), or that provides minimal signaling overhead (minimum signaling overhead means better compression in transmission or storage), or that considers or balances both.
  • the mode selection unit 2103 may be used to determine the prediction mode based on rate distortion optimization (RDO), that is, to select the prediction mode that provides the minimum rate-distortion cost, or to select a prediction mode whose associated rate-distortion at least meets the prediction mode selection criterion.
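The rate-distortion selection can be sketched as minimizing the Lagrangian cost J = D + λ·R over the candidate modes; the field names and the single-λ formulation below are assumptions for illustration.

```python
def select_mode(candidates, lam):
    """Return the candidate mode with the minimum rate-distortion cost J = D + lam * R."""
    return min(candidates, key=lambda m: m["distortion"] + lam * m["rate"])
```

A small λ favors low distortion, while a large λ favors low rate (fewer signaled bits).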
  • the prediction processing performed by an example of the encoder 20 (for example, by the prediction processing unit 210) and the mode selection performed (for example, by the mode selection unit 2103) will be explained in detail below.
  • the encoder 20 is used to determine or select the best or optimal prediction mode from a set of (predetermined) prediction modes.
  • the prediction mode set may include, for example, an intra prediction mode and/or an inter prediction mode.
  • the set of intra prediction modes may include 35 different intra prediction modes, for example, non-directional modes such as DC (or mean) mode and planar mode, or directional modes as defined in H.265, or may include 67 different intra prediction modes, for example, non-directional modes such as DC (or mean) mode and planar mode, or directional modes as defined in H.266 under development.
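As an illustrative sketch of the non-directional DC (mean) mode mentioned above, the predicted block is simply filled with the mean of the neighboring reconstructed reference samples; the integer-mean rounding used here is an assumption, not the rounding rule of any particular standard.

```python
def dc_prediction(left, top):
    """DC intra prediction: fill the block with the rounded mean of the left and top neighbors."""
    refs = left + top
    dc = (sum(refs) + len(refs) // 2) // len(refs)  # integer mean with rounding
    return [[dc] * len(top) for _ in range(len(left))]
```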
  • the set of inter prediction modes depends on the available reference images (i.e., for example, the aforementioned at least partially decoded images stored in the DPB) and other inter prediction parameters, for example on whether the entire reference image is used or only a part of the reference image, such as a search window area surrounding the area of the current block, to search for the best matching reference block, and/or, for example, on whether pixel interpolation such as half-pixel and/or quarter-pixel interpolation is applied.
  • the set of inter prediction modes may include, for example, an advanced motion vector prediction (AMVP) mode and a merge mode.
  • the set of inter-frame prediction modes may include the improved AMVP mode based on control points in the present application, and the improved merge mode based on control points.
  • the inter prediction unit 2101 may be used to perform any combination of the inter prediction techniques described below.
  • skip mode and/or direct mode can also be applied in this application.
  • the prediction processing unit 210 may be further configured to divide the image block into smaller block partitions or sub-blocks, for example, by iteratively using the division methods described in this application, and to perform, for example, prediction on each of the block partitions or sub-blocks, where mode selection includes selecting the tree structure of the divided image block and selecting the prediction mode applied to each of the block partitions or sub-blocks.
  • the inter prediction unit 2101 may include a motion estimation (ME) unit (not shown in FIG. 5C) and a motion compensation (MC) unit (not shown in FIG. 5C).
  • the motion estimation unit is used to receive or obtain an image block (the current image block of the current image) and a decoded image, or at least one or more previously reconstructed blocks, for example, reconstructed blocks of one or more other/different previously decoded images, for motion estimation.
  • the video sequence may include the current image and the previously decoded image 31, or in other words, the current image and the previously decoded image 31 may be part of the image sequence forming the video sequence, or form the image sequence.
  • the encoder 20 may be used to select a reference block from a plurality of reference blocks of the same or different images among a plurality of other images, and to provide the motion estimation unit (not shown in FIG. 5C) with the reference image and/or the offset (spatial offset) between the position (X, Y coordinates) of the reference block and the position of the current block as an inter prediction parameter. This offset is also called a motion vector (MV).
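The search for this offset (motion vector) can be sketched as a full block-matching search minimizing the sum of absolute differences (SAD); the full-search strategy and the SAD cost are assumptions for illustration, since practical motion estimation typically uses faster search patterns.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def motion_search(ref, cur_block, cx, cy, search_range):
    """Full search around (cx, cy) in the reference image; returns the best (dx, dy) MV."""
    bh, bw = len(cur_block), len(cur_block[0])
    best = None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y0, x0 = cy + dy, cx + dx
            if y0 < 0 or x0 < 0 or y0 + bh > len(ref) or x0 + bw > len(ref[0]):
                continue  # candidate block falls outside the reference image
            cand = [row[x0:x0 + bw] for row in ref[y0:y0 + bh]]
            cost = sad(cur_block, cand)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best[1], best[2]
```

The motion compensation unit then fetches the prediction block at the position indicated by this vector.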
  • the motion compensation unit is used to obtain inter prediction parameters, and perform inter prediction based on or using the inter prediction parameters to obtain an inter prediction block.
  • the motion compensation performed by the motion compensation unit may include fetching or generating the prediction block based on the motion/block vector determined by motion estimation (possibly performing interpolation to sub-pixel accuracy). Interpolation filtering can generate additional pixel samples from known pixel samples, thereby potentially increasing the number of candidate prediction blocks that can be used to encode an image block.
  • the motion compensation unit can locate the prediction block pointed to by the motion vector in a reference image list.
  • the motion compensation unit may also generate syntax elements associated with the blocks and video slices for use by the decoder 30 when decoding image blocks of the video slices.
  • the aforementioned inter prediction unit 2101 may transmit syntax elements to the entropy encoding unit 211, where the syntax elements include inter prediction parameters (for example, an indication of the inter prediction mode selected for prediction of the current block after traversing multiple inter prediction modes).
  • the inter-frame prediction parameter may not be carried in the syntax element.
  • the decoder 30 can directly use the default prediction mode for decoding. It can be understood that the inter prediction unit 2101 may be used to perform any combination of inter prediction techniques.
  • the intra prediction unit 2102 is used to obtain, for example, receive an image block (current image block) of the same image and one or more previously reconstructed blocks, such as reconstructed adjacent blocks, for intra-frame estimation.
  • the encoder 20 may be used to select an intra prediction mode from a plurality of (predetermined) intra prediction modes.
  • the embodiment of the encoder 20 may be used to select an intra prediction mode based on optimization criteria, for example, based on a minimum residual (e.g., an intra prediction mode that provides a prediction block most similar to the current image block) or a minimum rate distortion.
• the intra prediction unit 2102 is further configured to determine an intra prediction block based on the intra prediction parameters of the selected intra prediction mode. In any case, after selecting the intra prediction mode for a block, the intra prediction unit 2102 is also used to provide the intra prediction parameters to the entropy encoding unit 211, that is, to provide information indicating the intra prediction mode selected for the block. In one example, the intra prediction unit 2102 can be used to perform any combination of intra prediction techniques.
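The minimum-residual selection mentioned above (choosing the intra mode whose prediction block is most similar to the current image block) can be sketched as follows. The SAD cost, mode names, and sample values are illustrative assumptions, not the encoder 20's actual cost function (which may also use rate distortion).

```python
def sad(block, pred):
    """Sum of absolute differences: a simple residual-size measure."""
    return sum(abs(a - b) for a, b in zip(block, pred))

def select_intra_mode(block, candidate_preds):
    """candidate_preds maps a mode name to its predicted samples;
    return the mode with the minimum SAD (minimum residual)."""
    return min(candidate_preds, key=lambda m: sad(block, candidate_preds[m]))

block = [50, 52, 54, 56]
preds = {
    "DC":       [53, 53, 53, 53],   # flat prediction, SAD = 8
    "VERTICAL": [50, 52, 54, 56],   # matches exactly, SAD = 0
}
print(select_intra_mode(block, preds))  # -> VERTICAL
```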
• the aforementioned intra prediction unit 2102 may transmit syntax elements to the entropy encoding unit 211, where the syntax elements include intra prediction parameters (for example, indication information of the intra prediction mode selected for prediction of the current block after traversing multiple intra prediction modes).
  • the intra prediction parameter may not be carried in the syntax element.
  • the decoder 30 can directly use the default prediction mode for decoding.
• the entropy coding unit 211 is used to apply an entropy coding algorithm or scheme (for example, a variable length coding (VLC) scheme, a context adaptive VLC (CAVLC) scheme, an arithmetic coding scheme, context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy coding method or technique) to the quantized residual coefficients 209, inter prediction parameters, intra prediction parameters, and/or loop filter parameters, individually or jointly (or not at all), to obtain encoded image data 21 that can be output, for example in the form of an encoded bit stream 21.
  • the encoded bitstream can be transmitted to the video decoder 30, or archived for later transmission or retrieval by the video decoder 30.
  • the entropy encoding unit 211 may also be used to entropy encode other syntax elements of the current video slice being encoded.
  • the non-transform-based encoder 20 may directly quantize the residual signal without the transform processing unit 202 for certain blocks or frames.
  • the encoder 20 may have a quantization unit 203 and an inverse quantization unit 204 combined into a single unit.
  • the encoder 20 may be used to implement the encoding method described in the following embodiments.
• the video encoder 20 may directly quantize the residual signal without processing by the transform processing unit 202, and accordingly without processing by the inverse transform processing unit 205; or, for some image blocks or image frames, the video encoder 20 does not generate residual data, and accordingly needs no processing by the transform processing unit 202, quantization unit 203, inverse quantization unit 204, and inverse transform processing unit 205; or, the video encoder 20 may store the reconstructed image block directly as a reference block without filter processing; or, the quantization unit 203 and the inverse quantization unit 204 in the video encoder 20 may be merged together.
  • the loop filter is optional, and for lossless compression coding, the transform processing unit 202, the quantization unit 203, the inverse quantization unit 204, and the inverse transform processing unit 205 are optional. It should be understood that, according to different application scenarios, the inter prediction unit 2101 and the intra prediction unit 2102 may be selectively activated.
  • FIG. 5D shows a schematic/conceptual block diagram of an example for implementing the decoder 30 of the present application.
  • the video decoder 30 is used to receive, for example, encoded image data (e.g., an encoded bit stream) 21 encoded by the encoder 20 to obtain a decoded image.
  • video decoder 30 receives video data from video encoder 20, such as an encoded video bitstream and associated syntax elements that represent image blocks of an encoded video slice.
• the decoder 30 includes an entropy decoding unit 304, an inverse quantization unit 310, an inverse transform processing unit 312, a reconstruction unit 314 (such as a summer 314), a buffer 316, a loop filter 320, a decoded image buffer 330, and a prediction processing unit 360.
  • the prediction processing unit 360 may include an inter prediction unit 344, an intra prediction unit 354, and a mode selection unit 362.
  • video decoder 30 may perform decoding passes that are substantially reciprocal of the encoding passes described with video encoder 20 of FIG. 5C.
• the entropy decoding unit 304 is configured to perform entropy decoding on the encoded image data 21 to obtain, for example, quantized coefficients 309 and/or decoded encoding parameters (not shown in FIG. 5D), for example, any one or all of inter prediction parameters, intra prediction parameters, loop filter parameters, and/or other syntax elements (decoded).
  • the entropy decoding unit 304 is further configured to forward the inter prediction parameters, intra prediction parameters and/or other syntax elements to the prediction processing unit 360.
  • the video decoder 30 may receive syntax elements at the video slice level and/or the video block level.
• the inverse quantization unit 310 can be functionally the same as the inverse quantization unit 110, the inverse transform processing unit 312 can be functionally the same as the inverse transform processing unit 205, the reconstruction unit 314 can be functionally the same as the reconstruction unit 206, and the buffer 316 can be functionally the same as the buffer on the encoder side.
  • the loop filter 320 may be functionally the same as the loop filter, and the decoded image buffer 330 may be functionally the same as the decoded image buffer 209.
  • the prediction processing unit 360 may include an inter prediction unit 344 and an intra prediction unit 354.
  • the inter prediction unit 344 may be functionally similar to the inter prediction unit 2101
  • the intra prediction unit 354 may be functionally similar to the intra prediction unit 2102.
  • the prediction processing unit 360 is generally used to perform block prediction and/or obtain a prediction block 365 from the encoded data 21, and to receive or obtain (explicitly or implicitly) prediction-related parameters and/or information about the prediction from the entropy decoding unit 304, for example. Information about the selected prediction mode.
• the intra prediction unit 354 of the prediction processing unit 360 is used to generate a prediction block 365 for an image block of the current video slice based on the signaled intra prediction mode and data from previously decoded blocks of the current frame or image.
• the inter prediction unit 344 (e.g., the motion compensation unit) of the prediction processing unit 360 generates a prediction block 365 for the video block of the current video slice based on the motion vector and the other syntax elements received from the entropy decoding unit 304.
  • a prediction block can be generated from a reference image in a reference image list.
  • the video decoder 30 can construct a list of reference frames: list 0 and list 1 based on the reference images stored in the DPB 330 using the default construction technique.
  • the prediction processing unit 360 is configured to determine prediction information for the video block of the current video slice by parsing the motion vector and other syntax elements, and use the prediction information to generate the prediction block for the current video block being decoded.
• the prediction processing unit 360 uses some of the received syntax elements to determine the prediction mode (for example, intra or inter prediction), the inter prediction slice type (for example, B slice, P slice, or GPB slice), construction information for one or more of the reference image lists for the slice, the motion vector of each inter-coded video block of the slice, and the inter prediction status and other information of each inter-coded video block of the slice, in order to decode the video blocks of the current video slice.
• the syntax elements received by the video decoder 30 from the bitstream include syntax elements in one or more of an adaptive parameter set (APS), a sequence parameter set (SPS), a picture parameter set (PPS), or a slice header.
• the inverse quantization unit 310 may be used to inversely quantize (i.e., dequantize) the quantized transform coefficients provided in the bitstream and decoded by the entropy decoding unit 304.
  • the inverse quantization process may include using the quantization parameter calculated by the video encoder 20 for each video block in the video slice to determine the degree of quantization that should be applied and also determine the degree of inverse quantization that should be applied.
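As a hedged illustration of the QP-driven inverse quantization described above, the toy function below uses a step size that doubles every 6 QP, a pattern shared by H.264/HEVC; actual standards add scaling lists, per-frequency matrices, and bit-depth shifts, and the `base_step` parameter is an assumption of this sketch.

```python
def dequantize(levels, qp, base_step=1.0):
    """Toy uniform inverse quantization: the quantization parameter qp
    selects a step size that doubles every 6 QP; each decoded level is
    scaled back to a reconstructed transform coefficient."""
    step = base_step * (2.0 ** (qp / 6.0))
    return [lvl * step for lvl in levels]

print(dequantize([3, -1, 0], qp=12))  # step = 4.0 -> [12.0, -4.0, 0.0]
```

Larger QP means a coarser step, so the same levels reconstruct to larger (more heavily quantized) coefficients.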
  • the inverse transform processing unit 312 is used to apply an inverse transform (for example, an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to transform coefficients so as to generate a residual block in the pixel domain.
• the reconstruction unit 314 (for example, the summer 314) is used to add the inverse transform block 313 (that is, the reconstructed residual block 313) to the prediction block 365 to obtain the reconstructed block 315 in the sample domain, for example by adding the sample values of the reconstructed residual block 313 to the sample values of the prediction block 365.
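The sample-wise addition performed by the reconstruction unit 314 can be sketched as below. Clipping the sum to the valid sample range is a common implementation detail assumed here; it is not stated in the text.

```python
def reconstruct(residual, prediction, bit_depth=8):
    """Sample-wise addition of the reconstructed residual block and the
    prediction block, clipped to the valid sample range [0, 2^bit_depth - 1]."""
    max_val = (1 << bit_depth) - 1
    return [min(max(r + p, 0), max_val) for r, p in zip(residual, prediction)]

print(reconstruct([-5, 10, 300], [100, 250, 20]))  # -> [95, 255, 255]
```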
• the loop filter unit 320 (during or after the encoding cycle) is used to filter the reconstructed block 315 to obtain the filtered block 321, so as to smooth pixel transitions or otherwise improve the video quality.
  • the loop filter unit 320 may be used to perform any combination of the filtering techniques described below.
• the loop filter unit 320 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter.
• although the loop filter unit 320 is shown as an in-loop filter in FIG. 5D, in other configurations the loop filter unit 320 may be implemented as a post-loop filter.
  • the decoded video block 321 in a given frame or image is then stored in a decoded image buffer 330 that stores reference images for subsequent motion compensation.
  • the decoder 30 is used, for example, to output the decoded image 31 through the output 332 for presentation or viewing by the user.
  • the decoder 30 may generate an output video stream without the loop filter unit 320.
  • the non-transform-based decoder 30 may directly inversely quantize the residual signal without the inverse transform processing unit 312 for certain blocks or frames.
  • the video decoder 30 may have an inverse quantization unit 310 and an inverse transform processing unit 312 combined into a single unit.
  • the decoder 30 is used to implement the decoding method described in the following embodiments.
• the video decoder 30 may generate an output video stream without processing by the filter 320; or, for some image blocks or image frames, the entropy decoding unit 304 of the video decoder 30 does not decode quantized coefficients, and accordingly no processing by the inverse quantization unit 310 and the inverse transform processing unit 312 is needed.
  • the loop filter 320 is optional; and for lossless compression, the inverse quantization unit 310 and the inverse transform processing unit 312 are optional.
  • the inter prediction unit and the intra prediction unit may be selectively activated.
• the image blocks that contain both a pixel area and a blank area include: a boundary image block located at the right boundary of the current video frame, a boundary image block located at the lower boundary of the current video frame, and an image block located at the lower right corner of the current video frame. Exemplarily, the boundary image block located at the right boundary of the current video frame is shown in FIG. 6A.
  • the boundary image block located at the lower boundary of the current video frame is shown in FIG. 6B.
  • the image block at the lower right corner of the current video frame is shown in Figure 6C.
• this application refers to the boundary image block located at the right boundary of the current video frame and the boundary image block located at the lower boundary of the current video frame as the "boundary image block", and to the image block located at the lower right corner of the current video frame as the "lower right corner image block".
  • FIG. 6A illustrates the distribution relationship between the pixel area and the blank area in the border image block of the right border, and does not specifically refer to a certain border image block of the right border.
  • FIG. 6A can generally refer to the boundary image block of the right boundary.
  • FIG. 6B illustrates the distribution relationship between the pixel area and the blank area in the boundary image block of the lower boundary, and does not specifically refer to a boundary image block of the lower boundary.
  • FIG. 6B can generally refer to the boundary image block at the lower boundary.
  • the image blocks illustrated in FIGS. 6A to 6C are merely presented schematically. In the embodiment of the present application, the distribution ratio of the pixel area and the blank area in the boundary image block and the lower right corner image block can be arbitrary.
  • FIG. 7A shows a method flowchart of the video decoding method 100 provided by the present application.
  • the video decoding method 100 describes a method of dividing boundary image blocks.
  • the video decoding method 100 may be executed by the decoder 30.
  • the video decoding method described in this embodiment may be specifically executed by the prediction processing unit 360 in FIG. 5D. Based on this, the video decoding method 100 includes the following steps:
• Step S101: Detect whether the ratio of the side length of the first sub-side of the current boundary image block to the side length of the first side of the current boundary image block is less than or equal to a first threshold.
  • the current video frame refers to the current image of the video to be decoded
  • the current video frame is, for example, the first image of the video to be decoded.
  • the current boundary image block may be the boundary image block illustrated in FIG. 6A or FIG. 6B.
  • the first threshold is a value greater than 0 and less than 1.
  • the first threshold is, for example, 0.75.
  • the first side is the side of the current boundary image block.
  • the first sub-edge is the edge of the pixel area in the current boundary image block. Both the first side and the first sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • the current boundary image block is shown in FIG. 6A
  • the first side is the side in the horizontal direction illustrated in FIG. 6A
  • the first sub-side is the side in the horizontal direction of the pixel area in FIG. 6A.
  • the current boundary image block is, for example, as shown in FIG. 6B
  • the first side is the side in the vertical direction illustrated in FIG. 6B
  • the first sub-side is the side in the vertical direction of the pixel area in FIG. 6B.
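For a right-boundary block such as the one in FIG. 6A, the first-sub-side/first-side ratio tested in step S101 can be computed from the frame width and the block position. The function name and the 1000-pixel-wide frame below are illustrative assumptions, not values from this application:

```python
def pixel_area_ratio(block_x, block_size, frame_width):
    """For a block of nominal width block_size whose left edge is at
    horizontal offset block_x on the right frame boundary, return
    (first_sub_side, first_side, ratio): the width of the part that
    actually contains pixels, the nominal block width, and their ratio."""
    first_side = block_size
    first_sub_side = min(block_size, frame_width - block_x)
    return first_sub_side, first_side, first_sub_side / first_side

# Hypothetical 1000-pixel-wide frame with 128-wide blocks: the last block
# starts at x = 896 and holds only 104 pixel columns, the rest is blank.
sub, side, ratio = pixel_area_ratio(block_x=896, block_size=128, frame_width=1000)
print(sub, side, ratio)  # 104 128 0.8125
```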
• Step S102: When the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, divide the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block.
• the prediction processing unit 360 divides the current boundary image block in a direction perpendicular to the first side, and the first block includes all of the pixel area in the current boundary image block.
  • the current boundary image block is, for example, the boundary image block illustrated in FIG. 6A.
• the side length of the pixel area in the vertical direction in the boundary image block illustrated in FIG. 6A equals the vertical side length of the image block, and the side length of the pixel area in the horizontal direction is smaller than the horizontal side length of the image block, so the prediction processing unit 360 divides the boundary image block shown in FIG. 6A in the vertical direction and places all of the pixel area in the image block into one block.
  • the current boundary image block is, for example, the boundary image block shown in FIG. 6B.
• the vertical side length of the pixel area in the boundary image block shown in FIG. 6B is smaller than the vertical side length of the image block, and the horizontal side length of the pixel area equals the horizontal side length of the image block, so the prediction processing unit 360 divides the boundary image block shown in FIG. 6B in the horizontal direction and places all of the pixel area in the image block into one block.
• this step may include: when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold and less than or equal to the first threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block.
  • the second threshold is greater than 0 and smaller than the first threshold, and the second threshold is, for example, 0.25.
• although the present embodiment uses "the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold" as the condition for triggering the subsequent operations, this description is only one expression of the technical scheme of this application and does not restrict it. This step could equally be described using the reciprocal condition, "the ratio of the side length of the first side to the side length of the first sub-side is greater than or equal to a threshold". Although the two trigger conditions are phrased differently, they play the same role in this application; therefore, even if the value of the first threshold and the detection condition of this step are modified, the corresponding limitations still fall within the protection scope of the technical solution of this application.
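Using the example thresholds given above (first threshold 0.75, second threshold 0.25), the trigger condition of steps S101/S102 reduces to one comparison. The function name and the exact inclusive/exclusive treatment of the bounds follow the wording of this embodiment but are otherwise assumptions of this sketch:

```python
FIRST_THRESHOLD = 0.75   # example value from the text
SECOND_THRESHOLD = 0.25  # example value from the text

def should_split_perpendicular(first_sub_side, first_side):
    """Trigger condition of steps S101/S102: split the boundary block
    perpendicular to the first side when the pixel-area ratio lies in
    (SECOND_THRESHOLD, FIRST_THRESHOLD]."""
    ratio = first_sub_side / first_side
    return SECOND_THRESHOLD < ratio <= FIRST_THRESHOLD

print(should_split_perpendicular(64, 128))   # ratio 0.5   -> True
print(should_split_perpendicular(112, 128))  # ratio 0.875 -> False
```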
• Step S103: When the area of the first block is equal to the area of the pixel area, use the first block as a coding unit and obtain the reconstructed block of the coding unit according to the coding information of the coding unit, or continue to divide the first block to obtain at least two coding units and obtain the reconstructed blocks of the at least two coding units according to their coding information.
• the area of the first block is obtained by multiplying the horizontal side length of the first block by its vertical side length, and the area of the pixel area is obtained by multiplying the horizontal side length of the pixel area by its vertical side length.
• the area of the first block being equal to the area of the pixel area indicates that the horizontal side length of the first block equals the horizontal side length of the pixel area and the vertical side length of the first block equals the vertical side length of the pixel area; that is, no blank area is included in the first block.
  • the prediction processing unit 360 may divide the boundary image block according to one of the derived tree (derived tree, DT) division methods.
  • the DT division method includes a variety of division modes for dividing blocks in the horizontal direction and/or vertical direction.
• the ratio of the side length of the second side of the first block to the side length of the third side of the second block after division in a first partition mode of the DT partition method may satisfy, for example, 1:3;
• the ratio of the side length of the second side of the first block to the side length of the third side of the second block in a second partition mode of the DT partition method may satisfy, for example, 3:1.
  • the second side and the third side are sides in the vertical direction.
• the division modes for dividing blocks in the vertical direction included in the DT division method are similar to those for dividing blocks in the horizontal direction, and are not detailed here. Based on this, when boundary image blocks are divided according to a certain division mode in DT or the BT division method, the area of the first block may be equal to the area of the pixel area, or may be larger than the area of the pixel area.
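The 1:3 and 3:1 horizontal DT modes mentioned above can be sketched as a size computation. The mode labels and the integer arithmetic are illustrative assumptions:

```python
def dt_horizontal_split(width, height, mode):
    """Return the (width, height) of the first and second blocks produced
    by a horizontal DT split with side-length ratio 1:3 or 3:1."""
    if mode == "1:3":
        h1 = height // 4          # first block gets 1/4 of the height
    elif mode == "3:1":
        h1 = 3 * height // 4      # first block gets 3/4 of the height
    else:
        raise ValueError(mode)
    return (width, h1), (width, height - h1)

print(dt_horizontal_split(128, 128, "1:3"))  # ((128, 32), (128, 96))
print(dt_horizontal_split(128, 128, "3:1"))  # ((128, 96), (128, 32))
```

A vertical DT split would partition the width the same way, mirroring the symmetry noted in the text.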
• the prediction processing unit 360 may use the first block as a CU and obtain the reconstructed block of the CU according to the coding information of the CU. In other embodiments, the prediction processing unit 360 may also continue to divide the first block to obtain CUs according to the pixel information of the pixel area in the first block, such as the texture of the pixels, and obtain the reconstructed blocks of the corresponding CUs according to the coding information. In this embodiment, the prediction processing unit 360 may use the BT division method and/or the QT division method to continue to divide the first block.
  • the encoding information may include encoded image data and associated data.
  • the associated data may include sequence parameter sets, image parameter sets, and other grammatical structures.
  • the sequence parameter set can contain parameters that apply to zero or more sequences.
  • the image parameter set may contain parameters applied to zero or more images.
  • the syntax structure refers to a set of zero or more syntax elements arranged in a specified order in the code stream. The process by which the prediction processing unit 360 obtains the reconstructed block of the CU according to the coding information of the CU is not described in detail here.
• Step S104: When the area of the first block is greater than the area of the pixel area, continue to divide the first block to obtain coding units, and obtain the reconstructed blocks of the coding units according to the coding information of the coding units.
• different from step S103, when the area of the first block is greater than the area of the pixel area, the first block is still a boundary image block, so the prediction processing unit 360 regards the first block as the current boundary image block and continues to divide it.
• the prediction processing unit 360 may divide the first block in a direction perpendicular to the first side to obtain a first sub-block and a second sub-block, wherein the first sub-block is a non-boundary image block, the second sub-block includes a second sub-pixel area, and the second sub-pixel area is a partial area of the pixel area.
  • the second threshold is greater than 0 and smaller than the first threshold, and the second threshold is, for example, 0.5.
• the manner in which the prediction processing unit 360 continues to divide the first block may be: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the second threshold, perform BT division on the first block in the direction perpendicular to the first side.
• in another embodiment, when the ratio of the side length of the first sub-side to the side length of the first side is greater than the second threshold, QT division is performed on the first block.
  • the decoder divides the pixel area in the boundary image block into the first sub-block.
  • the first side is the side of the current boundary image block
  • the first sub-side is the side of the pixel area in the current boundary image block
  • both the first side and the first sub side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
• when the decoder performs block division, it is not limited to the existing BT and/or QT division methods, so that in the process of dividing the boundary image block to obtain CUs, the number of divisions can be reduced and, further, the complexity of the division algorithm can be reduced.
• the prediction processing unit 360 may divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block.
  • the first sub-block is a non-boundary image block
  • the second sub-block is a boundary image block and includes a first sub-pixel area
  • the first sub-pixel area is a partial area of the pixel area.
  • the prediction processing unit 360 may use the second block as the current boundary image block, continue to divide the second block to obtain the coding unit, and obtain the reconstructed block of the coding unit according to the coding information of the coding unit.
  • the method for the prediction processing unit 360 to continue to divide the second block is similar to the method for the prediction processing unit 360 to continue to divide the first block in step S104, and will not be described in detail here.
• dividing the current boundary image block may include: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold and less than or equal to a third threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block.
  • the third threshold is greater than the first threshold, and the third threshold may be 1, for example.
• the decoder 30 can divide the boundary image block according to the relationship between the side length of the pixel area and the side length of the boundary image block where the pixel area is located, so that in the process of dividing the LCU to obtain CUs the number of divisions is relatively small, and, further, the complexity of the division algorithm can be reduced.
  • the side length described in this embodiment is the length of the side perpendicular to the boundary of the current video frame where the current boundary image block is located among the sides of the pixel area and the boundary image block.
  • FIG. 7B shows a method flowchart of a video decoding method 200 provided by the present application.
  • the video decoding method 200 describes a method of dividing boundary image blocks.
  • the video decoding method 200 may be executed by the decoder 30.
  • the video decoding method described in this embodiment may be specifically executed by the prediction processing unit 360 in FIG. 5D. Based on this, the video decoding method 200 includes the following steps:
• Step S201: Detect whether the ratio of the side length of the first sub-side of the current boundary image block to the side length of the first side of the current boundary image block is within a preset interval.
  • the numerical range of the preset interval described in this embodiment is greater than the second threshold and less than the first threshold.
  • the first threshold and the second threshold described in this embodiment are the same as those described in the video decoding method 100.
  • the first threshold is, for example, 0.5
  • the second threshold is, for example, 0.
  • the first threshold is, for example, 0.25
  • the second threshold is, for example, 0. No more details here.
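Step S201's preset-interval check, with the example threshold pairs above ((0, 0.5) or (0, 0.25)), can be sketched as follows; the function name and default bounds are illustrative assumptions:

```python
def in_preset_interval(first_sub_side, first_side, low=0.0, high=0.5):
    """Step S201 check: the ratio must be strictly greater than the
    second threshold (low) and strictly less than the first threshold
    (high), per the interval definition given in the text."""
    ratio = first_sub_side / first_side
    return low < ratio < high

print(in_preset_interval(32, 128))             # ratio 0.25 in (0, 0.5) -> True
print(in_preset_interval(96, 128))             # ratio 0.75            -> False
print(in_preset_interval(32, 128, high=0.25))  # 0.25 not < 0.25       -> False
```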
• Step S202: When the ratio of the side length of the first sub-side to the side length of the first side is within the preset interval, divide the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block.
• similar to step S102 of the video decoding method 100, in this embodiment the prediction processing unit 360 also divides the current boundary image block in a direction perpendicular to the first side; the division direction is not repeated here.
  • the first block may include all pixel areas in the current boundary image block, while the second block does not include any pixel areas.
• alternatively, the first block may be a non-boundary image block and the second block a boundary image block.
  • the pixel area included in the second block is a part of the pixel area of the current boundary image block.
• Step S203: Use the non-boundary block among the first block and the second block as a coding unit and obtain the reconstructed block of the coding unit according to the coding information of the coding unit, or continue to divide the first block or the second block.
• for the scenario in which the first block contains all of the pixel area in the current boundary image block and the second block does not contain any pixel area, the subsequent operations performed by the prediction processing unit 360 on the first block are the same as in the video decoding method 100 and are not repeated here.
• for the scenario in which the first block is a non-boundary image block and the second block is a boundary image block, the prediction processing unit 360 may use the first block as a coding unit and obtain the reconstructed block of the CU according to the coding information of the CU.
  • the prediction processing unit 360 may continue to partition the second partition to obtain a CU.
• the way in which the prediction processing unit 360 continues to divide the second block is as described in the video decoding method 100.
  • FIG. 7C shows a method flowchart of a video decoding method 300 provided by the present application.
  • the video decoding method 300 describes a method for dividing an image block in the lower right corner.
  • the video decoding method 300 may be executed by the decoder 30.
  • the video decoding method described in this embodiment may be specifically executed by the prediction processing unit 360 in FIG. 5D. Based on this, the video decoding method 300 includes the following steps:
• Step S301: Determine that the ratio of the side length of the first sub-side of the lower right corner image block to the side length of the first side of the lower right corner image block is less than or equal to a preset threshold, and that the ratio of the side length of the second sub-side of the lower right corner image block to the side length of the second side is greater than the preset threshold.
  • the preset threshold is, for example, 0.5.
  • the image block in the lower right corner is shown in Figure 6C.
  • the first side and the second side are both sides of the lower right corner image block, and the first and second sub sides are the sides of the pixel area in the lower right corner image block.
  • the first side includes a first sub-side
  • the second side includes a second sub-side
  • the first side and the second side are perpendicular to each other.
  • Step S302 Use the QT-derived division mode to divide the lower right image block to obtain the first block, the second block, and the third block.
  • the DT described in the technical solution of the present application also includes a division mode derived from QT.
  • the QT-derived division mode may specifically include: Q_A division mode and Q_B division mode.
• in the Q_A division mode, a horizontal-direction BT division is performed first, and then the upper half is further divided by a vertical-direction BT division, so that three divided blocks are obtained, as shown in FIGS. 9 and 10 below.
• in the Q_B division mode, a vertical-direction BT division is performed first, and then the left half is further divided by a horizontal-direction BT division, so that three divided blocks are obtained, as shown in FIGS. 9 and 10 below.
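As an illustrative sketch (the helper names are hypothetical, not from the patent), the two QT-derived modes can be expressed as nested binary splits returning (x, y, width, height) rectangles, following the geometry as described here and in the FIG. 9/FIG. 10 text:

```python
def split_q_a(x, y, w, h):
    """Q_A as described: horizontal BT first, then a vertical BT on the
    upper half, yielding two quarter-size blocks side by side plus the
    lower half. Each block is an (x, y, width, height) tuple."""
    half_w, half_h = w // 2, h // 2
    first = (x, y, half_w, half_h)            # upper-left quarter
    second = (x + half_w, y, half_w, half_h)  # upper-right quarter
    third = (x, y + half_h, w, half_h)        # lower half
    return first, second, third


def split_q_b(x, y, w, h):
    """Q_B as described: vertical BT first, then a horizontal BT on the
    left half, yielding two stacked quarter-size blocks plus the right half."""
    half_w, half_h = w // 2, h // 2
    first = (x, y, half_w, half_h)            # upper-left quarter
    second = (x, y + half_h, half_w, half_h)  # lower-left quarter
    third = (x + half_w, y, half_w, h)        # right half
    return first, second, third
```

For example, for a 64×64 block, Q_A yields two 32×32 blocks and one 64×32 block, matching the quarter/quarter/half area split described below.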
  • the first sub-block includes the first sub-pixel area of the pixel area
  • the second sub-block includes the second sub-pixel area of the pixel area.
• the first sub-pixel area and the second sub-pixel area form the pixel area of the lower right corner image block.
• the area of the first block and the area of the second block are both one-fourth of the area of the lower right corner image block, and the area of the third block is one-half of the area of the lower right corner image block.
  • Step S303 Continue to divide the second block to obtain the coding unit corresponding to the second block, and obtain the reconstruction block of the coding unit corresponding to the second block according to the coding information of the coding unit corresponding to the second block.
  • the second block is a boundary image block or a lower right corner image block, and the prediction processing unit 360 needs to continue to divide the second block.
  • the manner in which the prediction processing unit 360 divides the second block may be as described in the embodiment in the video decoding method 100.
  • the manner in which the prediction processing unit 360 divides the second block may be as described in the embodiment of the video decoding method 300, which will not be repeated here.
• Step S304 When the area of the first block is equal to the area of the first sub-pixel area, use the first block as the coding unit and obtain the reconstruction block of the first block coding unit according to the coding information of the first block coding unit, or continue to divide the first block to obtain the first block coding unit and obtain the reconstruction block of the first block coding unit according to the coding information of the first block coding unit.
• Step S305 When the area of the first block is larger than the area of the first sub-pixel area, continue to divide the first block to obtain the first block coding unit, and obtain the reconstruction block of the first block coding unit according to the coding information of the first block coding unit.
• continuing to divide the first block includes: detecting whether the ratio of the side length of the third sub-side of the first block to the side length of the third side is less than or equal to the first threshold, where the third sub-side is a side of the first sub-pixel area, and the third side and the third sub-side are parallel to the first side.
• the first block is divided in a direction perpendicular to the first side to obtain the first sub-block and the second sub-block.
  • the first sub-block includes the first sub-pixel area.
  • the first threshold is as described in the video decoding method 100.
  • the operation of the prediction processing unit 360 on the first block is the same as the description of step S103 and step S104 in the video decoding method 100, and will not be described in detail here.
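Steps S304 and S305 above can be sketched as a small decision helper. This is a hedged illustration only: the function name, the 0.5 threshold value, and the single-ratio signature are assumptions for readability, not the patent's normative procedure.

```python
def handle_first_block(block_side, pixel_side, first_threshold=0.5):
    """Decide what to do with the first block of a corner split (sketch).

    block_side -- side length of the first block's third side
    pixel_side -- side length of the third sub-side (of the first
                  sub-pixel area), parallel to the first side
    Returns a label for the action taken.
    """
    if pixel_side == block_side:
        # S304: the areas match, so the block itself can be the CU.
        return "use_as_cu"
    if pixel_side / block_side <= first_threshold:
        # S305 branch: split perpendicular to the first side so the
        # first sub-block contains the first sub-pixel area.
        return "split_perpendicular_to_first_side"
    return "continue_dividing_otherwise"
```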
  • FIG. 8A shows a method flowchart of a video encoding method 400 provided in this application.
  • the video encoding method 400 describes a method of dividing boundary image blocks.
  • the video encoding method 400 may be executed by the encoder 20.
  • the video encoding method described in this embodiment may be specifically executed by the prediction processing unit 210 in FIG. 5C. Based on this, the video encoding method 400 includes the following steps:
  • Step S401 Detect whether the ratio of the side length of the first sub-side of the current boundary image block to the side length of the first side of the current video frame is less than or equal to a first threshold.
• Step S402 When the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, divide the current boundary image block in the direction perpendicular to the first side to obtain the first block and the second block.
  • the current boundary image block is divided in a direction perpendicular to the first side to obtain the first block and the second block.
  • the first sub-block is a non-boundary image block
  • the second sub-block is a boundary image block and includes a first sub-pixel area
  • the first sub-pixel area is a partial area of the pixel area.
• Step S403 When the area of the first block is equal to the area of the pixel area, use the first block as the coding unit and obtain the coding information of the coding unit according to the image information of the coding unit, or continue to divide the first block to obtain a coding unit and obtain the coding information of the coding unit according to the image information of the coding unit.
• Step S405 When the area of the first block is greater than the area of the first sub-pixel area, continue to divide the first block to obtain the first block coding unit, and obtain the coding information of the first block coding unit according to the image information of the first block coding unit.
  • the block division operation performed by the prediction processing unit 210 is similar to the block division operation performed by the prediction processing unit 360 in the video decoding method 100, which will not be described in detail here.
  • the process of obtaining the coding information of the corresponding CU by the prediction processing unit 210 according to the image information of the CU is not described in detail here.
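The threshold check of steps S401 and S402 can be sketched as follows. The helper name and the 0.25 default are illustrative assumptions; the text treats the first threshold as a configurable value.

```python
def choose_boundary_split(pixel_side, block_side, first_threshold=0.25):
    """S401/S402 sketch: compare the first-sub-side / first-side ratio
    against the first threshold and report whether to split the current
    boundary image block perpendicular to the first side."""
    ratio = pixel_side / block_side
    if ratio <= first_threshold:
        return "divide_perpendicular_to_first_side"
    return "try_another_division_mode"
```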
  • FIG. 8B shows a method flowchart of a video encoding method 500 provided in this application.
  • the video encoding method 500 describes a method of dividing boundary image blocks.
  • the video encoding method 500 may be executed by the encoder 20.
  • the video encoding method described in this embodiment may be specifically executed by the prediction processing unit 210 in FIG. 5C. Based on this, the video encoding method 500 includes the following steps:
  • Step S501 Detect whether the ratio of the side length of the first sub-side of the current boundary image block to the side length of the first side of the current video frame is within a preset interval.
  • Step S502 When the ratio of the side length of the first sub-side to the side length of the first side is within a preset interval, divide the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block.
  • Step S503 Use the non-boundary blocks in the first block and the second block as the coding unit, and obtain the coding information of the coding unit according to the image information of the coding unit, or continue to divide the first block or the second block. Block to obtain a coding unit, and obtain coding information of the coding unit according to the image information of the coding unit.
  • the block division operation performed by the prediction processing unit 210 is similar to the block division operation performed by the prediction processing unit 360 in the video decoding method 200, which will not be described in detail here.
  • the process of obtaining the coding information of the corresponding CU by the prediction processing unit 210 according to the image information of the CU is not described in detail here.
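Method 500 gates the split on a preset interval rather than a single threshold. A minimal sketch, assuming an interval of the form (second threshold, first threshold] with illustrative bounds:

```python
def ratio_in_preset_interval(pixel_side, block_side, low=0.25, high=0.5):
    """S501 sketch: check whether the first-sub-side / first-side ratio
    lies within a preset interval (low, high]. The bounds here are
    illustrative assumptions, mirroring the second/first thresholds
    used elsewhere in the document."""
    ratio = pixel_side / block_side
    return low < ratio <= high
```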
  • FIG. 8C shows a method flowchart of a video encoding method 600 provided in this application.
  • the video encoding method 600 describes a method for dividing the image block in the lower right corner.
  • the video encoding method 600 may be executed by the encoder 20.
  • the video encoding method described in this embodiment may be specifically executed by the prediction processing unit 210 in FIG. 5C. Based on this, the video encoding method 600 includes the following steps:
• Step S601 Determine that the ratio of the side length of the first sub-side of the lower-right image block to the side length of the first side of the current video frame is less than or equal to a preset threshold, and that the ratio of the side length of the second sub-side of the lower-right image block to the side length of the second side is greater than the preset threshold.
  • the preset threshold is, for example, 0.5.
  • Step S602 Use the QT-derived division mode to divide the lower right corner image block to obtain the first block, the second block, and the third block.
  • Step S603 Continue to divide the second block to obtain the coding unit corresponding to the second block, and obtain the coding information of the coding unit corresponding to the second block according to the image information of the coding unit corresponding to the second block.
• Step S604 When the area of the first block is equal to the area of the first sub-pixel area, use the first block as the coding unit and obtain the coding information of the first block coding unit according to the image information of the first block coding unit, or continue to divide the first block to obtain the first block coding unit and obtain the coding information of the first block coding unit according to the image information of the first block coding unit.
• Step S605 When the area of the first block is greater than the area of the first sub-pixel area, continue to divide the first block to obtain the first block coding unit, and obtain the coding information of the first block coding unit according to the image information of the first block coding unit.
• continuing to divide the first block includes: detecting whether the ratio of the side length of the third sub-side of the first block to the side length of the third side is less than or equal to the first threshold, where the third sub-side is a side of the first sub-pixel area, and the third side and the third sub-side are parallel to the first side.
• the first block is divided in a direction perpendicular to the first side to obtain the first sub-block and the second sub-block.
  • the first sub-block includes the first sub-pixel area.
  • the first threshold is as described in the video encoding method 400.
  • the block division operation performed by the prediction processing unit 210 is similar to the block division operation performed by the prediction processing unit 360 in the video decoding method 300, which will not be described in detail here.
  • the process of obtaining the coding information of the corresponding CU by the prediction processing unit 210 according to the image information of the CU is not described in detail here.
• in the video encoding method and the video decoding method described in this application, the corresponding image blocks can be divided by the DT division method, the BT division method, or the QT division method according to the relationship between the side length of the pixel area in the boundary image block and/or the lower right corner image block and the side length of the image block, so that the number of divisions in the process from dividing the boundary image block to obtaining the CU can be reduced, and further, the complexity of the division algorithm can be reduced.
• the DT division method described in the present application may include the following three division mode groups: division mode group 91, division mode group 92, and division mode group 93.
  • the division mode group 91 includes various division modes for dividing image blocks in the vertical direction
  • the division mode group 92 includes various division modes for dividing image blocks in the horizontal direction
  • the division mode group 93 includes QT division modes and QT-based division modes.
• other derived division modes may also be included in division mode group 91, division mode group 92, and division mode group 93.
• FIG. 9 is only a schematic description of the division modes in this application, and the technical solution of this application is not limited to the division modes illustrated in FIG. 9.
• for example, using one division mode in division mode group 91, the ratio of the horizontal side length of the first block to the horizontal side length of the second block is 3:1; for another example, using another division mode in division mode group 91, the ratio of the horizontal side length of the first block to the horizontal side length of the second block is 1:3; for yet another example, using a further division mode in division mode group 91, the ratio of the horizontal side length of the first block to the horizontal side length of the second block is 1:7 (not shown in FIG. 9).
• similarly, using one division mode in division mode group 92, the ratio of the vertical side length of the first block to the vertical side length of the second block is 3:1; for another example, the ratio of the vertical side length of the first block to the vertical side length of the second block is 1:3; for yet another example, the ratio of the vertical side length of the first block to the vertical side length of the second block is 7:1 (not shown in FIG. 9).
• for example, the division mode group 93 includes the Q_A division mode; after the Q_A division mode is used, the first block, the second block, and the third block are obtained, the area of the first block and the area of the second block are each one-quarter of the area of the original image block, the first block and the second block are arranged side by side in the horizontal direction, and the area of the third block is one-half of the area of the original image block. For another example, the division mode group 93 includes the Q_B division mode; using the Q_B division mode, the first block, the second block, and the third block are obtained, the area of the first block and the area of the second block are each one-quarter of the area of the original image block, the first block and the second block are arranged side by side in the vertical direction, and the area of the third block is one-half of the area of the original image block.
• the related equipment can maintain multiple DT division modes, so that when dividing the boundary image block and the lower right corner image block, a division mode can be selected from the multiple DT division modes; further, the number of divisions in the process of obtaining the CU from the boundary image block and/or the lower right corner image block can be relatively small.
  • Fig. 10 illustrates the division modes included in an exemplary DT division method.
  • the horizontal division modes include: HOR_TOP division mode, BT-1 division mode, and HOR_DOWN division mode; vertical division modes include: VER_LEFT division mode, BT-2 division mode, and VER_RIGHT division mode.
  • the division modes derived from QT include: Q_A division mode, Q_B division mode and QT division mode.
  • the ratio of the vertical side length of the first block divided by the HOR_TOP division mode to the vertical side length of the second block divided is 1:3.
  • the ratio of the vertical side length of the first block divided by the BT-1 division mode to the vertical side length of the second block divided is 1:1.
  • the ratio of the vertical side length of the first block divided by the HOR_DOWN division mode to the vertical side length of the second block divided is 3:1.
  • the ratio of the side length in the horizontal direction of the first block divided by the VER_LEFT division mode to the side length of the second block in the horizontal direction is 1:3.
  • the ratio of the horizontal side length of the first block divided by the BT-2 division mode to the horizontal side length of the second block divided is 1:1.
  • the ratio of the horizontal side length of the first block divided by the VER_RIGHT division mode to the horizontal side length of the second block divided is 3:1.
  • the area of the first block and the area of the second block obtained by the Q_A division mode are both a quarter of the area of the original image block, and the first block and the second block are arranged side by side in the horizontal direction.
  • the area of the third block is one-half of the area of the original image block.
• the area of the first block and the area of the second block obtained by the Q_B division mode are both a quarter of the area of the original image block, and the first block and the second block are arranged side by side in the vertical direction,
  • the area of the third block is one-half of the area of the original image block.
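The six two-way modes listed above differ only in their first:second side-length ratios along the split direction, so they can be captured in a small table. The mode names come from FIG. 10; the helper function is a hypothetical sketch.

```python
# First:second side-length ratios along the split direction (FIG. 10).
SPLIT_RATIOS = {
    "HOR_TOP": (1, 3), "BT-1": (1, 1), "HOR_DOWN": (3, 1),
    "VER_LEFT": (1, 3), "BT-2": (1, 1), "VER_RIGHT": (3, 1),
}


def split_side_lengths(mode, side_len):
    """Side lengths of the first and second blocks along the split
    direction for a given mode. Assumes side_len is divisible by the
    ratio sum (block side lengths are powers of two in practice)."""
    a, b = SPLIT_RATIOS[mode]
    first = side_len * a // (a + b)
    return first, side_len - first
```

For example, `split_side_lengths("VER_LEFT", 64)` gives (16, 48), i.e. the 1:3 ratio stated above.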
  • FIG. 10 is only a schematic description of the application division mode, and does not constitute any limitation to the technical solution of the application.
  • the DT division method may also include other division methods, which will not be described in detail here.
  • FIG. 11A-1 illustrates an exemplary boundary image block 111, which is an image block of the right boundary of the video frame to which the boundary image block belongs.
• the prediction processing unit 360 detects that w a /w b of the boundary image block 111 is less than 0.25, where w a is the side length in the horizontal direction of the upper side of the pixel region 11A in the boundary image block 111, and w b is the side length in the horizontal direction of the upper side of the boundary image block 111.
  • the prediction processing unit 360 uses the VER_LEFT division mode illustrated in FIG. 10 to divide the boundary image block 111 to obtain a first block 1111 and a second block 1112.
  • the first block 1111 includes the pixel area 11A in the boundary image block 111, and the second block 1112 does not include the pixel area.
  • the first block 1111 includes a pixel area
  • the first block 1111 also includes a blank area, that is, the first block 1111 cannot be used as a CU, and the prediction processing unit 360 needs to continue to divide the first block 1111.
  • the prediction processing unit 360 may select a division mode among the division modes in the vertical direction of FIG. 10, such as the VER_RIGHT division mode, and divide the first block 1111 to obtain the first sub-block and the second sub-block.
  • the first sub-block includes a partial area of the pixel area 11A and is a non-boundary block.
  • the second sub-block includes the remaining partial area and blank area of the pixel area 11A, and is a boundary block.
  • the second threshold is less than the first threshold.
• it may include: when w a /w b is greater than the second threshold, the prediction processing unit 360 may use the BT-2 division mode to divide the first block 1111, wherein the second threshold is less than the first threshold.
  • the prediction processing unit 360 may use the QT division mode to divide the first block 1111.
• dividing the current boundary image block to obtain the first block and the second block includes: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the second threshold and the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block.
• “the prediction processing unit 360 detects that w a /w b of the boundary image block 111 is less than 0.25” can be equivalent to the scenario “the prediction processing unit 360 detects that w a /w b of the boundary image block 111 is greater than 0 and less than 0.25”.
• if the prediction processing unit 360 detects that w a /w b of the boundary image block 111 is equal to 0.25, the prediction processing unit 360 still uses the VER_LEFT division mode to divide the boundary image block 111 to obtain the first block 1111 and the second block 1112.
  • the area of the first block 1111 is equal to the area of the pixel area 11A, and the prediction processing unit 360 may use the first block 1111 as a CU, and further, the prediction processing unit 360 may obtain the reconstructed block of the CU according to the coding information of the CU.
  • the prediction processing unit 360 continues to divide the first block 1111 to obtain a CU, and obtains the reconstruction block of the corresponding CU according to the obtained coding information of the CU. No more details here.
  • this embodiment corresponds to the embodiment illustrated in FIG. 7A, FIG. 7B, FIG. 8A and FIG. 8B.
  • the "first threshold” is equal to 0.25 and the “second threshold” is equal to 0.
  • FIG. 11B illustrates an exemplary boundary image block 112, and the boundary image block 112 is an image block on the right boundary of the video frame to which the boundary image block belongs.
• the prediction processing unit 360 detects that w a /w b of the boundary image block 112 is greater than 0.25 and less than 0.5.
  • w a is the upper side length of the pixel region 11B in the boundary image block 112 in the horizontal direction
  • w b is the upper side length of the boundary image block 112 in the horizontal direction.
  • the prediction processing unit 360 uses the BT-2 division mode illustrated in FIG. 10 to divide the boundary image block 112 to obtain a first block 1121 and a second block 1122.
  • the first block 1121 includes the pixel area 11B, and the second block 1122 does not include the pixel area.
• the first block 1121 is a boundary image block.
  • the first block 1121 is still a boundary image block, and the prediction processing unit 360 continues to divide the first block 1121.
  • the implementation manner in which the prediction processing unit 360 continues to divide the first block 1121 is similar to the implementation manner in which the prediction processing unit 360 continues to divide the first block 1111 in the embodiment of FIG. 11A-1, and will not be described in detail here.
  • this embodiment corresponds to the implementation scenario in which the "first threshold” is equal to 0.5 and the “second threshold” is equal to 0.25 in the embodiments illustrated in FIGS. 7A, 7B, 8A, and 8B.
• if the prediction processing unit 360 detects that w a /w b of the boundary image block 112 is equal to 0.5, the prediction processing unit 360 may use the BT-2 division mode shown in FIG. 10 to divide the boundary image block 112 to obtain the first block 1121 and the second block 1122.
  • the first block 1121 includes the pixel area 11B
  • the first block 1121 is a non-boundary block
  • the second block 1122 does not include the pixel area.
  • FIG. 11C illustrates an exemplary boundary image block 113
  • the boundary image block 113 is an image block of the right boundary of the video frame to which the boundary image block belongs.
• the prediction processing unit 360 detects that w a /w b of the boundary image block 113 is greater than 0.5 and less than 0.75. In this embodiment, w a is the side length in the horizontal direction of the upper side of the pixel region 11C in the boundary image block 113, and w b is the side length in the horizontal direction of the upper side of the boundary image block 113.
  • the prediction processing unit 360 uses the VER_RIGHT division mode illustrated in FIG. 10 to divide the boundary image block 113 to obtain a first block 1131 and a second block 1132.
  • the first block 1131 includes the pixel area 11C, and the second block 1132 does not include the pixel area.
  • the first block 1131 is a boundary image block.
  • the prediction processing unit 360 continues to divide the first block 1131.
  • the implementation manner in which the prediction processing unit 360 continues to divide the first block 1131 is similar to the implementation manner in which the prediction processing unit 360 continues to divide the first block 1111 in the embodiment of FIG. 11A-1, and will not be described in detail here.
• the prediction processing unit 360 may also divide the first block 1131 in the vertical direction to obtain the first sub-block and the second sub-block, wherein the ratio of the horizontal side length of the first sub-block to the horizontal side length of the second sub-block can satisfy 2:1.
• if the prediction processing unit 360 detects that w a /w b of the boundary image block 113 is equal to 0.75, the prediction processing unit 360 may use the VER_RIGHT division mode shown in FIG. 10 to divide the boundary image block 113 to obtain the first block 1131 and the second block 1132.
• the first block 1131 includes the pixel area 11C
• the first block 1131 is a non-boundary block
• the second block 1132 does not include a pixel area.
  • this embodiment corresponds to the implementation scenario in which the "first threshold” is equal to 0.75 and the “second threshold” is equal to 0.5 in the embodiments illustrated in FIGS. 7A, 7B, 8A, and 8B.
• the implementation scenario of this embodiment can also be described as “w a /w b is greater than 0.5 and less than 1”, or described as “w a /w b is greater than 0.5”.
  • Fig. 11D illustrates an exemplary boundary image block 114, which is an image block on the right boundary of the video frame to which the boundary image block belongs.
• the prediction processing unit 360 detects that w a /w b of the boundary image block 114 is greater than 0.75 and less than 1.
  • w a is the upper side length of the pixel region 11D in the boundary image block 114 in the horizontal direction
  • w b is the upper side length of the boundary image block 114 in the horizontal direction.
  • the prediction processing unit 360 uses the VER_RIGHT division mode illustrated in FIG. 10 to divide the boundary image block 114 to obtain a first block 1141 and a second block 1142.
  • the first block 1141 includes a part of the pixel area of the pixel area 11D, and the first block 1141 is a non-boundary image block.
  • the second block 1142 includes the remaining part of the pixel area of the pixel area 11D, and the second block 1142 is a boundary image block.
  • the prediction processing unit 360 continues to divide the second block 1142.
  • the implementation manner in which the prediction processing unit 360 continues to divide the second block 1142 is similar to the implementation manner in which the prediction processing unit 360 continues to divide the first block 1111 in the embodiment of FIG. 11A-1, and will not be described in detail here.
  • this embodiment corresponds to the implementation scenario in which the "first threshold” is equal to 0.75 and the “second threshold” is equal to 1 in the embodiments illustrated in FIGS. 7A and 8A.
  • This embodiment corresponds to the implementation scenario in which the "first threshold” is equal to 1, and the “second threshold” is equal to 0.75 in the embodiments illustrated in FIG. 7B and FIG. 8B.
• “the prediction processing unit 360 detects that w a /w b of the boundary image block 114 is greater than 0.75 and less than 1” can be equivalent to “the prediction processing unit 360 detects that w a /w b of the boundary image block 114 is greater than 0.75”.
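Across the examples of FIGS. 11A-1 to 11D, the ratio w a /w b of a right-boundary block selects the vertical division mode by bands. The ladder below merely restates those examples (the function name is hypothetical; the thresholds are the ones given in the text):

```python
def pick_vertical_division_mode(w_a, w_b):
    """Band-based mode choice for a right-boundary image block, per the
    FIG. 11A-1 to 11D examples: VER_LEFT for ratios up to 0.25, BT-2
    for ratios up to 0.5, VER_RIGHT above 0.5."""
    ratio = w_a / w_b
    if ratio <= 0.25:
        return "VER_LEFT"
    if ratio <= 0.5:
        return "BT-2"
    return "VER_RIGHT"  # covers both the (0.5, 0.75] and (0.75, 1) cases
```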
  • the boundary image blocks obtained by dividing in FIG. 11A-1 to FIG. 11D may also continue to be divided in a BT or QT division manner to obtain a CU. This embodiment will not be repeated here.
  • FIGS. 11A-1 to 11D all take the image block at the right boundary of the video frame as an example, and describe the implementation scenario of dividing the image block in this application.
• in the implementation scenario in which the prediction processing unit 360 divides the boundary block at the lower boundary of the video frame, the prediction processing unit 360 detects the relationship between the value of w x /w y and each threshold, where w x is the side length in the vertical direction of the pixel area in the boundary image block 121, and w y is the side length in the vertical direction of the boundary image block 121.
  • the prediction processing unit 360 determines the division mode for dividing the boundary image block 121 from the HOR_TOP division mode, the BT-1 division mode, and the HOR_DOWN division mode illustrated in FIG. 10. This application will not be detailed here.
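For the lower-boundary case just described, the text names HOR_TOP, BT-1, and HOR_DOWN but does not restate the thresholds. The sketch below assumes the same bands as the right-boundary examples, mirrored to the vertical ratio w x /w y; this symmetry is an assumption, not stated in the text.

```python
def pick_horizontal_division_mode(w_x, w_y):
    """Assumed lower-boundary analog of the right-boundary ladder: the
    vertical side-length ratio w_x/w_y selects among the horizontal
    division modes of FIG. 10 (threshold bands assumed symmetric to
    the right-boundary examples)."""
    ratio = w_x / w_y
    if ratio <= 0.25:
        return "HOR_TOP"
    if ratio <= 0.5:
        return "BT-1"
    return "HOR_DOWN"
```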
  • FIG. 13A-1 illustrates an exemplary image block 131 in the lower right corner.
• the prediction processing unit 360 detects that w a horizontal /w b horizontal of the lower right corner image block 131 is less than 0.5, and that w a vertical /w b vertical of the lower right corner image block 131 is greater than 0.5, where w a horizontal is the side length in the horizontal direction of the upper side of the pixel area 13A in the lower right corner image block 131, w b horizontal is the side length in the horizontal direction of the upper side of the lower right corner image block 131, w a vertical is the side length in the vertical direction of the side of the pixel area 13A in the lower right corner image block 131, and w b vertical is the side length in the vertical direction of the side of the lower right corner image block 131.
  • the prediction processing unit 360 uses the Q_A division mode to divide the lower right image block 131 to obtain a first block 1311, a second block 1312, and a third block 1313.
  • the first block 1311 includes the first sub-pixel area of the pixel area 13A
  • the second block 1312 includes the second sub-pixel area of the pixel area 13A.
  • the first sub-pixel area and the second sub-pixel area constitute the pixel area 13A.
  • the third block 1313 does not include a pixel area.
  • the first block 1311 is the lower boundary image block
  • the second block 1312 is the lower right corner image block.
  • the prediction processing unit 360 uses the above-mentioned method of dividing the lower boundary image block to continue to divide the first block 1311, and uses the method of dividing the lower right corner image block to divide the second block 1312. No more details here.
  • the prediction processing unit 360 uses the Q_A division mode to divide the lower right image block 131 to obtain the first block 1311, the second block 1312, and the third block 1313.
  • the first block 1311 may be used as a CU
  • the second block 1312 is a right boundary image block.
  • the prediction processing unit 360 uses a dividing method of dividing the right boundary image block to divide the second block 1312. No more details here.
  • FIG. 13B illustrates an exemplary image block 132 in the lower right corner.
  • the prediction processing unit 360 detects that, for the lower right corner image block 132, w_a_horizontal/w_b_horizontal is greater than 0.5 and w_a_vertical/w_b_vertical is less than 0.5, where w_a_horizontal is the side length in the horizontal direction of the pixel area 13B in the lower right corner image block 132, w_b_horizontal is the side length in the horizontal direction of the lower right corner image block 132, w_a_vertical is the side length in the vertical direction of the pixel area 13B in the lower right corner image block 132, and w_b_vertical is the side length in the vertical direction of the lower right corner image block 132.
  • the prediction processing unit 360 uses the Q_B division mode to divide the lower right corner image block 132 to obtain a first block 1321, a second block 1322, and a third block 1323.
  • the first block 1321 includes the first sub-pixel area of the pixel area 13B
  • the second block 1322 includes the second sub-pixel area of the pixel area 13B.
  • the first sub-pixel area and the second sub-pixel area constitute the pixel area 13B.
  • the third block 1323 does not include a pixel area.
  • the first block 1321 is the right boundary image block
  • the second block 1322 is the lower right image block.
  • the prediction processing unit 360 continues to divide the first block 1321 by using the above-mentioned method of dividing the right border image block, and uses the dividing method of dividing the lower right corner image block to divide the second block 1322. No more details here.
  • the prediction processing unit 360 uses the Q_B division mode to divide the lower right image block 132 to obtain the first block 1321, the second block 1322, and the third block 1323.
  • the first block 1321 may be used as a CU
  • the second block 1322 is a lower boundary image block.
  • the prediction processing unit 360 uses a dividing method of dividing the lower boundary image block to divide the second block 1322. No more details here.
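The two-ratio test that distinguishes the Q_A case (FIG. 13A-1) from the Q_B case (FIG. 13B) can be sketched as follows; the function name is illustrative, while the 0.5 threshold and the two inequality conditions follow the text:

```python
def select_corner_mode(wa_hor, wb_hor, wa_ver, wb_ver, threshold=0.5):
    """Pick a QT-derived mode for a lower-right-corner image block.

    wa_*: side lengths of the pixel area; wb_*: side lengths of the whole block.
    Q_A: pixel area narrow horizontally but tall vertically; Q_B: the reverse.
    """
    hor_ratio = wa_hor / wb_hor
    ver_ratio = wa_ver / wb_ver
    if hor_ratio < threshold and ver_ratio > threshold:
        return "Q_A"
    if hor_ratio > threshold and ver_ratio < threshold:
        return "Q_B"
    return None  # other ratio combinations are handled by different division modes
```

For example, a 16x48 pixel area inside a 64x64 corner block gives ratios 0.25 and 0.75, selecting Q_A.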
  • the embodiments illustrated in FIG. 10 to FIG. 13B are only schematic descriptions, and do not limit the technical solutions of the present application.
  • the DT division method may also include other division modes.
  • the first threshold and the second threshold may also be other values, which are not described in detail here in this application.
  • FIGS. 11A-1 to 13B illustrate the decoding side as an example to describe the embodiments of this application.
  • the embodiments illustrated in FIGS. 11A-1 to 13B are also applicable to operations on image blocks at the encoding side.
  • the prediction processing unit 360 may specifically perform the foregoing operations.
  • the related equipment can maintain multiple DT division modes, so that when dividing the boundary image block and the lower right corner image block, the division mode can be selected from the multiple DT division modes; further, the number of times the boundary image block and/or the lower right corner image block is divided until the CU is obtained is relatively small.
  • each device includes a hardware structure and/or software module corresponding to each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this application.
  • the video decoding device 1400 may include a detection module 1401 and a division module 1402.
  • the video decoding device 1400 may be used to perform the operations of the video decoder 30 in FIGS. 5A, 5B, 5D, 7A, 7B, and 11A-1 to 12 described above.
  • the detection module 1401 is configured to detect whether the ratio of the side length of the first sub-side of the current boundary image block of the current video frame to the side length of the first side is less than or equal to a first threshold, wherein the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area in the current boundary image block, both the first side and the first sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first threshold is a value greater than 0 and less than 1.
  • the dividing module 1402 is configured to: when the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, divide the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block, where the first block includes the pixel area; the dividing module 1402 is further configured to: when the area of the first block is equal to the area of the pixel area, use the first block as a coding unit and obtain the reconstruction block of the coding unit according to the coding information of the coding unit, or continue to divide the first block to obtain a coding unit and obtain the reconstruction block of the coding unit according to the coding information of the coding unit; the dividing module 1402 is further configured to: when the area of the first block is greater than the area of the pixel area, continue to divide the first block to obtain a coding unit, and obtain the reconstruction block of the coding unit according to the coding information of the coding unit.
  • the video decoding device 1400 divides the boundary image block such that the pixel area is included in the first block.
  • the first side is the side of the current boundary image block
  • the first sub-side is the side of the pixel area in the current boundary image block
  • both the first side and the first sub side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • when the video decoding device 1400 performs block division, it is not limited to the existing BT and/or QT division methods, so that in the process of dividing the boundary image block to obtain the CU, the number of divisions can be reduced and, in turn, the algorithm complexity can be reduced.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold, divide the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block, where the first block is a non-boundary image block, the second block is a boundary image block and includes a first sub-pixel area, and the first sub-pixel area is a partial area of the pixel area; the dividing module 1402 is also configured to continue to divide the second block to obtain a coding unit, and obtain the reconstruction block of the coding unit according to the coding information of the coding unit.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold, divide the first block in a direction perpendicular to the first side to obtain a first sub-block and a second sub-block, where the first sub-block is a non-boundary image block, the second sub-block includes a sub-pixel area, and the sub-pixel area is part of the pixel area.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold, perform binary tree (BT) division on the first block in a direction perpendicular to the first side.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold, perform quadtree (QT) division on the first block.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold and less than or equal to the first threshold, divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0 and less than or equal to 0.25, divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 1:3, and both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.25 and less than or equal to 0.5, divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 1:1, and both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.5 and less than or equal to 0.75, divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 3:1, and both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold and less than or equal to a third threshold, divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than or equal to 0.75 and less than 1, divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 3:1, and both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
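The interval-to-proportion rules enumerated above (a ratio in (0, 0.25] gives a 1:3 split, (0.25, 0.5] gives 1:1, and larger ratios give 3:1), combined with the take-as-CU test, can be sketched as follows; the helper name is illustrative:

```python
def divide_boundary_block(sub_len, side_len):
    """Choose the split proportion from the ratio of the first sub-side
    (pixel area) to the first side (block), then decide whether the first
    block can be taken directly as a CU.

    Returns (decision, first_block_side_length); lengths are in pixels.
    """
    ratio = sub_len / side_len
    if ratio <= 0.25:
        first = side_len // 4       # 1:3 split
    elif ratio <= 0.5:
        first = side_len // 2       # 1:1 split
    else:
        first = 3 * side_len // 4   # 3:1 split
    if first == sub_len:
        return "use_as_CU", first   # first block exactly covers the pixel area
    return "divide_further", first  # first block is larger than the pixel area
```

For a 64-pixel side, a 16-pixel pixel area lands exactly on the 1:3 split and the first block becomes a CU, while a 20-pixel pixel area falls inside a 32-pixel first block that must be divided further.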
  • the video decoding device 1400 shown in FIG. 14A may also be used to perform the operation of the decoder 30 in FIG. 7B described above.
  • the detection module 1401 is configured to detect whether the ratio of the side length of the first sub-side of the current boundary image block of the current video frame to the side length of the first side is within a preset interval, wherein the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area in the current boundary image block, and both the first side and the first sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • the dividing module 1402 is further configured to, when the ratio of the side length of the first sub-side to the side length of the first side is in the preset interval, divide the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block.
  • the dividing module 1402 is further configured to take the block that is a non-boundary block among the first block and the second block as a coding unit and obtain the reconstruction block of the coding unit according to the coding information of the coding unit, or to continue to divide the first block or the second block to obtain a coding unit and obtain the reconstruction block of the coding unit according to the coding information of the coding unit.
  • the video decoding device 1400 divides the pixel area in the boundary image block to obtain the CU.
  • the first side is the side of the current boundary image block
  • the first sub-side is the side of the pixel area in the current boundary image block
  • both the first side and the first sub side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • when the video decoding device 1400 performs block division, it is not limited to the existing BT and/or QT division methods, so that in the process of dividing the boundary image block to obtain the CU, the number of divisions can be reduced and, in turn, the algorithm complexity can be reduced.
  • optionally, the value range of the preset interval is greater than the second threshold and less than the first threshold.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0 and less than or equal to 0.25, divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 1:3, both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first block includes the pixel area.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.25 and less than or equal to 0.5, divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 1:1, both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first block includes the pixel area.
  • the division module 1402 is further configured to perform binary tree division on the first block or quadtree division on the first block in a direction perpendicular to the first side.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.5 and less than or equal to 0.75, divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 3:1, both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first block includes the pixel area.
  • the dividing module 1402 is further configured to divide the first block in a direction perpendicular to the first side to obtain a first sub-block and a second sub-block, where the side length of the second sub-side of the first sub-block and the side length of the third sub-side of the second sub-block satisfy 2:1, both the second sub-side and the third sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first sub-block is a non-boundary image block.
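The 2:1 subdivision of the first block described above can be sketched as follows. The assumption that the first block came from a 3:1 split of the original side (so the larger sub-block is exactly half the original side and is non-boundary once the pixel-area ratio exceeds 0.5) is illustrative:

```python
def subdivide_first_block(side_len):
    """Subdivide the first block of a 3:1 split (3/4 of the original side)
    into two sub-blocks whose sides satisfy 2:1, as described in the text.

    Returns (first_sub_side, second_sub_side) in pixels.
    """
    first = 3 * side_len // 4   # first block produced by the 3:1 split
    sub1 = 2 * first // 3       # 2 parts: half the original side length
    sub2 = first - sub1         # 1 part: a quarter of the original side
    return sub1, sub2
```

For a 64-pixel side this yields sub-blocks of 32 and 16 pixels; when the pixel-area ratio exceeds 0.5, the 32-pixel sub-block lies entirely inside the pixel area and is therefore a non-boundary image block.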
  • the division module 1402 is further configured to perform binary tree division on the first block or quadtree division on the first block in a direction perpendicular to the first side.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than or equal to 0.75 and less than 1, divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 3:1, both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first block is a non-boundary block.
  • the dividing module 1402 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than or equal to 0.5 and less than 1, divide the current boundary image block in a direction perpendicular to the first side to obtain the first block and the second block, where the side length of the second side of the first block and the side length of the third side of the second block satisfy 3:1, and both the second side and the third side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • the video decoding device 1400 shown in FIG. 14A may also be used to perform the operation of the decoder 30 in FIG. 7C described above.
  • the detection module 1401 is used to determine that the ratio of the side length of the first sub-side of the lower right corner image block of the current video frame to the side length of the first side is less than or equal to a preset threshold, and the first side of the lower right corner image block is The ratio of the side length of the two sub-sides to the side length of the second side is greater than the preset threshold, the first side includes the first sub-side, the second side includes the second sub-side, the The first side is perpendicular to the second side, and the first sub-side and the second sub-side are the sides of the pixel area in the lower right corner image block; the dividing module 1402 is configured to divide by using a QT-derived division mode
  • the lower right corner image block obtains a first block, a second block, and a third block, the first block includes a first sub-pixel area of the pixel area, the first block is located in the upper left corner of the lower right corner image block, and the second block includes the second sub-pixel area of the pixel area
  • the reconstruction block of the coding unit corresponding to the block; the dividing module 1402 is further configured to: when the area of the first block is equal to the area of the first sub-pixel area, use the first block as the coding unit and obtain the reconstruction block of the coding unit according to the coding information of the coding unit, or continue to divide the first block to obtain the coding unit corresponding to the first block and obtain the reconstruction block of the coding unit corresponding to the first block according to the coding information of the coding unit corresponding to the first block; the dividing module 1402 is further configured to: when the area of the first block is greater than the area of the first sub-pixel area, continue to divide the first block to obtain the coding unit corresponding to the first block, and obtain the reconstruction block of the coding unit corresponding to the first block according to the coding information of the coding unit corresponding to the first block.
  • the decoder 30 can also more efficiently divide the lower right corner image block of the video frame.
  • the preset threshold is 0.5.
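One possible geometry for the QT-derived three-way split, with the first block in the upper-left and the third block carrying no pixel area, can be sketched as follows; the exact rectangle layout of the Q_A and Q_B modes is an assumed interpretation, not taken verbatim from the figures:

```python
def qt_derived_split(w, h, mode):
    """Partition a w-by-h lower-right-corner block into three rectangles
    (x, y, width, height). The layout is an assumed reading of Q_A/Q_B.

    Q_A: pixel area narrower than w/2 but taller than h/2, so the right
    half holds no pixels. Q_B is the transposed case.
    """
    if mode == "Q_A":
        return [
            (0, 0, w // 2, h // 2),       # first block: upper-left quadrant
            (0, h // 2, w // 2, h // 2),  # second block: lower-left quadrant
            (w // 2, 0, w // 2, h),       # third block: right half, no pixel area
        ]
    # "Q_B": pixel area wider than w/2 but shorter than h/2
    return [
        (0, 0, w // 2, h // 2),       # first block: upper-left quadrant
        (w // 2, 0, w // 2, h // 2),  # second block: upper-right quadrant
        (0, h // 2, w, h // 2),       # third block: bottom half, no pixel area
    ]
```

In both modes the three rectangles tile the whole block, and with a preset threshold of 0.5 the pixel area is guaranteed to fall entirely inside the first two blocks.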
  • the functions of the detection module 1401 and the division module 1402 described in this embodiment may be integrated into the prediction processing unit 360 in the decoder 30 illustrated in FIG. 5D, for example. That is, the detection module 1401 and the division module 1402 described in this embodiment may be the prediction processing unit 360 shown in FIG. 5D in other expressions.
  • FIG. 14B shows another possible structural schematic diagram of the video decoding device 1400 involved in the foregoing embodiment.
  • the video decoding device 1410 includes a processor 1403, a transceiver 1404, and a memory 1405. As shown in FIG. 14B, the transceiver 1404 is used to transmit and receive image data with the video encoding device.
  • the memory 1405 is configured to be coupled with the processor 1403, and it stores a computer program 1406 necessary for the video decoding device 1410.
  • the transceiver 1404 is configured to receive the encoded information sent by the encoder 20.
  • the processor 1403 is configured to perform the decoding operations or functions of the video decoding device 1410.
  • the present application also provides a video encoding device 1500.
  • the video encoding device 1500 may include a detection module 1501 and a division module 1502.
  • the video encoding device 1500 may be used to perform the operations of the encoder 20 in FIGS. 5A to 5C, 8A, 8B, and 11A-1 to 12 described above.
  • the detection module 1501 is configured to detect whether the ratio of the side length of the first sub-side of the current boundary image block of the current video frame to the side length of the first side is less than or equal to a first threshold, wherein the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area, the pixel area is the pixel area in the current boundary image block, both the first side and the first sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located, and the first threshold is a value greater than 0 and less than 1.
  • the dividing module 1502 is configured to: when the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, divide the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block, where the first block includes the pixel area; the dividing module 1502 is further configured to: when the area of the first block is equal to the area of the pixel area, use the first block as a coding unit and obtain the coding information of the coding unit according to the image information of the coding unit, or continue to divide the first block to obtain a coding unit and obtain the coding information of the coding unit according to the image information of the coding unit; the dividing module 1502 is further configured to: when the area of the first block is greater than the area of the pixel area, continue to divide the first block to obtain a coding unit, and obtain the coding information of the coding unit according to the image information of the coding unit.
  • the video encoding device 1500 divides the boundary image block such that the pixel area is included in the first block.
  • the first side is the side of the current boundary image block
  • the first sub-side is the side of the pixel area in the current boundary image block
  • both the first side and the first sub side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • when the video encoding device 1500 performs block division, it is not limited to the existing BT and/or QT division methods, so that in the process of dividing the boundary image block to obtain the CU, the number of divisions can be reduced and, in turn, the algorithm complexity can be reduced.
  • the dividing module 1502 is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold, divide the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block, where the first block is a non-boundary image block, the second block is a boundary image block and includes a first sub-pixel area, and the first sub-pixel area is a partial area of the pixel area; and to continue to divide the second block to obtain a coding unit, and obtain the coding information of the coding unit according to the image information of the coding unit.
  • the video encoding device 1500 shown in FIG. 15A may also be used to perform the operation of the encoder 20 in FIG. 8B described above.
  • the detection module 1501 is configured to detect whether the ratio of the side length of the first sub-side of the current boundary image block of the current video frame to the side length of the first side is within a preset interval, wherein the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area in the current boundary image block, and both the first side and the first sub-side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • the dividing module 1502 is configured to: when the ratio of the side length of the first sub-side to the side length of the first side is in the preset interval, divide the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block; the dividing module 1502 is further configured to use the block that is a non-boundary block among the first block and the second block as a coding unit and obtain the coding information of the coding unit according to the image information of the coding unit, or to continue to divide the first block or the second block to obtain at least two coding units and obtain the coding information of the at least two coding units according to the image information of the at least two coding units.
  • the video encoding device 1500 divides the pixel area in the boundary image block to obtain the CU.
  • the first side is the side of the current boundary image block
  • the first sub-side is the side of the pixel area in the current boundary image block
  • both the first side and the first sub side are perpendicular to the boundary of the current video frame where the current boundary image block is located.
  • when the video encoding device 1500 performs block division, it is not limited to the existing BT and/or QT division methods, so that in the process of dividing the boundary image block to obtain the CU, the number of divisions can be reduced and, in turn, the algorithm complexity can be reduced.
  • the video encoding device 1500 shown in FIG. 15A may also be used to perform the operation of the encoder 20 in FIG. 8C.
  • the detection module 1501 is further configured to determine that the ratio of the side length of the first sub-side of the lower right corner image block of the current video frame to the side length of the first side is less than or equal to a preset threshold, and that the ratio of the side length of the second sub-side of the lower right corner image block to the side length of the second side is greater than the preset threshold, where the first side includes the first sub-side, the second side includes the second sub-side, the first side is perpendicular to the second side, the first sub-side and the second sub-side are sides of a pixel area, and the pixel area is the pixel area in the lower right corner image block; the dividing module 1502 is configured to divide the lower right corner image block using a QT-derived division mode to obtain a first block, a second block, and a third block, where the first block includes the first sub-pixel area of the pixel area, the first block is located in the upper left corner of the lower right corner image block, and the second block includes the second sub-pixel area of the pixel area
  • the encoder 20 can also more efficiently divide the lower right corner image block of the video frame.
  • the preset threshold is 0.5.
  • the functions of the detection module 1501 and the division module 1502 described in this embodiment may be integrated into the prediction processing unit 210 in the encoder 20 illustrated in FIG. 5C, for example. That is, the detection module 1501 and the division module 1502 described in this embodiment may be the prediction processing unit 210 shown in FIG. 5C in other expressions.
  • FIG. 15B shows another possible structural schematic diagram of the video encoding device 1500 involved in the foregoing embodiment.
  • the video encoding device 1510 includes a processor 1503, a transceiver 1504, and a memory 1505.
  • the memory 1505 is configured to be coupled with the processor 1503, and it stores a computer program 1506 necessary for the video encoding device 1510.
  • the transceiver 1504 is configured to send encoded information to the decoder 30.
  • the processor 1503 is configured to perform the encoding operations or functions of the video encoding device 1510.
  • the present application also provides a computer storage medium; the computer storage medium provided in any of the foregoing devices can store a program, and when the program is executed, some or all of the steps of the video encoding method and video decoding method embodiments provided in FIGS. 7A to 13B may be implemented.
  • the storage medium in any device can be a magnetic disk, an optical disc, a read-only memory (read-only memory, ROM), or a random access memory (random access memory, RAM), etc.
  • the processor may be a central processing unit (CPU), a network processor (NP), or a combination of CPU and NP.
  • the processor may further include a hardware chip.
  • the aforementioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof.
  • the aforementioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (generic array logic, GAL), or any combination thereof.
  • the memory may include volatile memory (volatile memory), such as random-access memory (RAM); the memory may also include non-volatile memory (non-volatile memory), such as read-only memory (ROM).
  • the various illustrative logic units and circuits described in this application can be implemented by a general-purpose processor, a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to implement or perform the described functions.
  • the general-purpose processor may be a microprocessor, and optionally, the general-purpose processor may also be any traditional processor, controller, microcontroller, or state machine.
  • the processor can also be implemented by a combination of computing devices, such as a digital signal processor and a microprocessor, multiple microprocessors, one or more microprocessors combined with a digital signal processor core, or any other similar configuration.
  • the steps of the method or algorithm described in this application can be directly embedded in hardware, a software unit executed by a processor, or a combination of the two.
  • the software unit can be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM or any other storage medium in the field.
  • the storage medium may be connected to the processor, so that the processor can read information from the storage medium and write information to the storage medium.
  • the storage medium may also be integrated into the processor.
  • the processor and the storage medium may be set in the ASIC, and the ASIC may be set in the UE.
  • the processor and the storage medium may also be provided in different components in the UE.
  • the size of the sequence number of each process does not mean the order of execution.
  • the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of this application.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, from a website, computer, server, or data center to another website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).


Abstract

The present application discloses a video encoding method, a video decoding method, and related devices. The video decoding method includes: detecting whether the ratio of the side length of a first sub-side of a current boundary image block of a current video frame to the side length of a first side is less than or equal to a first threshold; and, when the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block, the first block containing the pixel area. With the technical solution of the present application, the pixel area of a boundary image block can be divided into a single block according to the relationship between the side length of the pixel area in the boundary image block and the side length of the boundary image block, which reduces the number of divisions required in the process of partitioning the boundary image block into CUs and thus lowers the algorithmic complexity of the division.

Description

Video encoding method, video decoding method, and related devices
This application claims priority to Chinese Patent Application No. 201910254106.7, filed with the China National Intellectual Property Administration on March 30, 2019 and entitled "Video encoding method, video decoding method, and related devices", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the multimedia field, and in particular, to a video encoding method, a video decoding method, and related devices.
Background
Video encoding and decoding are common operations for processing video data. Encoding and decoding are typically performed in units of coding units (coding unit, CU), where a CU is obtained by partitioning an image in the video into blocks.
Specifically, before a frame of image is encoded or decoded, it may be divided, according to a video coding standard, into multiple consecutive, non-overlapping largest coding units (largest coding unit, LCU). A video coding standard may, for example, specify that an LCU is a 128*128 pixel area. Because the total number of pixels of the frame in the horizontal and/or vertical direction may not be an integer multiple of 128, each image block in the last row of LCUs and/or the rightmost column of LCUs (also called boundary image blocks in the art) contains both a pixel area and a blank area. As shown in FIG. 1, the shaded part of a boundary image block indicates the pixel area, and the unshaded part indicates the blank area. Therefore, boundary image blocks need to be further divided to obtain CUs.
Existing image block division methods include the quadtree division method and the binary tree division method. However, whether the quadtree division method, the binary tree division method, or a combination of the two is used, a relatively large number of divisions is needed in the process of partitioning a boundary image block into CUs, resulting in high algorithmic complexity of the division.
Summary
The present application provides a video encoding method, a video decoding method, and related devices, which can solve the problem of the large number of divisions required in the process of partitioning a boundary image block into CUs.
In a first aspect, the present application provides a video decoding method, including: detecting whether the ratio of the side length of a first sub-side of a current boundary image block of a current video frame to the side length of a first side is less than or equal to a first threshold, where the first side is a side of the current boundary image block, the first sub-side is a side of a pixel area within the current boundary image block, both the first side and the first sub-side are perpendicular to the boundary of the current video frame on which the current boundary image block is located, and the first threshold is a value greater than 0 and less than 1; when the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, dividing the current boundary image block in a direction perpendicular to the first side to obtain a first block and a second block, the first block containing the pixel area; when the area of the first block is equal to the area of the pixel area, taking the first block as a coding unit and obtaining a reconstructed block of the coding unit according to the coded information of the coding unit, or further dividing the first block to obtain at least two coding units and obtaining reconstructed blocks of the at least two coding units according to their coded information; or, when the area of the first block is greater than the area of the pixel area, further dividing the first block to obtain a coding unit and obtaining a reconstructed block of the coding unit according to the coded information of the coding unit.
According to the video decoding method and related devices illustrated in this application, when the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, the current boundary image block is divided in a direction perpendicular to the first side to obtain a first block and a second block, the first block containing the pixel area. Here, the first side is a side of the current boundary image block, the first sub-side is a side of the pixel area within the current boundary image block, and both the first side and the first sub-side are perpendicular to the boundary of the current video frame on which the current boundary image block is located. It can be seen that with this implementation, the pixel area of a boundary image block is divided into a single block according to the relationship between the side length of the pixel area and the side length of the boundary image block, which reduces the number of divisions required in the process of partitioning the boundary image block into CUs and thus lowers the algorithmic complexity of the division.
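The decision procedure of the first aspect above can be sketched as follows. All names here are hypothetical, and the cut position is delegated to an assumed `split_point` helper (e.g. the midpoint, as in BT); the sketch only records the side lengths, along the first side, of the CUs that end up covering the pixel area.

```python
# Hedged sketch: compare the pixel region's side length to the block's side
# length; if the ratio is <= the first threshold, the first sub-block after the
# cut contains the whole pixel region, otherwise the first sub-block is a
# non-boundary block and the remainder keeps a sub-pixel region.
def partition_boundary(block_len: int, pixel_len: int, first_thresh: float,
                       split_point) -> list[int]:
    """Return side lengths (along the first side) of CUs covering the pixel
    region. `split_point(block_len)` yields where to cut (assumed helper)."""
    if block_len == pixel_len:          # block fully covered -> it is a CU
        return [block_len]
    cut = split_point(block_len)
    ratio = pixel_len / block_len
    if ratio <= first_thresh:
        # first block (length `cut`) contains the whole pixel region
        if cut == pixel_len:
            return [cut]
        return partition_boundary(cut, pixel_len, first_thresh, split_point)
    # ratio > threshold: first block is a non-boundary block; the second
    # block keeps the remaining sub-pixel region and is divided further
    return [cut] + partition_boundary(block_len - cut, pixel_len - cut,
                                      first_thresh, split_point)
```

With a midpoint cut and a first threshold of 0.5, a 128-wide boundary block with a 96-wide pixel region yields CU widths 64 and 32 after two divisions.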
一种可能的实现方式中,还包括:当所述第一子边的边长与所述第一边的边长之比大于所述第一阈值时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块,所述第一分块是非边界图像块,所述第二分块为边界图像块并包括子像素区域,所述子像素区域是所述像素区域的部分区域;继续划分所述第二分块,以得到编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块。
可见,当第一子边的边长与第一边的边长之比小于或者等于第一阈值时,解码器将边界图像块中的像素区域划分到第一分块中。当第一子边的边长与第一边的边长之比大于第一阈值时,解码器可以以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块。其中,第一边是当前边界图像块的边,第一子边是当前边界图像块内像素区域的边,第一边和第一子边均垂直于当前边界图像块所在的当前视频帧的边界。采用本实现方式,执行分块时,不受限于现有的BT和/或QT的划分方法,从而在划分边界图像块至得到CU的过程中,能够减少划分次数,进而,能够降低划分的算法复杂度。
一种可能的实现方式中,当所述第一分块的面积大于所述像素区域的面积时,继续划分所述第一分块包括:当所述第一子边的边长与所述第一边的边长之比大于第二阈值时,以垂直于所述第一边的方向划分所述第一分块得到第一子分块和第二子分块,所述第一子分块是非边界图像块,所述第二子分块包括子像素区域,所述子像素区域是所述像素区域的部分区域。
本申请技术方案在划分边界图像块时,解码器能够根据边界像素区域的边长与该像素区域所在边界图像块的边长的关系划分边界图像块,从而使得在划分LCU至得到CU的过程中,划分次数相对较少,进而,能够降低划分算法的复杂度。本实施例中所述的边长是像素区域和边界图像块的边中垂直于当前边界图像块所在的当前视频帧的边界的边的长度。
一种可能的实现方式中,当所述第一分块的面积大于所述像素区域的面积时,继续划分所述第一分块包括:当所述第一子边的边长与所述第一边的边长之比大于第二阈值时,以垂直于所述第一边的方向上对所述第一分块执行二叉树BT划分,或者对所述第一分块执行四叉树QT划分。
可见,本申请技术方案中,相关设备可以维护多种DT划分模式,从而能够在划分边界图像块和右下角图像块时,从多种DT划分模式选择划分模式,进而,使得在划分边界图像块和/或右下角图像块直到得到CU的过程中,划分次数相对较少。
一种可能的实现方式中,当所述第一子边的边长与所述第一边的边长之比小于或者等 于所述第一阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到第一分块和第二分块包括:当所述第一子边的边长与所述第一边的边长之比大于第二阈值并且所述第一子边的边长与所述第一边的边长之比小于或者等于所述第一阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块。
一种可能的实现方式中,所述第一阈值为0.25,所述第二阈值为零,当所述第一子边的边长与所述第一边的边长之比大于第二阈值并且所述第一子边的边长与所述第一边的边长之比小于或者等于所述第一阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块包括:当所述第一子边的边长与所述第一边的边长之比大于零并且所述第一子边的边长与所述第一边的边长之比小于或者等于0.25时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足1比3,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界。
一种可能的实现方式中,所述第一阈值为0.5,所述第二阈值为0.25,当所述第一子边的边长与所述第一边的边长之比大于第二阈值并且所述第一子边的边长与所述第一边的边长之比小于或者等于所述第一阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块包括:当所述第一子边的边长与所述第一边的边长之比大于0.25并且所述第一子边的边长与所述第一边的边长之比小于或者等于0.5时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足1比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界。
一种可能的实现方式中,所述第一阈值为0.75,所述第二阈值为0.5,当所述第一子边的边长与所述第一边的边长之比大于第二阈值并且所述第一子边的边长与所述第一边的边长之比小于或者等于所述第一阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块包括:当所述第一子边的边长与所述第一边的边长之比大于0.5并且所述第一子边的边长与所述第一边的边长之比小于或者等于0.75时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足3比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界。
一种可能的实现方式中,当所述第一子边的边长与所述第一边的边长之比大于所述第一阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到第一分块和第二分块包括:当所述第一子边的边长与所述第一边的边长之比大于所述第一阈值并且所述第一子边的边长与所述第一边的边长之比小于或者等于第三阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块。
一种可能的实现方式中,所述第一阈值为0.75,所述第三阈值为1,当所述第一子边的边长与所述第一边的边长之比大于所述第一阈值并且所述第一子边的边长与所述第一边的边长之比小于或者等于第三阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块包括:当所述第一子边的边长与所述第一边的边长之比大于或者等于0.75并且所述第一子边的边长与所述第一边的边长之比小于1时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分 块,所述第一分块第二边的边长与所述第二分块第三边的边长满足3比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界。
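The threshold intervals and split ratios enumerated in the implementations above can be summarized in one table-like helper (the function name is an assumption): the ratio r of the first sub-side's length to the first side's length selects how the boundary image block is cut perpendicular to the first side (first block : second block).

```python
# Ratio intervals from the text: (0, 0.25] -> 1:3, (0.25, 0.5] -> 1:1,
# (0.5, 0.75] -> 3:1, (0.75, 1) -> 3:1.
def dt_split_ratio(r: float) -> tuple[int, int]:
    if not 0.0 < r < 1.0:
        raise ValueError("ratio must lie strictly between 0 and 1")
    if r <= 0.25:
        return (1, 3)   # first block (1 part) keeps the pixel region
    if r <= 0.5:
        return (1, 1)
    return (3, 1)       # 0.5 < r < 1
```

For a 128-wide block with r = 0.2, the 1:3 mode cuts it into a 32-wide first block containing the pixel region and a 96-wide blank second block.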
采用本实现方式,解码器可以维护多种DT划分模式,从而能够在划分边界图像块和右下角图像块时,从多种DT划分模式选择划分模式,进而,使得在划分边界图像块和/或右下角图像块直到得到CU的过程中,划分次数相对较少。
第二方面,本申请提供了一种视频解码方法,所述方法包括:检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否位于预设区间,其中,所述第一边是所述当前边界图像块的边,所述第一子边是所述当前边界图像块内像素区域的边,所述第一边和所述第一子边均垂直于所述当前边界图像块所在的所述当前视频帧的边界;当所述第一子边的边长与所述第一边的边长之比位于所述预设区间时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块;将所述第一分块和第二分块中为非边界块的分块作为编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块,或者继续划分所述第一分块或所述第二分块,以得到编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块。
其中,一些实施例中,第一分块中可以包含当前边界图像块中的全部像素区域,而第二分块中不包含任何像素区域。该场景下,解码设备可以按照第一方面所述的方式执行后续操作。在另一些实施例中,第一分块可以是非边界图像块,而第二分块是边界图像块。第二分块中包含的像素区域是当前边界图像块像素区域的一部分。该场景下,解码设备可以将第一分块作为编码单元,并根据编码单元的编码信息得到编码单元的重建块,或者继续划分第一分块,得到至少两个编码单元,并根据该至少两个编码单元的编码信息得到该至少两个编码单元的重建块。解码设备可以继续划分第二分块,以得到编码单元。采用本实现方式,当第一子边的边长与第一边的边长之比在预设区间时,视频解码设备划分边界图像块中的像素区域得到编码单元。其中,第一边是当前边界图像块的边,第一子边是当前边界图像块内像素区域的边,第一边和第一子边均垂直于当前边界图像块所在的当前视频帧的边界。这样视频解码设备执行分块时,不受限于现有的BT和/或QT的划分方法,从而在划分边界图像块至得到编码单元的过程中,能够减少划分次数,进而,能够降低划分的算法复杂度。
一种可能的实现方式中,所述预设区间的数值范围为大于第二阈值并且小于第一阈值。
一种可能的实现方式中,所述第一阈值为0.25,所述第二阈值为零,所述当所述第一子边的边长与所述第一边的边长之比位于所述预设区间时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块包括:当所述第一子边的边长与所述第一边的边长之比大于零并且所述第一子边的边长与所述第一边的边长之比小于0.25时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足1比3,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一分块包括所述像素区域。
一种可能的实现方式中,所述第一阈值为0.5,所述第二阈值为0.25,所述当所述第一子边的边长与所述第一边的边长之比位于所述预设区间时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块包括:当所述第一子边的边长与所述第 一边的边长之比大于0.25并且所述第一子边的边长与所述第一边的边长之比小于0.5时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足1比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一分块包括所述像素区域。
一种可能的实现方式中,所述继续划分所述第一分块或所述第二分块包括:以垂直于所述第一边的方向对所述第一分块执行二叉树划分或者对所述第一分块执行四叉树划分。
一种可能的实现方式中,所述第一阈值为0.75,所述第二阈值为0.5,所述当所述第一子边的边长与所述第一边的边长之比位于所述预设区间时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块包括:当所述第一子边的边长与所述第一边的边长之比大于0.5并且所述第一子边的边长与所述第一边的边长之比小于0.75时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足3比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一分块包括所述像素区域。
一种可能的实现方式中,所述继续划分所述第一分块或所述第二分块包括:以垂直于所述第一边的方向对所述第一分块进行划分,得到第一子分块和第二子分块,所述第一子分块第二子边的边长与所述第二子分块第三子边的边长满足2比1,所述第二子边和所述第三子边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一子分块为非边界图像块。
一种可能的实现方式中,所述继续划分所述第一分块或所述第二分块包括:以垂直于所述第一边的方向对所述第一分块执行二叉树划分或者对所述第一分块执行四叉树划分。
一种可能的实现方式中,所述第一阈值为1,所述第二阈值为0.75,所述当所述第一子边的边长与所述第一边的边长之比位于所述预设区间时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块包括:当所述第一子边的边长与所述第一边的边长之比大于0.75并且所述第一子边的边长与所述第一边的边长之比小于1时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足3比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一分块为非边界块。
一种可能的实现方式中,所述第一阈值为1,所述第二阈值为0.5,所述当所述第一子边的边长与所述第一边的边长之比位于所述预设区间时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块包括:当所述第一子边的边长与所述第一边的边长之比大于0.5并且所述第一子边的边长与所述第一边的边长之比小于1时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足3比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界。
第三方面,本申请提供了一种视频解码方法,所述方法包括:确定当前视频帧的右下角图像块的第一子边的边长与第一边的边长之比小于或者等于预设阈值,且所述右下角图 像块的第二子边的边长与第二边的边长之比大于所述预设阈值,所述第一边包含所述第一子边,所述第二边包含所述第二子边,所述第一边垂直于所述第二边,所述第一子边和所述第二子边是所述右下角图像块中像素区域的边;采用QT衍生的划分模式划分所述右下角图像块得到第一分块、第二分块和第三分块,所述第一分块包含所述像素区域的第一子像素区域,所述第一分块位于所述右下角图像块的左上角,所述第二分块包含所述像素区域的第二子像素区域,所述第一分块的面积和所述第二分块的面积均是所述右下角图像块面积的四分之一,所述第三分块的面积是所述边界图像块面积的二分之一,所述第一子像素区域和所述第二子像素区域构成了所述像素区域;继续划分所述第二分块,以得到所述第二分块对应的编码单元,并根据所述第二分块对应的编码单元的编码信息得到所述第二分块对应的编码单元的重建块;当所述第一分块的面积等于所述第一子像素区域的面积时,将所述第一分块作为编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块,或者继续划分所述第一分块,以得到所述第一分块对应的编码单元,并根据所述第一分块对应的编码单元的编码信息得到所述第一分块对应的编码单元的重建块;或者,当所述第一分块的面积大于所述第一子像素区域的面积时,继续划分所述第一分块,以得到所述第一分块对应的编码单元,并根据所述第一分块对应的编码单元的编码信息得到所述第一分块对应的编码单元的重建块。
采用本实现方式,视频解码设备能够根据右下角图像块中像素区域边长与图像块边长的关系,根据DT划分方法、BT划分方法或者QT划分方法划分相应图像块,从而能够减少划分边界图像块至得到CU的过程中的划分次数,进而,能够降低划分的算法复杂度。
一种可能的实现方式中,所述预设阈值是0.5。
一种可能的实现方式中,当所述第一分块的面积大于所述第一子像素区域的面积时,继续划分所述第一分块包括:检测所述第一分块第三子边的边长与第三边的边长之比是否小于或者等于第一阈值,所述第三子边是所述第一子像素区域的边,所述第三边和所述第三子边垂直于所述第一分块对应的所述当前视频帧的边界;当所述第三子边的边长与所述第三边的边长之比小于或者等于所述第一阈值时,从垂直于所述第一边的方向上划分所述第一分块得到第一子分块和第二子分块,所述第一子分块包含所述第一子像素区域;当所述第一子块的面积等于所述第一子像素区域的面积时,将所述第一子块作为编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块,或者继续划分所述第一子块,以得到编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块;或者,当所述第一子块的面积大于所述第一子像素区域的面积时,继续划分所述第一子块,以得到编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块。
第四方面,本申请还提供了一种视频编码方法,所述方法包括:检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否小于或者等于第一阈值,其中,所述第一边是所述当前边界图像块的边,所述第一子边像素区域的边,所述像素区域是所述当前边界图像块内的像素区域,所述第一边和所述第一子边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一阈值是大于0且小于1的数值;当所述第一子边的边长与所述第一边的边长之比小于或者等于所述第一阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到第一分块和第二分块,所述第一分块包含所述像素区域;当所述第一分块的面积等于所述像素区域的面积时,将所述第一分块作为编码单元, 并根据所述编码单元的图像信息得到所述编码单元的编码信息,或者继续划分所述第一分块,以得到编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息;或者,当所述第一分块的面积大于所述像素区域的面积时,继续划分所述第一分块,以得到编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息。
其中,本申请示意的视频编码方法及相关设备,在第一子边的边长与第一边的边长之比小于或者等于第一阈值时,以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块,第一分块包含所述像素区域。其中,第一边是当前边界图像块的边,第一子边是当前边界图像块内像素区域的边,第一边和第一子边均垂直于当前边界图像块所在的当前视频帧的边界。可见,采用本实现方式,根据边界图像块中像素区域的边长与该边界图像块的边长的关系,将边界图像块中的像素区域划分到一个分块中,从而能够减少划分边界图像块至得到CU的过程中的划分次数,进而,能够降低划分的算法复杂度。
一种可能的实现方式中,还包括:当所述第一子边的边长与所述第一边的边长之比大于所述第一阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到第一分块和第二分块,所述第一分块是非边界图像块,所述第二分块为边界图像块并包括第一子像素区域,所述第一子像素区域是所述像素区域的部分区域;继续划分所述第二分块,以得到编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息。
采用本实现方式,编码器可以维护多种DT划分模式,从而能够在划分边界图像块和右下角图像块时,从多种DT划分模式选择划分模式,进而,使得在划分边界图像块和/或右下角图像块直到得到CU的过程中,划分次数相对较少。
第五方面,本申请还提供了一种视频编码方法,所述方法包括:检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否位于预设区间,其中,所述第一边是所述当前边界图像块的边,所述第一子边是所述当前边界图像块内像素区域的边,所述第一边和所述第一子边均垂直于所述当前边界图像块所在的所述当前视频帧的边界;当所述第一子边的边长与所述第一边的边长之比位于所述预设区间时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块;将所述第一分块和第二分块中为非边界块的分块作为编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息,或者继续划分所述第一分块或所述第二分块,以得到编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息。
第六方面,本申请还提供了一种视频编码方法,所述方法包括:确定当前视频帧的右下角图像块的第一子边的边长与第一边的边长之比小于或者等于预设阈值,且所述右下角图像块的第二子边的边长与第二边的边长之比大于所述预设阈值,所述第一边包含所述第一子边,所述第二边包含所述第二子边,所述第一边垂直于所述第二边,所述第一子边和所述第二子边是像素区域的边,所述像素区域是所述右下角图像块中的像素区域;采用QT衍生的划分模式划分所述右下角图像块得到第一分块、第二分块和第三分块,所述第一分块包含所述像素区域的第一子像素区域,所述第一分块位于所述右下角图像块的左上角,所述第二分块包含所述像素区域的第二子像素区域,所述第一分块的面积和所述第二分块的面积均是所述右下角图像块面积的四分之一,所述第三分块的面积是所述边界图像块面积的二分之一,所述第一子像素区域和所述第二子像素区域构成了所述像素区域;继续划分所述第二分块,以得到所述第二分块对应的编码单元,并根据所述第二分块对应的编码 单元的图像信息得到所述第二分块对应的编码单元的编码信息;当所述第一分块的面积等于所述第一子像素区域的面积时,将所述第一分块作为编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息,或者继续划分所述第一分块,以得到所述第一分块对应的编码单元,并根据所述第一分块对应的编码单元的图像信息得到所述第一分块对应的编码单元的编码信息;或者,当所述第一分块的面积大于所述第一子像素区域的面积时,继续划分所述第一分块,以得到所述第一分块对应的编码单元,并根据所述第一分块对应的编码单元的图像信息得到所述第一分块对应的编码单元的编码信息。
采用本实现方式,视频编码设备能够根据右下角图像块中像素区域边长与图像块边长的关系,根据DT划分方法、BT划分方法或者QT划分方法划分相应图像块,从而能够减少划分边界图像块至得到CU的过程中的划分次数,进而,能够降低划分的算法复杂度。
一种可能的实现方式中,所述预设阈值是0.5。
一种可能的实现方式中,当所述第一分块的面积大于所述第一子像素区域的面积时,继续划分所述第一分块包括:检测所述第一分块第三子边的边长与第三边的边长之比是否小于或者等于第一阈值,所述第三子边是所述第一子像素区域的边,所述第三边和所述第三子边垂直于所述第一分块对应的所述当前视频帧的边界;当所述第三子边的边长与所述第三边的边长之比小于或者等于所述第一阈值时,从垂直于所述第一边的方向上划分所述第一分块得到第一子分块和第二子分块,所述第一子分块包含所述第一子像素区域;当所述第一子块的面积等于所述第一子像素区域的面积时,将所述第一子块作为编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息,或者继续划分所述第一子块,以得到编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息;或者,当所述第一子块的面积大于所述第一子像素区域的面积时,继续划分所述第一子块,以得到编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息。
第七方面,本申请提供了一种视频解码设备,该视频解码设备具有实现上述方法中视频解码设备行为的功能。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。在一个可能的设计中,上述视频解码设备的结构中包括处理器和收发器,所述收发器被配置为与视频编码设备进行图像数据的接收和发送,所述处理器被配置为处理该视频解码设备执行上述方法中相应的功能。所述视频解码设备还可以包括存储器,所述存储器用于与处理器耦合,其保存该视频解码设备必要的程序指令和数据。
第八方面,本申请提供了一种视频编码设备,该视频编码设备具有实现上述方法中视频编码设备行为的功能。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。在一个可能的设计中,上述视频编码设备的结构中包括处理器和收发器,所述收发器被配置为与视频解码设备进行图像数据的接收和发送,所述处理器被配置为处理该视频编码设备执行上述方法中相应的功能。所述视频编码设备还可以包括存储器,所述存储器用于与处理器耦合,其保存该视频编码设备必要的程序指令和数据。
第九方面,本申请还提供了一种芯片,所述芯片包括处理器和接口,所述接口与所述处理器耦合,所述接口用于与所述芯片之外的其它模块进行通信,所述处理器用于执行计算机程序或指令,以实现第一方面、第二方面、第三方面、第一方面任意可能的设计中、 第二方面任意可能的设计中以及第三方面任意可能的设计中的视频解码方法。
第十方面,本申请还提供了一种芯片,所述芯片包括处理器和接口,所述接口与所述处理器耦合,所述接口用于与所述芯片之外的其它模块进行通信,所述处理器用于执行计算机程序或指令,以实现第三方面、第四方面、第五方面、第三方面任意可能的设计中以及第四方面任意可能的设计中视频编码的方法。
第十一方面,本申请提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行第一方面、第二方面、第三方面、第四方面、第五方面、第六方面、第一方面任意可能的设计中、第二方面任意可能的设计中、第三方面任意可能的设计中、第四方面任意可能的设计中以及第六方面任意可能的设计中的方法。
为解决现有划分方法产生的问题,本申请技术方案可以检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否小于或者等于第一阈值。当第一子边的边长与第一边的边长之比小于或者等于第一阈值时,以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块,第一分块包含所述像素区域。进一步的,当第一分块的面积等于像素区域的面积时,将第一分块作为编码单元,或者继续划分第一分块,以得到编码单元。当第一分块的面积大于像素区域的面积时,说明第一分块依然是边界图像块,继续划分第一分块,以得到编码单元。其中,第一边是当前边界图像块的边,第一子边是当前边界图像块内像素区域的边,第一边和第一子边均垂直于当前边界图像块所在的当前视频帧的边界。可见,本申请技术方案,能够根据边界图像块中像素区域的边长与该边界图像块的边长的关系,将边界图像块中的像素区域划分到一个分块中,从而能够减少划分边界图像块至得到CU的过程中的划分次数,进而,能够降低划分的算法复杂度。
Brief Description of the Drawings
To describe the technical solutions of the present application more clearly, the following briefly introduces the accompanying drawings used in the embodiments. Apparently, a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic diagram of an LCU provided in this application;
FIG. 2 is a schematic diagram of division corresponding to the QT division method provided in this application;
FIG. 3A is a schematic diagram of division in one implementation of the BT division method provided in this application;
FIG. 3B is a schematic diagram of division in another implementation of the BT division method provided in this application;
FIG. 4A is an exemplary schematic diagram of dividing a boundary image block using the QT division method provided in this application;
FIG. 4B is an exemplary schematic diagram of dividing a boundary image block using the BT division method provided in this application;
FIG. 5A is an exemplary structural diagram of a video encoding and decoding system 10 for implementing the encoding method and decoding method of this application;
FIG. 5B is an exemplary structural diagram of a video coding system 40 for implementing the encoding method and decoding method of this application;
FIG. 5C is an exemplary structural diagram of an encoder 20 for implementing the encoding method of this application;
FIG. 5D is an exemplary structural diagram of a decoder 30 for implementing the decoding method of this application;
FIG. 6A is a first exemplary schematic diagram of a boundary pixel block provided in this application;
FIG. 6B is a second exemplary schematic diagram of a boundary pixel block provided in this application;
FIG. 6C is a schematic diagram of a lower right corner pixel block provided in this application;
FIG. 7A is an exemplary flowchart of a video decoding method 100 provided in this application;
FIG. 7B is an exemplary flowchart of a video decoding method 200 provided in this application;
FIG. 7C is an exemplary flowchart of a video decoding method 300 provided in this application;
FIG. 8A is an exemplary flowchart of a video encoding method 400 provided in this application;
FIG. 8B is an exemplary flowchart of a video encoding method 500 provided in this application;
FIG. 8C is an exemplary flowchart of a video encoding method 600 provided in this application;
FIG. 9 is an exemplary block diagram of division modes provided in this application;
FIG. 10 is a schematic diagram of exemplary division modes of the DT division method provided in this application;
FIG. 11A-1 is a schematic diagram of a boundary image block 111 provided in this application;
FIG. 11A-2 is a schematic diagram of a boundary image block 1111 provided in this application;
FIG. 11B is a schematic diagram of a boundary image block 112 provided in this application;
FIG. 11C is a schematic diagram of a boundary image block 113 provided in this application;
FIG. 11D is a schematic diagram of a boundary image block 114 provided in this application;
FIG. 12 is a schematic diagram of a boundary image block 121 provided in this application;
FIG. 13A-1 is a schematic diagram of a first implementation of a lower right corner image block 131 provided in this application;
FIG. 13A-2 is a schematic diagram of a second implementation of the lower right corner image block 131 provided in this application;
FIG. 13B is a schematic diagram of a lower right corner image block 132 provided in this application;
FIG. 14A is a structural schematic diagram of a video decoding apparatus 1400 provided in this application;
FIG. 14B is a structural schematic diagram of a video decoding apparatus 1410 provided in this application;
FIG. 15A is a structural schematic diagram of a video encoding apparatus 1500 provided in this application;
FIG. 15B is a structural schematic diagram of a video encoding apparatus 1510 provided in this application.
Detailed Description of Embodiments
The technical solutions in this application are described clearly below with reference to the accompanying drawings in this application.
本申请以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一个”、“一种”、“所述”、“上述”、“该”和“这一”旨在也包括复数表达形式,除非其上下文中明确地有相反指示。还应当理解,尽管在以下实施例中可能采用术语第一、第二等来描述某一类对象,但所述对象不应限于这些术语。这些术语仅用来将该类对象的具体对象进行区分。例如,以下实施例中可能采用术语第一、第二等来描述图像块的分块(以下简称分块),但分块不应限于这些术语。这些术语仅用来将多个不同的分块进行区分。以下实施例中可能采用术语第一、第二等来描述的其他类对象同理,此处不再赘述。另外,在以下实施例的描述中,“多个”是指两个或两个以上。
在对本申请的技术方案进行说明之前,首先结合附图对本申请的技术场景进行说明。
视频可以理解为按照一定顺序和帧速率播放的若干帧图像(本领域也可以描述为图像)。视频数据中包含大量的空间冗余,时间冗余,视觉冗余,信息熵冗余、结构冗余、知识冗余和重要性冗余等冗余信息。视频编码实质是对视频中每帧图像执行编 码操作,得到每帧图像的编码信息的过程。视频编码在源侧执行。视频解码是根据每帧图像的编码信息重构每帧图像的过程。视频解码在目的地侧执行。编码部分和解码部分的组合也称为编解码(编码和解码)。
视频编解码可根据视频编解码标准(例如,高效率视频编解码H.265标准)而操作,且可遵照高效视频编解码标准(high efficiency video coding standard,HEVC)测试模型。或者,视频编解码可根据其它专属或行业标准而操作,所述标准包含ITU-TH.261、ISO/IECMPEG-1Visual、ITU-TH.262或ISO/IECMPEG-2Visual、ITU-TH.263、ISO/IECMPEG-4Visual,ITU-TH.264(还称为ISO/IECMPEG-4AVC),包含可分级视频编解码及多视图视频编解码扩展。应理解,本申请的技术不限于任何特定编解码标准或技术。
编码和解码均以编码单元(coding unit,CU)为单位。其中,编码可以是将图像划分得到CU,然后对CU中的像素数据进行编码,得到该CU的编码信息。解码可以是将图像划分得到CU,然后根据CU对应的编码信息重建该CU,得到该CU的重建块。以下对CU相关的技术进行描述。
相关设备执行编解码时,可以将图像划分成编码树型块的栅格。在一些例子中,编码树型块可被称作“树型块”、“最大编码单元”(largest coding unit,LCU)或“编码树型单元”。编码树型块还可被继续划分为多个CU,每个CU也可以被继续划分为更小的CU。例如,视频编码器可对编码树型块相关联的像素区域递归地执行四叉树(quadtree,QT)划分,或者,二叉树(binary tree,BT)划分。可以理解的是,QT划分和BT划分是对任意图像块的划分方法,使用QT划分方法和BT划分方法不仅限于划分CU。以下结合附图对QT划分方法和BT划分方法进行介绍。
如图2所示,图2示意的实线方块01可以视为图像块01。以信源编码标准(audio video coding standard,AVS)为例,四叉树划分即一次将图像块01划分为四个大小相同的分块。其中,大小相同是指长和宽均相同,且均为划分前的一半。该四个分块如图2示意的分块011,分块012,分块013和分块014。
以AVS为例,二叉树划分方法即一次将一个图像块划分为两个大小相同的分块。一种可能的实现方式中,如图3A所示,视频编码器可以一次将图像块02水平划分为上、下两个大小相同的分块。本实施例中,两个分块例如是图3A示意的分块021和分块022。另一种可能的实现方式中,如图3B所示,视频编码器可以一次将图像块02垂直划分为左、右两个大小相同的分块。本实施例中,两个分块例如是图3B示意的分块023和分块024。
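The QT and BT divisions described above can be sketched as toy functions on a (width, height) block; the tuple representation of a block is an illustrative assumption.

```python
# QT: one division yields four equal quarters (half width, half height).
def qt_split(w: int, h: int) -> list[tuple[int, int]]:
    return [(w // 2, h // 2)] * 4


# BT: one division yields two equal halves, either top/bottom or left/right.
def bt_split(w: int, h: int, horizontal: bool) -> list[tuple[int, int]]:
    if horizontal:                      # top and bottom halves
        return [(w, h // 2)] * 2
    return [(w // 2, h)] * 2            # left and right halves
```

For a 128*128 block, QT yields four 64*64 blocks, horizontal BT yields two 128*64 blocks, and vertical BT yields two 64*128 blocks.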
进一步的,图4A示意了一种使用QT划分方法划分边界图像块的示例。本示例中,边界图像块40第一次被执行QT划分得到分块41、分块42、分块43和分块44。分块41和分块43中依然既包含像素区域又包含空白区域,可以被视为边界图像块并可以被继续执行QT划分。以分块41为例,分块41被划分得到分块411、分块412、分块413和分块414。本示例中,分块411中不包含空白区域,可以被作为CU411进而继续被执行编解码操作,同理,分块413中不包含空白区域,可以被作为CU413进而继续被执行编解码操作。而分块412和分块414中不包含像素区域,可以被丢弃。其他实施例中,若分块411和分块413中依然包含空白区域,或者,分块412和分块414中 依然包含像素区域,相应分块依然需要被继续执行QT划分。
图4B示意了一种使用BT划分方法划分边界图像块的示例。本示例中,边界图像块40被执行BT划分得到分块45和分块46。分块45中依然既包含像素区域又包含空白区域,所以,分块45继续被执行BT划分得到分块451和分块452。本示例中,分块451中不包含空白区域,可以被作为CU451进而继续被执行编解码操作。分块452中不包含像素区域,可以被丢弃。其他实施例中,若分块451中依然包含空白区域,或者,分块452中依然包含像素区域,相应分块依然需要被继续执行BT划分。
可见,QT划分方法和BT划分方法,划分方式单一。采用QT划分方法和/或BT划分方法划分边界图像块至得到CU,需要执行多次划分,导致划分的算法复杂度较高。
为解决QT划分方法和BT划分方法带来的问题,本申请提供了一种视频编码方法、视频解码方法及相关设备,在第一子边的边长与第一边的边长之比小于或者等于第一阈值时,以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块,第一分块包含所述像素区域。其中,第一边是当前边界图像块的边,第一子边是当前边界图像块内像素区域的边,第一边和第一子边均垂直于当前边界图像块所在的当前视频帧的边界。可见,本申请技术方案,根据边界图像块中像素区域的边长与该边界图像块的边长的关系,将边界图像块中的像素区域划分到一个分块中,从而能够减少划分边界图像块至得到CU的过程中的划分次数,进而,能够降低划分的算法复杂度。
以下对本申请的技术方案进行介绍。
参见图5A,图5A示例性地给出了本申请所应用的视频编码及解码系统10的示意性框图。如图5A所示,视频编码及解码系统10可包括源设备12和目的地设备14,源设备12产生经编码视频数据,因此,源设备12可被称为视频编码装置。目的地设备14可对由源设备12所产生的经编码的视频数据进行解码,因此,目的地设备14可被称为视频解码装置。源设备12、目的地设备14或两个的各种实施方案可包含一或多个处理器以及耦合到所述一或多个处理器的存储器。所述存储器可包含但不限于随机存储记忆体(random access memory,RAM)、只读存储记忆体(read-only memory,ROM)、带电可擦可编程只读存储器(electrically erasable programmable read only memory,EEPROM)、快闪存储器或可用于以可由计算机存取的指令或数据结构的形式存储所要的程序代码的任何其它媒体,如本文所描述。源设备12和目的地设备14可以包括各种装置,包含桌上型计算机、移动计算装置、笔记型(例如,膝上型)计算机、平板计算机、机顶盒、例如所谓的“智能”电话等电话手持机、电视机、相机、显示装置、数字媒体播放器、视频游戏控制台、车载计算机、无线通信设备或其类似者。
源设备12和目的地设备14之间可通过链路13进行通信连接,目的地设备14可经由链路13从源设备12接收经编码视频数据。链路13可包括能够将经编码视频数据从源设备12移动到目的地设备14的一或多个媒体或装置。在一个实例中,链路13可包括使得源设备12能够实时将经编码视频数据直接发射到目的地设备14的一或多个通信媒体。在此实例中,源设备12可根据通信标准(例如无线通信协议)来调制经编码视频数据,且可将经调制的视频数据发射到目的地设备14。所述一或多个通信媒体可包含无线和/或有线通信媒体,例如射频(RF)频谱或一或多个物理传输线。所述一 或多个通信媒体可形成基于分组的网络的一部分,基于分组的网络例如为局域网、广域网或全球网络(例如,因特网)。所述一或多个通信媒体可包含路由器、交换器、基站或促进从源设备12到目的地设备14的通信的其它设备。
源设备12包括编码器20,另外可选地,源设备12还可以包括图像源16、图像预处理器18、以及通信接口22。具体实现形态中,所述编码器20、图像源16、图像预处理器18、以及通信接口22可能是源设备12中的硬件部件,也可能是源设备12中的软件程序。分别描述如下:
图像源16,可以包括或可以为任何类别的图像捕获设备,用于例如捕获现实世界图像,和/或任何类别的图像或评论(对于屏幕内容编码,屏幕上的一些文字也认为是待编码的图像或图像的一部分)生成设备,例如,用于生成计算机动画图像的计算机图形处理器,或用于获取和/或提供现实世界图像、计算机动画图像(例如,屏幕内容、虚拟现实(virtual reality,VR)图像)的任何类别设备,和/或其任何组合(例如,实景(augmented reality,AR)图像)。图像源16可以为用于捕获图像的相机或者用于存储图像的存储器,图像源16还可以包括存储先前捕获或产生的图像和/或获取或接收图像的任何类别的(内部或外部)接口。当图像源16为相机时,图像源16可例如为本地的或集成在源设备中的集成相机;当图像源16为存储器时,图像源16可为本地的或例如集成在源设备中的集成存储器。当所述图像源16包括接口时,接口可例如为从外部视频源接收图像的外部接口,外部视频源例如为外部图像捕获设备,比如相机、外部存储器或外部图像生成设备,外部图像生成设备例如为外部计算机图形处理器、计算机或服务器。接口可以为根据任何专有或标准化接口协议的任何类别的接口,例如有线或无线接口、光接口。
其中,图像可以视为像素点(picture element)的二维阵列或矩阵。阵列中的像素点也可以称为采样点。阵列或图像在水平和垂直方向(或轴线)上的采样点数目定义图像的尺寸和/或分辨率。为了表示颜色,通常采用三个颜色分量,即图像可以表示为或包含三个采样阵列。例如在RBG格式或颜色空间中,图像包括对应的红色、绿色及蓝色采样阵列。但是,在视频编码中,每个像素通常以亮度/色度格式或颜色空间表示,例如对于YUV格式的图像,包括Y指示的亮度分量(有时也可以用L指示)以及U和V指示的两个色度分量。亮度(luma)分量Y表示亮度或灰度水平强度(例如,在灰度等级图像中两者相同),而两个色度(chroma)分量U和V表示色度或颜色信息分量。相应地,YUV格式的图像包括亮度采样值(Y)的亮度采样阵列,和色度值(U和V)的两个色度采样阵列。RGB格式的图像可以转换或变换为YUV格式,反之亦然,该过程也称为色彩变换或转换。如果图像是黑白的,该图像可以只包括亮度采样阵列。本申请中,由图像源16传输至图像处理器的图像也可称为原始图像数据17。
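As a concrete illustration of the RGB-to-YUV conversion mentioned above, the following sketch converts a single pixel using the BT.601 full-range coefficients. The text does not fix a particular conversion matrix, so these coefficients are an assumption chosen for illustration.

```python
# One-pixel RGB -> YUV sketch (BT.601 full-range coefficients, assumed).
def rgb_to_yuv(r: float, g: float, b: float) -> tuple[float, float, float]:
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma (grayscale intensity)
    u = 0.492 * (b - y)                     # blue-difference chroma
    v = 0.877 * (r - y)                     # red-difference chroma
    return y, u, v
```

White (1, 1, 1) maps to luma 1 with zero chroma, matching the description that a black-and-white image can carry only the luma sample array.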
图像预处理器18,用于接收原始图像数据17并对原始图像数据17执行预处理,以获取经预处理的图像19或经预处理的图像数据19。例如,图像预处理器18执行的预处理可以包括整修、色彩格式转换(例如,从RGB格式转换为YUV格式)、调色或去噪。
编码器20(或称视频编码器20),用于接收经预处理的图像数据19,采用预测模式对经预处理的图像数据19进行处理,从而提供经编码图像数据21(下文将进一 步基于图5B描述编码器20的结构细节)。在一些实施例中,编码器20可以用于执行后文所描述的各个视频编码方法的实施例,以实现本申请所描述的边界图像块划分和右下角图像块划分在编码侧的应用。
通信接口22,可用于接收经编码图像数据21,并可通过链路13将经编码图像数据21传输至目的地设备14或任何其它设备(如存储器),以用于存储或直接重构,所述其它设备可为任何用于解码或存储的设备。通信接口22可例如用于将经编码图像数据21封装成合适的格式,例如数据包,以在链路13上传输。
目的地设备14包括解码器30,另外可选地,目的地设备14还可以包括通信接口28、图像后处理器32和显示设备34。分别描述如下:
通信接口28,可用于从源设备12或任何其它源接收经编码图像数据21,所述任何其它源例如为存储设备,存储设备例如为经编码图像数据存储设备。通信接口28可以用于藉由源设备12和目的地设备14之间的链路13或藉由任何类别的网络传输或接收经编码图像数据21,链路13例如为直接有线或无线连接,任何类别的网络例如为有线或无线网络或其任何组合,或任何类别的私网和公网,或其任何组合。通信接口28可以例如用于解封装通信接口22所传输的数据包以获取经编码图像数据21。
通信接口28和通信接口22都可以配置为单向通信接口或者双向通信接口,以及可以用于例如发送和接收消息来建立连接、确认和交换任何其它与通信链路和/或例如经编码图像数据传输的数据传输有关的信息。
解码器30(或称为解码器30),用于接收经编码图像数据21并提供经解码图像数据31或经解码图像31(下文将进一步基于图5C描述解码器30的结构细节)。在一些实施例中,解码器30可以用于执行后文所描述的各个视频解码方法的实施例,以实现本申请所描述的边界图像块划分和右下角图像块划分在解码侧的应用。
图像后处理器32,用于对经解码图像数据31(也称为经重构图像数据)执行后处理,以获得经后处理图像数据33。图像后处理器32执行的后处理可以包括:色彩格式转换(例如,从YUV格式转换为RGB格式)、调色、整修或重采样,或任何其它处理,还可用于将将经后处理图像数据33传输至显示设备34。
显示设备34,用于接收经后处理图像数据33以向例如用户或观看者显示图像。显示设备34可以为或可以包括任何类别的用于呈现经重构图像的显示器,例如,集成的或外部的显示器或监视器。例如,显示器可以包括液晶显示器(liquid crystal display,LCD)、有机发光二极管(organic light emitting diode,OLED)显示器、等离子显示器、投影仪、微LED显示器、硅基液晶(liquid crystal on silicon,LCoS)、数字光处理器(digital light processor,DLP)或任何类别的其它显示器。
虽然图5A将源设备12和目的地设备14绘示为单独的设备,但设备实施例也可以同时包括源设备12和目的地设备14或同时包括两者的功能性,即源设备12或对应的功能性以及目的地设备14或对应的功能性。在此类实施例中,可以使用相同硬件和/或软件,或使用单独的硬件和/或软件,或其任何组合来实施源设备12或对应的功能性以及目的地设备14或对应的功能性。
本领域技术人员基于描述明显可知,不同单元的功能性或图5A所示的源设备12和/或目的地设备14的功能性的存在和(准确)划分可能根据实际设备和应用有所不 同。源设备12和目的地设备14可以包括各种设备中的任一个,包含任何类别的手持或静止设备,例如,笔记本或膝上型计算机、移动电话、智能手机、平板或平板计算机、摄像机、台式计算机、机顶盒、电视机、相机、车载设备、显示设备、数字媒体播放器、视频游戏控制台、视频流式传输设备(例如内容服务服务器或内容分发服务器)、广播接收器设备、广播发射器设备等,并可以不使用或使用任何类别的操作系统。
编码器20和解码器30都可以实施为各种合适电路中的任一个,例如,一个或多个微处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application-specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)、离散逻辑、硬件或其任何组合。如果部分地以软件实施所述技术,则设备可将软件的指令存储于合适的非暂时性计算机可读存储介质中,且可使用一或多个处理器以硬件执行指令从而执行本公开的技术。前述内容(包含硬件、软件、硬件与软件的组合等)中的任一者可视为一或多个处理器。
在一些情况下,图5A中所示视频编码及解码系统10仅为示例,本申请的技术可以适用于不必包含编码和解码设备之间的任何数据通信的视频编码设置(例如,视频编码或视频解码)。在其它实例中,数据可从本地存储器检索、在网络上流式传输等。视频编码设备可以对数据进行编码并且将数据存储到存储器,和/或视频解码设备可以从存储器检索数据并且对数据进行解码。在一些实例中,由并不彼此通信而是仅编码数据到存储器和/或从存储器检索数据且解码数据的设备执行编码和解码。
参见图5B,图5B是根据一示例性实施例的包含图5C的编码器20和/或图5D的解码器30的视频译码系统40的实例的说明图。视频译码系统40可以实现本申请的各种技术的组合。在所说明的实施方式中,视频译码系统40可以包含成像设备41、编码器20、解码器30(和/或藉由处理单元46实施的视频编/解码器)、天线42、一个或多个处理器43、一个或多个存储器44和/或显示设备45。
如图5B所示,成像设备41、天线42、处理单元46、编码器20、解码器30、处理器43、存储器44和/或显示设备45能够互相通信。如所论述,虽然用编码器20和解码器30绘示视频译码系统40,但在不同实例中,视频译码系统40可以只包含编码器20或只包含解码器30。
在一些实例中,天线42可以用于传输或接收视频数据的经编码比特流。另外,在一些实例中,显示设备45可以用于呈现视频数据。在一些实例中,处理单元46可以包含专用集成电路(application-specific integrated circuit,ASIC)逻辑、图形处理器、通用处理器等。视频译码系统40也可以包含可选的处理器43,该可选处理器43类似地可以包含专用集成电路(application-specific integrated circuit,ASIC)逻辑、图形处理器、通用处理器等。在一些实例中,处理单元46可以通过硬件实施,如视频编码专用硬件等,处理器43可以通过通用软件、操作系统等实施。另外,存储器44可以是任何类型的存储器,例如易失性存储器(例如,静态随机存取存储器(Static Random Access Memory,SRAM)、动态随机存储器(Dynamic Random Access Memory,DRAM)等)或非易失性存储器(例如,闪存等)等。在非限制性实例中,存 储器44可以由超速缓存内存实施。在一些实例中,处理单元46可以访问存储器44(例如用于实施图像缓冲器)。在其它实例中,处理单元46可以包含存储器(例如,缓存等)用于实施图像缓冲器等。
在一些实例中,通过逻辑电路实施的编码器20可以包含(例如,通过处理单元46或存储器44实施的)图像缓冲器和(例如,通过处理单元46实施的)图形处理单元。图形处理单元可以通信耦合至图像缓冲器。图形处理单元可以包含通过处理单元46实施的编码器20,以实施参照图5C和/或本文中所描述的任何其它编码器系统或子系统所论述的各种模块。逻辑电路可以用于执行本文所论述的各种操作。
在一些实例中,解码器30可以以类似方式通过处理单元46实施,以实施参照图5D的解码器30和/或本文中所描述的任何其它解码器系统或子系统所论述的各种模块。在一些实例中,逻辑电路实施的解码器30可以包含(通过处理单元2820或存储器44实施的)图像缓冲器和(例如,通过处理单元46实施的)图形处理单元。图形处理单元可以通信耦合至图像缓冲器。图形处理单元可以包含通过处理单元46实施的解码器30,以实施参照图5C和/或本文中所描述的任何其它解码器系统或子系统所论述的各种模块。
在一些实例中,天线42可以用于接收视频数据的经编码比特流。如所论述,经编码比特流可以包含本文所论述的与编码视频帧相关的数据、指示符、索引值、模式选择数据等,例如与编码分割相关的数据(例如,变换系数或经量化变换系数,(如所论述的)可选指示符,和/或定义编码分割的数据)。视频译码系统40还可包含耦合至天线42并用于解码经编码比特流的解码器30。显示设备45用于呈现视频帧。
应理解,本申请中对于参考编码器20所描述的实例,解码器30可以用于执行相反过程。关于信令语法元素,解码器30可以用于接收并解析这种语法元素,相应地解码相关视频数据。在一些例子中,编码器20可以将语法元素熵编码成经编码视频比特流。在此类实例中,解码器30可以解析这种语法元素,并相应地解码相关视频数据。
需要说明的是,本申请描述的解码方法主要用于解码过程,此过程在编码器20和解码器30均存在。
参见图5C,图5C示出用于实现本申请的编码器20的实例的示意性/概念性框图。在图5C的实例中,编码器20包括残差计算单元201、变换处理单元202、量化单元203、逆量化单元204、逆变换处理单元205、重构单元206、缓冲器207、环路滤波器单元208、经解码图像缓冲器(decoded picture buffer,DPB)209、预测处理单元210和熵编码单元211。预测处理单元210可以包含帧间预测单元2101、帧内预测单元2102和模式选择单元2103。帧间预测单元2101可以包含运动估计单元和运动补偿单元(未图示)。图5C所示的编码器20也可以称为混合型视频编码器或根据混合型视频编解码器的视频编码器。
例如,残差计算单元201、变换处理单元202、量化单元203、预测处理单元210和熵编码单元211形成编码器20的前向信号路径,而例如逆量化单元204、逆变换处理单元205、重构单元206、缓冲器207、环路滤波器、经解码图像缓冲器(decoded picture buffer,DPB)209、预测处理单元210形成编码器的后向信号路径,其中编 码器的后向信号路径对应于解码器的信号路径(参见图5D中的解码器30)。
编码器20通过例如输入,接收图像或图像的图像块,例如,形成视频或视频序列的图像序列中的图像。图像块也可以称为当前图像块或待编码图像块,图像可以称为当前图像或待编码图像(尤其是在视频编码中将当前图像与其它图像区分开时,其它图像例如同一视频序列亦即也包括当前图像的视频序列中的先前经编码和/或经解码图像)。
编码器20的实施例可以包括分割单元(图5C中未绘示),用于将图像分割成多个例如图像块的块,通常分割成多个不重叠的块。分割单元可以用于对视频序列中所有图像使用相同的块大小以及定义块大小的对应栅格,或用于在图像或子集或图像群组之间更改块大小,并将每个图像分割成对应的块。
在一个实例中,编码器20的预测处理单元210可以用于执行上述分割技术的任何组合。
如图像,图像块也是或可以视为具有采样值的采样点的二维阵列或矩阵,虽然其尺寸比图像小。换句话说,图像块可以包括,例如,一个采样阵列(例如黑白图像情况下的亮度阵列)或三个采样阵列(例如,彩色图像情况下的一个亮度阵列和两个色度阵列)或依据所应用的色彩格式的任何其它数目和/或类别的阵列。图像块的水平和垂直方向(或轴线)上采样点的数目定义图像块的尺寸。
如图5C所示的编码器20用于逐块编码图像,例如,对每个图像块执行编码和预测。
残差计算单元201用于基于图像块和预测块(下文提供预测块的其它细节)计算残差块,例如,通过逐样本(逐像素)将图像块的样本值减去预测块的样本值,以在样本域中获取残差块。
变换处理单元202用于在残差块的样本值上应用例如离散余弦变换(discrete cosine transform,DCT)或离散正弦变换(discrete sine transform,DST)的变换,以在变换域中获取变换系数207。变换系数207也可以称为变换残差系数,并在变换域中表示残差块。
变换处理单元202可以用于应用DCT/DST的整数近似值,例如为AVS,AVS2,AVS3指定的变换。与正交DCT变换相比,这种整数近似值通常由某一因子按比例缩放。为了维持经正变换和逆变换处理的残差块的范数,应用额外比例缩放因子作为变换过程的一部分。比例缩放因子通常是基于某些约束条件选择的,例如,比例缩放因子是用于移位运算的2的幂、变换系数的位深度、准确性和实施成本之间的权衡等。例如,在解码器30侧通过例如逆变换处理单元205为逆变换(以及在编码器20侧通过例如逆变换处理单元205为对应逆变换)指定具体比例缩放因子,以及相应地,可以在编码器20侧通过变换处理单元202为正变换指定对应比例缩放因子。
量化单元203用于例如通过应用标量量化或向量量化来量化变换系数207,以获取经量化变换系数209。经量化变换系数209也可以称为经量化残差系数209。量化过程可以减少与部分或全部变换系数207有关的位深度。例如,可在量化期间将n位变换系数向下舍入到m位变换系数,其中n大于m。可通过调整量化参数(quantization parameter,QP)修改量化程度。例如,对于标量量化,可以应用不同的标度来实现较 细或较粗的量化。较小量化步长对应较细量化,而较大量化步长对应较粗量化。可以通过量化参数(quantization parameter,QP)指示合适的量化步长。例如,量化参数可以为合适的量化步长的预定义集合的索引。例如,较小的量化参数可以对应精细量化(较小量化步长),较大量化参数可以对应粗糙量化(较大量化步长),反之亦然。量化可以包含除以量化步长以及例如通过逆量化210执行的对应的量化或逆量化,或者可以包含乘以量化步长。根据例如AVS,AVS2,AVS3的一些标准的实施例可以使用量化参数来确定量化步长。一般而言,可以基于量化参数使用包含除法的等式的定点近似来计算量化步长。可以引入额外比例缩放因子来进行量化和反量化,以恢复可能由于在用于量化步长和量化参数的等式的定点近似中使用的标度而修改的残差块的范数。在一个实例实施方式中,可以合并逆变换和反量化的标度。或者,可以使用自定义量化表并在例如比特流中将其从编码器通过信号发送到解码器。量化是有损操作,其中量化步长越大,损耗越大。
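The scalar quantization described above can be illustrated with a minimal sketch. The mapping from QP to quantization step below (a power-of-two rule in the spirit of "the QP indexes a predefined set of step sizes, smaller QP meaning finer quantization") is only an assumption; real codecs use standardized fixed-point approximations.

```python
# Hedged sketch of scalar quantization/dequantization of one transform
# coefficient. The step-size rule step = 2**(qp/6) is an assumed example.
def quantize(coeff: float, qp: int) -> int:
    step = 2 ** (qp / 6)                # assumed QP -> step-size mapping
    return round(coeff / step)          # larger step -> coarser quantization


def dequantize(level: int, qp: int) -> float:
    step = 2 ** (qp / 6)
    return level * step                 # lossy: rounding error is not recovered
```

With QP 12 the step is 4, so a coefficient of 16 quantizes to level 4 and dequantizes back to 16, while a coefficient of 10 does not round-trip exactly, illustrating why quantization is the lossy stage.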
逆量化单元204用于在经量化系数上应用量化单元203的逆量化,以获取经反量化系数211,例如,基于或使用与量化单元203相同的量化步长,应用量化单元203应用的量化方案的逆量化方案。经反量化系数211也可以称为经反量化残差系数,对应于变换系数207,虽然由于量化造成的损耗通常与变换系数不相同。
逆变换处理单元205用于应用变换处理单元202应用的变换的逆变换,例如,逆离散余弦变换(discrete cosine transform,DCT)或逆离散正弦变换(discrete sine transform,DST),以在样本域中获取逆变换块。逆变换块也可以称为逆变换经反量化块或逆变换残差块。
重构单元206(例如,求和器)用于将逆变换块(即经重构残差块)添加至预测块,以在样本域中获取经重构块,例如,将经重构残差块的样本值与预测块的样本值相加。
可选地,例如线缓冲器207的缓冲器单元(或简称“缓冲器”)用于缓冲或存储经重构块和对应的样本值,用于例如帧内预测。在其它的实施例中,编码器可以用于使用存储在缓冲器单元中的未经滤波的经重构块和/或对应的样本值来进行任何类别的估计和/或预测,例如帧内预测。
例如,编码器20的实施例可以经配置以使得缓冲器单元不只用于存储用于帧内预测的经重构块,也用于环路滤波器单元208(在图5C中未示出),和/或,例如使得缓冲器单元和经解码图像缓冲器单元形成一个缓冲器。其它实施例可以用于将经滤波块和/或来自经解码图像缓冲器209的块或样本(图5C中均未示出)用作帧内预测的输入或基础。
环路滤波器单元208(或简称“环路滤波器”)用于对经重构块进行滤波以获取经滤波块,从而顺利进行像素转变或提高视频质量。环路滤波器单元208旨在表示一个或多个环路滤波器,例如去块滤波器、样本自适应偏移(sample-adaptive offset,SAO)滤波器或其它滤波器,例如双边滤波器、自适应环路滤波器(adaptive loop filter,ALF),或锐化或平滑滤波器,或协同滤波器。尽管环路滤波器单元208在图5C中示出为环内滤波器,但在其它配置中,环路滤波器单元208可实施为环后滤波器。经滤波块也可以称为经滤波的经重构块。经解码图像缓冲器209可以在环路滤波器单元208 对经重构编码块执行滤波操作之后存储经重构编码块。
编码器20(对应地,环路滤波器单元208)的实施例可以用于输出环路滤波器参数(例如,样本自适应偏移信息),例如,直接输出或由熵编码单元211或任何其它熵编码单元熵编码后输出,例如使得解码器30可以接收并应用相同的环路滤波器参数用于解码。
经解码图像缓冲器(decoded picture buffer,DPB)209可以为存储参考图像数据供编码器20编码视频数据之用的参考图像存储器。DPB可由多种存储器设备中的任一个形成,例如动态随机存储器(dynamic random access memory,DRAM)(包含同步DRAM(synchronous DRAM,SDRAM)、磁阻式RAM(magnetoresistive RAM,MRAM)、电阻式RAM(resistive RAM,RRAM))或其它类型的存储器设备。可以由同一存储器设备或单独的存储器设备提供DPB和缓冲器207。在某一实例中,经解码图像缓冲器(decoded picture buffer,DPB)209用于存储经滤波块。经解码图像缓冲器209可以进一步用于存储同一当前图像或例如先前经重构图像的不同图像的其它先前的经滤波块,例如先前经重构和经滤波块,以及可以提供完整的先前经重构亦即经解码图像(和对应参考块和样本)和/或部分经重构当前图像(和对应参考块和样本),例如用于帧间预测。在某一实例中,如果经重构块无需环内滤波而得以重构,则经解码图像缓冲器(decoded picture buffer,DPB)209用于存储经重构块。
预测处理单元210,也称为块预测处理单元210,用于接收或获取图像块(当前图像的当前图像块)和经重构图像数据,例如来自缓冲器207的同一(当前)图像的参考样本和/或来自经解码图像缓冲器209的一个或多个先前经解码图像的参考图像数据,以及用于处理这类数据进行预测,即提供可以为经帧间预测块或经帧内预测块的预测块。
模式选择单元2103可以用于选择预测模式(例如帧内或帧间预测模式)和/或对应的用作预测块的预测块,以计算残差块和重构经重构块。
模式选择单元2103的实施例可以用于选择预测模式(例如,从预测处理单元210所支持的那些预测模式中选择),所述预测模式提供最佳匹配或者说最小残差(最小残差意味着传输或存储中更好的压缩),或提供最小信令开销(最小信令开销意味着传输或存储中更好的压缩),或同时考虑或平衡以上两者。模式选择单元2103可以用于基于码率失真优化(rate distortion optimization,RDO)确定预测模式,即选择提供最小码率失真优化的预测模式,或选择相关码率失真至少满足预测模式选择标准的预测模式。
下文将详细解释编码器20的实例(例如,通过预测处理单元210)执行的预测处理和(例如,通过模式选择单元2103)执行的模式选择。
如上文所述,编码器20用于从(预先确定的)预测模式集合中确定或选择最好或最优的预测模式。预测模式集合可以包括例如帧内预测模式和/或帧间预测模式。
帧内预测模式集合可以包括35种不同的帧内预测模式,例如,如DC(或均值)模式和平面模式的非方向性模式,或如H.265中定义的方向性模式,或者可以包括67种不同的帧内预测模式,例如,如DC(或均值)模式和平面模式的非方向性模式,或如正在发展中的H.266中定义的方向性模式。
在可能的实现中,帧间预测模式集合取决于可用参考图像(即,例如前述存储在DBP中的至少部分经解码图像)和其它帧间预测参数,例如取决于是否使用整个参考图像或只使用参考图像的一部分,例如围绕当前块的区域的搜索窗区域,来搜索最佳匹配参考块,和/或例如取决于是否应用如半像素和/或四分之一像素内插的像素内插,帧间预测模式集合例如可包括先进运动矢量(Advanced Motion Vector Prediction,AMVP)模式和融合(merge)模式。具体实施中,帧间预测模式集合可包括本申请改进的基于控制点的AMVP模式,以及,改进的基于控制点的merge模式。在一个实例中,帧内预测单元2102可以用于执行下文描述的帧间预测技术的任意组合。
除了以上预测模式,本申请也可以应用跳过模式和/或直接模式。
预测处理单元210可以进一步用于将图像块分割成较小的分块或子块,例如,通过迭代使用本申请所述的划分方法,以及用于例如为分块或子块中的每一个执行预测,其中模式选择包括选择分割的图像块的树结构和选择应用于分块或子块中的每一个的预测模式。
帧间预测单元2101可以包含运动估计(motion estimation,ME)单元(图5C中未示出)和运动补偿(motion compensation,MC)单元(图5C中未示出)。运动估计单元用于接收或获取图像块(当前图像的当前图像块)和经解码图像,或至少一个或多个先前经重构块,例如,一个或多个其它/不同先前经解码图像的经重构块,来进行运动估计。例如,视频序列可以包括当前图像和先前经解码图像31,或换句话说,当前图像和先前经解码图像31可以是形成视频序列的图像序列的一部分,或者形成该图像序列。
例如,编码器20可以用于从多个其它图像中的同一或不同图像的多个参考块中选择参考块,并向运动估计单元(图5C中未示出)提供参考图像和/或提供参考块的位置(X、Y坐标)与当前块的位置之间的偏移(空间偏移)作为帧间预测参数。该偏移也称为运动向量(motion vector,MV)。
运动补偿单元用于获取帧间预测参数,并基于或使用帧间预测参数执行帧间预测来获取帧间预测块。由运动补偿单元(图5C中未示出)执行的运动补偿可以包含基于通过运动估计(可能执行对子像素精确度的内插)确定的运动/块向量取出或生成预测块。内插滤波可从已知像素样本产生额外像素样本,从而潜在地增加可用于编码图像块的候选预测块的数目。一旦接收到用于当前图像块的PU的运动向量,运动补偿单元可以在一个参考图像列表中定位运动向量指向的预测块。运动补偿单元还可以生成与块和视频条带相关联的语法元素,以供解码器30在解码视频条带的图像块时使用。
具体的,上述帧间预测单元2101可向熵编码单元211传输语法元素,所述语法元素包括帧间预测参数(比如遍历多个帧间预测模式后选择用于当前块预测的帧间预测模式的指示信息)。可能应用场景中,如果帧间预测模式只有一种,那么也可以不在语法元素中携带帧间预测参数,此时解码端30可直接使用默认的预测模式进行解码。可以理解的,帧间预测单元2101可以用于执行帧间预测技术的任意组合。
帧内预测单元2102用于获取,例如接收同一图像的图像块(当前图像块)和一个或多个先前经重构块,例如经重构相邻块,以进行帧内估计。例如,编码器20可以用于从多个(预定)帧内预测模式中选择帧内预测模式。
编码器20的实施例可以用于基于优化标准选择帧内预测模式,例如基于最小残差(例如,提供最类似于当前图像块的预测块的帧内预测模式)或最小码率失真。
帧内预测单元2102进一步用于基于如所选择的帧内预测模式的帧内预测参数确定帧内预测块。在任何情况下,在选择用于块的帧内预测模式之后,帧内预测单元2102还用于向熵编码单元211提供帧内预测参数,即提供指示所选择的用于块的帧内预测模式的信息。在一个实例中,帧内预测单元2102可以用于执行帧内预测技术的任意组合。
具体的,上述帧内预测单元2102可向熵编码单元211传输语法元素,所述语法元素包括帧内预测参数(比如遍历多个帧内预测模式后选择用于当前块预测的帧内预测模式的指示信息)。可能应用场景中,如果帧内预测模式只有一种,那么也可以不在语法元素中携带帧内预测参数,此时解码端30可直接使用默认的预测模式进行解码。
熵编码单元211用于将熵编码算法或方案(例如,可变长度编码(variable length coding,VLC)方案、上下文自适应VLC(context adaptive VLC,CAVLC)方案、算术编码方案、上下文自适应二进制算术编码(context adaptive binary arithmetic coding,CABAC)、基于语法的上下文自适应二进制算术编码(syntax-based context-adaptive binary arithmetic coding,SBAC)、概率区间分割熵(probability interval partitioning entropy,PIPE)编码或其它熵编码方法或技术)应用于经量化残差系数、帧间预测参数、帧内预测参数和/或环路滤波器参数中的单个或全部(或都不应用),以获取可以通过输出以例如经编码比特流21的形式输出的经编码图像数据21。可以将经编码比特流传输到视频解码器30,或将其存档以供视频解码器30稍后传输或检索。熵编码单元211还可用于对正被编码的当前视频条带的其它语法元素进行熵编码。
视频编码器20的其它结构变型可用于编码视频流。例如,基于非变换的编码器20可以在没有针对某些块或帧的变换处理单元202的情况下直接量化残差信号。在另一实施方式中,编码器20可具有组合成单个单元的量化单元203和逆量化单元204。
具体的,在本申请中,编码器20可用于实现后文实施例中描述的编码方法。
应当理解的是,视频编码器20的其它的结构变化可用于编码视频流。例如,对于某些图像块或者图像帧,视频编码器20可以直接地量化残差信号而不需要经变换处理单元202处理,相应地也不需要经逆变换处理单元205处理;或者,对于某些图像块或者图像帧,视频编码器20没有产生残差数据,相应地不需要经变换处理单元202、量化单元203、逆量化单元204和逆变换处理单元205处理;或者,视频编码器20可以将经重构图像块作为参考块直接地进行存储而不需要经滤波器处理;或者,视频编码器20中量化单元203和逆量化单元204可以合并在一起。环路滤波器是可选的,以及针对无损压缩编码的情况下,变换处理单元202、量化单元203、逆量化单元204和逆变换处理单元205是可选的。应当理解的是,根据不同的应用场景,帧间预测单元2101和帧内预测单元2102可以是被选择性的启用。
参见图5D,图5D示出用于实现本申请的解码器30的实例的示意性/概念性框图。视频解码器30用于接收例如由编码器20编码的经编码图像数据(例如,经编码比特流)21,以获取经解码图像。在解码过程期间,视频解码器30从视频编码器20接收视频数据,例如表示经编码视频条带的图像块的经编码视频比特流及相关联的语法元素。
在图5D的实例中,解码器30包括熵解码单元304、逆量化单元310、逆变换处理单元312、重构单元314(例如求和器314)、缓冲器316、环路滤波器320、经解码图像缓冲器330以及预测处理单元360。预测处理单元360可以包含帧间预测单元344、帧内预测单元354和模式选择单元362。在一些实例中,视频解码器30可执行大体上与参照图5C的视频编码器20描述的编码遍次互逆的解码遍次。
熵解码单元304用于对经编码图像数据21执行熵解码,以获取例如经量化系数309和/或经解码的编码参数(图5D中未示出),例如,帧间预测、帧内预测参数、环路滤波器参数和/或其它语法元素中(经解码)的任意一个或全部。熵解码单元304进一步用于将帧间预测参数、帧内预测参数和/或其它语法元素转发至预测处理单元360。视频解码器30可接收视频条带层级和/或视频块层级的语法元素。
逆量化单元310功能上可与逆量化单元204相同,逆变换处理单元312功能上可与逆变换处理单元205相同,重构单元314功能上可与重构单元206相同,缓冲器316功能上可与缓冲器207相同,环路滤波器320功能上可与编码器20侧的环路滤波器相同,经解码图像缓冲器330功能上可与经解码图像缓冲器209相同。
预测处理单元360可以包括帧间预测单元344和帧内预测单元354,其中帧间预测单元344功能上可以类似于帧间预测单元2101,帧内预测单元354功能上可以类似于帧内预测单元2102。预测处理单元360通常用于执行块预测和/或从经编码数据21获取预测块365,以及从例如熵解码单元304(显式地或隐式地)接收或获取预测相关参数和/或关于所选择的预测模式的信息。
当视频条带经编码为经帧内编码(I)条带时,预测处理单元360的帧内预测单元354用于基于信号表示的帧内预测模式及来自当前帧或图像的先前经解码块的数据来产生用于当前视频条带的图像块的预测块365。当视频帧经编码为经帧间编码(即B或P)条带时,预测处理单元360的帧间预测单元344(例如,运动补偿单元)用于基于运动向量及从熵解码单元304接收的其它语法元素生成用于当前视频条带的视频块的预测块365。对于帧间预测,可从一个参考图像列表内的一个参考图像中产生预测块。视频解码器30可基于存储于DPB 330中的参考图像,使用默认建构技术来建构参考帧列表:列表0和列表1。
预测处理单元360用于通过解析运动向量和其它语法元素,确定用于当前视频条带的视频块的预测信息,并使用预测信息产生用于正经解码的当前视频块的预测块。在本申请的一实例中,预测处理单元360使用接收到的一些语法元素确定用于编码视频条带的视频块的预测模式(例如,帧内或帧间预测)、帧间预测条带类型(例如,B条带、P条带或GPB条带)、用于条带的参考图像列表中的一个或多个的建构信息、用于条带的每个经帧间编码视频块的运动向量、条带的每个经帧间编码视频块的帧间预测状态以及其它信息,以解码当前视频条带的视频块。在本公开的另一实例中,视频解码器30从比特流接收的语法元素包含接收自适应参数集(adaptive parameter set,APS)、序列参数集(sequence parameter set,SPS)、图像参数集(picture  parameter set,PPS)或条带标头中的一个或多个中的语法元素。
逆量化单元310可用于逆量化(即,反量化)在比特流中提供且由熵解码单元304解码的经量化变换系数。逆量化过程可包含使用由视频编码器20针对视频条带中的每一视频块所计算的量化参数来确定应该应用的量化程度并同样确定应该应用的逆量化程度。
逆变换处理单元312用于将逆变换(例如,逆DCT、逆整数变换或概念上类似的逆变换过程)应用于变换系数,以便在像素域中产生残差块。
重构单元314(例如,求和器314)用于将逆变换块313(即经重构残差块313)添加到预测块365,以在样本域中获取经重构块315,例如通过将经重构残差块313的样本值与预测块365的样本值相加。
环路滤波器单元320(在编码循环期间或在编码循环之后)用于对经重构块315进行滤波以获取经滤波块321,从而顺利进行像素转变或提高视频质量。在一个实例中,环路滤波器单元320可以用于执行下文描述的滤波技术的任意组合。环路滤波器单元320旨在表示一个或多个环路滤波器,例如去块滤波器、样本自适应偏移(sample-adaptive offset,SAO)滤波器或其它滤波器,例如双边滤波器、自适应环路滤波器(adaptive loop filter,ALF),或锐化或平滑滤波器,或协同滤波器。尽管环路滤波器单元320在图5D中示出为环内滤波器,但在其它配置中,环路滤波器单元320可实施为环后滤波器。
随后将给定帧或图像中的经解码视频块321存储在存储用于后续运动补偿的参考图像的经解码图像缓冲器330中。
解码器30用于例如,藉由输出332输出经解码图像31,以向用户呈现或供用户查看。
视频解码器30的其它变型可用于对压缩的比特流进行解码。例如,解码器30可以在没有环路滤波器单元320的情况下生成输出视频流。例如,基于非变换的解码器30可以在没有针对某些块或帧的逆变换处理单元312的情况下直接逆量化残差信号。在另一实施方式中,视频解码器30可以具有组合成单个单元的逆量化单元310和逆变换处理单元312。
具体的,在本申请中,解码器30用于实现后文实施例中描述的解码方法。
应当理解的是,视频解码器30的其它结构变化可用于解码经编码视频位流。例如,视频解码器30可以不经滤波器320处理而生成输出视频流;或者,对于某些图像块或者图像帧,视频解码器30的熵解码单元304没有解码出经量化的系数,相应地不需要经逆量化单元310和逆变换处理单元312处理。环路滤波器320是可选的;以及针对无损压缩的情况下,逆量化单元310和逆变换处理单元312是可选的。应当理解的是,根据不同的应用场景,帧间预测单元和帧内预测单元可以是被选择性的启用。
以下结合图1对本申请编码方法和解码方法中被划分的图像块进行描述。
根据图1示意的LCU的示意图可知,同时包含像素区域和空白区域的图像块包括:位于当前视频帧右边界的边界图像块,位于当前视频帧下边界的边界图像块,和位于当前视频帧右下角的图像块。示例性的,位于当前视频帧右边界的边界图像块如图6A所示。位于当前视频帧下边界的边界图像块如图6B所示。位于当前视频帧右下角的图像块如图6C所示。为便于描述,本申请将位于当前视频帧右边界的边界图像块,以及位于当前视频帧下边界的边界图像块,均称为“边界图像块”,将位于当前视频帧右下角的图像块称为“右下角图像块”。
可以理解的是,当当前视频帧右边界的图像块和下边界的图像块均是“边界图像块”时,当前视频帧右下角的图像块如图6C所示,也即,当前视频帧中存在“右下角图像块”。当当前视频帧右边界的图像块是“边界图像块”,而当前视频帧下边界的图像块是“非边界图像块”时,当前视频帧右下角的图像块则是如图6A所示的“边界图像块”。当当前视频帧下边界的图像块是“边界图像块”,而当前视频帧右边界的图像块是“非边界图像块”时,当前视频帧右下角的图像块则是如图6B所示的“边界图像块”。
可以理解的是,图6A示意的是右边界的边界图像块中像素区域和空白区域的分布关系,并不特指某一个右边界的边界图像块。本实施例中,图6A可以泛指右边界的边界图像块。同理,图6B示意的是下边界的边界图像块中像素区域和空白区域的分布关系,并不特指某一个下边界的边界图像块。本实施例中,图6B可以泛指下边界的边界图像块。此外,图6A至图6C示意的图像块仅仅是示意性呈现,本申请实施例中,边界图像块和右下角图像块中像素区域和空白区域的分布比例可以任意。
以下结合图6A至图6C示意的图像块,对本申请的视频编码方法和视频解码方法分别进行描述。
图7A示出了本申请提供的视频解码方法100的方法流程图。视频解码方法100描述了对边界图像块的划分方法。结合图5A至图5D的描述,视频解码方法100可以由解码器30执行。结合图5D所述的解码器30,本实施例所述的视频解码方法具体可以由图5D中预测处理单元360执行。基于此,视频解码方法100包括以下步骤:
步骤S101,检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否小于或者等于第一阈值。
其中,当前视频帧是指待解码视频的当前图像,当前视频帧例如是待解码视频的第一帧图像。当前边界图像块可以是图6A或者图6B示意的边界图像块。第一阈值是大于0且小于1的数值。第一阈值例如是0.75。
第一边是当前边界图像块的边。第一子边是当前边界图像块内像素区域的边。第一边和第一子边均垂直于当前边界图像块所在的当前视频帧的边界。当前边界图像块例如如图6A所示,第一边是图6A中示意的水平方向上的边,第一子边则是图6A中像素区域的水平方向上的边。当前边界图像块例如如图6B所示,第一边是图6B中示意的竖直方向上的边,第一子边则是图6B中像素区域的竖直方向上的边。
步骤S102,当第一子边的边长与第一边的边长之比小于或者等于第一阈值时,以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块。
其中,本实施例中,预测处理单元360以垂直于第一边的方向划分当前边界图像块,且第一分块包含当前边界图像块中的全部像素区域。
当前边界图像块例如是图6A示意的边界图像块,图6A示意的边界图像块中像素区域在竖直方向上的边长与该图像块在竖直方向上的边长相等,该图像块中像素区域在水平方向上的边长小于该图像块在水平方向上的边长,预测处理单元360从竖直方向划分图6A示意的边界图像块并将该图像块中的像素区域全部划分到一个分块中。当前边界图像块例如是图6B示意的边界图像块,图6B示意的边界图像块中像素区域在竖直方向上的边长小于该图像块在竖直方向上的边长,该图像块中像素区域在水平方向上的边长与该图像块在水平方向上的边长相等,预测处理单元360从水平方向划分图6B示意的边界图像块并将该图像块中的像素区域全部划分到一个分块中。
可选的,本步骤可以包括:当第一子边的边长与第一边的边长之比大于第二阈值并且第一子边的边长与第一边的边长之比小于或者等于第一阈值时,以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块。其中,第二阈值大于0小于第一阈值,第二阈值例如是0.25。
可以理解的是,本实施例虽然以“第一子边的边长与第一边的边长之比小于或者等于第一阈值”作为触发后续操作的条件,但是该描述方式仅仅是对本申请技术方案的一种表达方式,不构成对本申请的限制。在另一些实施例中,若将第一阈值的设定修改,例如,将第一阈值设定为0,本步骤可以描述为“第一子边的边长与第一边的边长之比大于或者等于第一阈值”作为触发后续操作的条件。该两种触发条件虽然不同,但是在本申请中起到的作用相同,因此,即使修改第一阈值的设定值,并修改本步骤的检测条件,但是相关限定依然属于本申请技术方案保护的范畴。
步骤S103,当第一分块的面积等于像素区域的面积时,将第一分块作为编码单元,并根据编码单元的编码信息得到编码单元的重建块,或者继续划分第一分块,以得到至少两个编码单元,并根据至少两个编码单元的编码信息得到至少两个编码单元的重建块。
其中,第一分块的面积由第一分块水平方向上的边长与竖直方向上的边长相乘得到,相应的,像素区域的面积由像素区域水平方向上的边长与竖直方向上的边长相乘得到。第一分块的面积等于像素区域的面积,说明第一分块水平方向上的边长与像素区域水平方向上的边长相等,且第一分块竖直方向上的边长与像素区域竖直方向上的边长相等。即,第一分块中不包含空白区域。
需要指出的是,本申请技术方案中,预测处理单元360可以按照衍生树(derived tree,DT)划分方法中的一种划分模式划分边界图像块。DT划分方法包括在水平方向和/或竖直方向分块的多种划分模式。以从水平方向上分块为例,DT划分方法中的第一种划分模式划分后,第一分块第二边的边长与第二分块第三边的边长之比例如可以满足1:3;DT划分方法中的第二种划分模式划分后,第一分块第二边的边长与第二分块第三边的边长之比例如可以满足3:1。本示例中,第二边和第三边是竖直方向上的边。DT划分方法所包含的从竖直方向上分块的划分模式与水平方向上分块的划分模式类似,此处不再详述。基于此,按照DT中的某种划分模式或者BT划分方法划分边界图像块,第一分块的面积可能等于像素区域的面积,第一分块的面积也可能大于像素区域的面积。
实际操作中,当第一分块的面积等于像素区域的面积时,一些实施例中,预测处理单元360可以将第一分块作为CU并根据该CU的编码信息得到该CU的重建块。另一些实施例中,预测处理单元360还可以根据第一分块中像素区域的像素信息,例如像素的纹理等,继续划分第一分块得到CU,进而,预测处理单元360根据相应CU的编码信息得到相应CU的重建块。本实施例中,预测处理单元360可以采用BT划分方法和/或QT划分方法继续划分第一分块。
其中,编码信息可以包含编码图像数据及相关联数据。相关联数据可包含序列参数集、图像参数集及其它语法结构。序列参数集可含有应用于零个或多个序列的参数。图像参数集可含有应用于零个或多个图像的参数。语法结构是指码流中以指定次序排列的零个或多个语法元素的集合。预测处理单元360根据CU的编码信息得到CU的重建块的过程此处不详述。
步骤S104,当第一分块的面积大于像素区域的面积时,继续划分第一分块,以得到编码单元,并根据编码单元的编码信息得到编码单元的重建块。
根据步骤S103的描述,当第一分块的面积大于像素区域的面积时,第一分块依然是边界图像块,预测处理单元360将第一分块作为当前边界图像块,继续划分该第一分块。
一些实施例中,当第一子边的边长与第一边的边长之比大于第二阈值时,预测处理单元360可以以垂直于第一边的方向划分第一分块得到第一子分块和第二子分块。其中,第一子分块是非边界图像块,第二子分块包括第二子像素区域,第二子像素区域是所述像素区域的部分区域。第二阈值大于0小于第一阈值,第二阈值例如是0.5。
另一些实施例中,预测处理单元360继续划分第一分块的方式可以是:当第一子边的边长与第一边的边长之比大于第二阈值时,在垂直于所述第一边的方向上对第一分块执行BT划分。
再一些实施例中,当第一子边的边长与第一边的边长之比大于第二阈值时,对第一分块执行QT划分。
可见,采用本实现方式,当第一子边的边长与第一边的边长之比小于或者等于第一阈值时,解码器将边界图像块中的像素区域划分到第一分块中。其中,第一边是当前边界图像块的边,第一子边是当前边界图像块内像素区域的边,第一边和第一子边均垂直于当前边界图像块所在的当前视频帧的边界。这样解码器执行分块时,不受限于现有的BT和/或QT的划分方法,从而在划分边界图像块至得到CU的过程中,能够减少划分次数,进而,能够降低划分的算法复杂度。
在视频解码方法100的另一种实施场景中,当第一子边的边长与第一边的边长之比大于第一阈值时,预测处理单元360可以以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块。本实施例中,第一分块是非边界图像块,第二分块为边界图像块并包括第一子像素区域,第一子像素区域是像素区域的部分区域。进而,预测处理单元360可以将第二分块作为当前边界图像块,继续划分所述第二分块,以得到编码单元,并根据编码单元的编码信息得到编码单元的重建块。本实施例中,预测处理单元360继续划分所述第二分块的方法,与步骤S104中预测处理单元360继续划分所述第一分块的方法类似,此处不再详述。
可选的,当第一子边的边长与第一边的边长之比大于第一阈值时,以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块可以包括:当第一子边的边长与第一边的边长之比大于第一阈值并且第一子边的边长与第一边的边长之比小于或者等于第三阈值时,以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块。其中,第三阈值大于第一阈值,第三阈值例如可以是1。
综上,本申请技术方案在划分边界图像块时,解码器30能够根据边界像素区域的边长与该像素区域所在边界图像块的边长的关系划分边界图像块,从而使得在划分LCU至得到CU的过程中,划分次数相对较少,进而,能够降低划分算法的复杂度。本实施例中所述的边长是像素区域和边界图像块的边中垂直于当前边界图像块所在的当前视频帧的边界的边的长度。
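为便于理解,上述根据第一子边与第一边的边长之比选择划分方式的判决逻辑,可以用如下Python示意代码概括。该代码仅是依据本文示例阈值0.25、0.5、0.75给出的示意性草图,函数名与返回形式均为说明而假设,并非编解码器的实际实现:

```python
def choose_dt_mode(first_edge, first_sub_edge):
    """根据第一子边与第一边的边长之比选择划分模式的示意。

    阈值0.25/0.5/0.75取自正文中的示例取值;模式名取自图10。
    返回 (划分模式名, 第一分块在第一边方向上的边长)。
    """
    ratio = first_sub_edge / first_edge
    if ratio <= 0.25:
        return "VER_LEFT", first_edge // 4        # 1:3,第一分块包含全部像素区域
    if ratio <= 0.5:
        return "BT-2", first_edge // 2            # 1:1,第一分块包含全部像素区域
    if ratio <= 0.75:
        return "VER_RIGHT", first_edge * 3 // 4   # 3:1,第一分块包含全部像素区域
    return "VER_RIGHT", first_edge * 3 // 4       # 3:1,此时第一分块为非边界块
```

例如,对于第一边边长为128、第一子边边长为32的右边界图像块,该示意函数返回("VER_LEFT", 32),即一次划分后第一分块恰好覆盖全部像素区域。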
图7B示出了本申请提供的视频解码方法200的方法流程图。视频解码方法200描述了对边界图像块的划分方法。结合图5A至图5D的描述,视频解码方法200可以由解码器30执行。结合图5D所述的解码器30,本实施例所述的视频解码方法具体可以由图5D中预测处理单元360执行。基于此,视频解码方法200包括以下步骤:
步骤S201,检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否位于预设区间。
其中,本实施例所述的预设区间的数值范围为大于第二阈值并且小于第一阈值。本实施例所述第一阈值和第二阈值与视频解码方法100所述的相同,第一阈值例如是0.5,第二阈值例如是0。或者,第一阈值例如是0.25,第二阈值例如是0。此处不再详述。
步骤S202,当第一子边的边长与第一边的边长之比位于预设区间时,以垂直于第一边的方向划分当前边界图像块得到第一分块和第二分块。
与视频解码方法100的步骤S102相同的,本实施例中,预测处理单元360同样以垂直于第一边的方向划分当前边界图像块。对于划分方向,此处不再赘述。
本实施方式的一些实施例中,第一分块中可以包含当前边界图像块中的全部像素区域,而第二分块中不包含任何像素区域。在另一些实施例中,第一分块可以是非边界图像块,而第二分块是边界图像块。第二分块中包含的像素区域是当前边界图像块像素区域的一部分。该两种实施例的具体场景,详见本说明书下文的示例性描述,此处不再详述。
步骤S203,将第一分块和第二分块中为非边界块的分块作为编码单元,并根据编码单元的编码信息得到编码单元的重建块,或者继续划分第一分块或第二分块,以得到编码单元,并根据编码单元的编码信息得到编码单元的重建块。
结合步骤S202的描述,一些实施例中,第一分块中包含当前边界图像块中的全部像素区域,而第二分块中不包含任何像素区域的场景,预测处理单元360对第一分块的后续操作过程与视频解码方法100相同,此处不再赘述。另一些实施例中,第一分块是非边界图像块,而第二分块是边界图像块的场景,预测处理单元360可以将第一分块作为编码单元,并根据CU的编码信息得到CU的重建块,或者继续划分第一分块,得到至少两个CU,并根据该至少两个CU的编码信息得到该至少两个CU的重建块。对于第二分块,预测处理单元360可以继续划分第二分块,以得到CU。预测处理单元360继续划分第二分块的划分方式,如视频解码方法100所述。
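方法200中步骤S201的区间判断可以用如下示意函数表示。按正文描述,预设区间的数值范围为大于第二阈值并且小于第一阈值,此处据此取开区间;函数名与参数名均为说明而假设:

```python
def in_preset_interval(first_edge_len, first_sub_edge_len, t1, t2):
    """判断第一子边与第一边的边长之比是否位于预设区间 (t2, t1)。

    t1为第一阈值,t2为第二阈值;按正文,预设区间为大于t2且小于t1。
    """
    ratio = first_sub_edge_len / first_edge_len
    return t2 < ratio < t1
```

例如,取第一阈值0.5、第二阈值0时,边长之比0.375位于预设区间,而0.5不位于该区间。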
图7C示出了本申请提供的视频解码方法300的方法流程图,视频解码方法300描述了对右下角图像块的划分方法。结合图5A至图5D的描述,视频解码方法300可以由解码器30执行。结合图5D所述的解码器30,本实施例所述的视频解码方法具体可以由图5D中预测处理单元360执行。基于此,视频解码方法300包括以下步骤:
步骤S301,确定当前视频帧的右下角图像块的第一子边的边长与第一边的边长之比小于或者等于预设阈值,且右下角图像块的第二子边的边长与第二边的边长之比大于预设阈值。
其中,预设阈值例如是0.5。右下角图像块如图6C所示。第一边和第二边均是该右下角图像块的边,第一子边和第二子边是该右下角图像块中像素区域的边。第一边包含第一子边,第二边包含第二子边,第一边与第二边相互垂直。
步骤S302,采用QT衍生的划分模式划分右下角图像块得到第一分块、第二分块和第三分块。
其中,本申请技术方案所述的DT还包括基于QT衍生的划分模式。基于QT衍生的划分模式具体可以包括:Q_A划分模式和Q_B划分模式。Q_A划分模式是在水平方向BT之后,对上半部分分块在竖直方向BT,得到三个分块,如下文中图9和图10所示。Q_B划分模式是在竖直方向BT之后,对左半部分分块在水平方向BT,得到三个分块,如下文中图9和图10所示。
本实施例中,第一分块包含像素区域的第一子像素区域,第二分块包含像素区域的第二子像素区域,第一子像素区域和第二子像素区域构成右下角图像块中的像素区域。
基于此,本实施例中,第一分块的面积和第二分块的面积均是右下角图像块面积的四分之一,第三分块的面积是右下角图像块面积的二分之一。
步骤S303,继续划分第二分块,以得到第二分块对应的编码单元,并根据第二分块对应的编码单元的编码信息得到第二分块对应的编码单元的重建块。
第二分块是边界图像块或者右下角图像块,预测处理单元360需要继续划分第二分块。当第二分块是边界图像块时,预测处理单元360划分第二分块的方式可以如视频解码方法100中的实施例所述。当第二分块是右下角图像块时,预测处理单元360划分第二分块的方式可以如视频解码方法300中的实施例所述,此处不再赘述。
步骤S304,当第一分块的面积等于第一子像素区域的面积时,将第一分块作为编码单元,并根据第一分块编码单元的编码信息得到第一分块编码单元的重建块,或者继续划分第一分块,以得到第一分块编码单元,并根据第一分块编码单元的编码信息得到第一分块编码单元的重建块。
步骤S305,当第一分块的面积大于第一子像素区域的面积时,继续划分第一分块,以得到第一分块编码单元,并根据第一分块编码单元的编码信息得到第一分块编码单元的重建块。
可选的,当第一分块的面积大于第一子像素区域的面积时,继续划分第一分块包括:检测第一分块第三子边的边长与第三边的边长之比是否小于或者等于第一阈值,第三子边是第一子像素区域的边,第三边和第三子边平行于第一边。当第三子边的边长与第三边的边长之比小于或者等于第一阈值时,以垂直于第一边的方向上划分所述第一分块得到第一子分块和第二子分块,第一子分块包含第一子像素区域。当第一子分块的面积等于第一子像素区域的面积时,将第一子分块作为编码单元,并根据编码单元的编码信息得到编码单元的重建块,或者继续划分第一子分块,以得到编码单元,并根据编码单元的编码信息得到编码单元的重建块。当第一子分块的面积大于第一子像素区域的面积时,继续划分第一子分块,以得到编码单元,并根据编码单元的编码信息得到编码单元的重建块。本实施例中,第一阈值如视频解码方法100所述。
其中,本实施例中,预测处理单元360对第一分块的操作与视频解码方法100中步骤S103和步骤S104的描述相同,此处不再详述。
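方法300中步骤S301、S302对右下角图像块的判决,可以用如下Python片段示意。该片段仅是依据正文中预设阈值0.5给出的示意性草图,函数名与参数名均为说明而假设:

```python
def corner_split(wa_h, wb_h, wa_v, wb_v, t=0.5):
    """根据水平、竖直两个方向的边长之比为右下角图像块选择QT衍生的划分模式。

    wa_h/wb_h:像素区域与图像块在水平方向上的边长;
    wa_v/wb_v:像素区域与图像块在竖直方向上的边长;
    t:正文示例中的预设阈值0.5。不满足任一条件时返回None。
    """
    if wa_h / wb_h <= t and wa_v / wb_v > t:
        return "Q_A"   # 水平方向BT后,再对上半部分在竖直方向BT
    if wa_v / wb_v <= t and wa_h / wb_h > t:
        return "Q_B"   # 竖直方向BT后,再对左半部分在水平方向BT
    return None
```

例如,像素区域水平方向占比0.25、竖直方向占比0.75的右下角图像块被判为Q_A划分;两个占比互换时被判为Q_B划分。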
图8A示出了本申请提供的视频编码方法400的方法流程图。视频编码方法400描述了对边界图像块的划分方法。结合图5A至图5D的描述,视频编码方法400可以由编码器20执行。结合图5C所述的编码器20,本实施例所述的视频编码方法具体可以由图5C中预测处理单元210执行。基于此,视频编码方法400包括以下步骤:
步骤S401,检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否小于或者等于第一阈值。
步骤S402,当第一子边的边长与第一边的边长之比小于或者等于第一阈值时,以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块。
此外,当第一子边的边长与第一边的边长之比大于第一阈值时,以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块。其中,第一分块是非边界图像块,第二分块为边界图像块并包括第一子像素区域,第一子像素区域是像素区域的部分区域。然后,预测处理单元210继续划分第二分块,以得到编码单元,并根据编码单元的图像信息得到编码单元的编码信息。
步骤S403,当第一分块的面积等于像素区域的面积时,将第一分块作为编码单元,并根据编码单元的图像信息得到编码单元的编码信息,或者继续划分第一分块,以得到编码单元,并根据编码单元的图像信息得到编码单元的编码信息。
步骤S404,当第一分块的面积大于像素区域的面积时,继续划分第一分块,以得到编码单元,并根据编码单元的图像信息得到编码单元的编码信息。
其中,本实施例中,预测处理单元210执行分块的操作与视频解码方法100中预测处理单元360执行分块的操作过程相似,此处不再详述。此外,预测处理单元210根据CU的图像信息得到相应CU的编码信息的过程此处不详述。
图8B示出了本申请提供的视频编码方法500的方法流程图。视频编码方法500描述了对边界图像块的划分方法。结合图5A至图5D的描述,视频编码方法500可以由编码器20执行。结合图5C所述的编码器20,本实施例所述的视频编码方法具体可以由图5C中预测处理单元210执行。基于此,视频编码方法500包括以下步骤:
步骤S501,检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否位于预设区间。
步骤S502,当第一子边的边长与第一边的边长之比位于预设区间时,以垂直于第一边的方向划分当前边界图像块得到第一分块和第二分块。
步骤S503,将第一分块和第二分块中为非边界块的分块作为编码单元,并根据编码单元的图像信息得到编码单元的编码信息,或者继续划分第一分块或第二分块,以得到编码单元,并根据编码单元的图像信息得到编码单元的编码信息。
其中,本实施例中,预测处理单元210执行分块的操作与视频解码方法200中预测处理单元360执行分块的操作过程相似,此处不再详述。此外,预测处理单元210根据CU的图像信息得到相应CU的编码信息的过程此处不详述。
图8C示出了本申请提供的视频编码方法600的方法流程图。视频编码方法600描述了对右下角图像块的划分方法。结合图5A至图5D的描述,视频编码方法600可以由编码器20执行。结合图5C所述的编码器20,本实施例所述的视频编码方法具体可以由图5C中预测处理单元210执行。基于此,视频编码方法600包括以下步骤:
步骤S601,确定当前视频帧的右下角图像块的第一子边的边长与第一边的边长之比小于或者等于预设阈值,且右下角图像块的第二子边的边长与第二边的边长之比大于预设阈值。
其中,预设阈值例如是0.5。
步骤S602,采用QT衍生的划分模式划分右下角图像块得到第一分块、第二分块和第三分块。
步骤S603,继续划分第二分块,以得到第二分块对应的编码单元,并根据第二分块对应的编码单元的图像信息得到第二分块对应的编码单元的编码信息。
步骤S604,当第一分块的面积等于第一子像素区域的面积时,将第一分块作为编码单元,并根据第一分块编码单元的图像信息得到第一分块编码单元的编码信息,或者继续划分第一分块,以得到第一分块编码单元,并根据第一分块编码单元的图像信息得到第一分块编码单元的编码信息。
步骤S605,当第一分块的面积大于第一子像素区域的面积时,继续划分第一分块,以得到第一分块编码单元,并根据第一分块编码单元的图像信息得到第一分块编码单元的编码信息。
可选的,当第一分块的面积大于第一子像素区域的面积时,继续划分第一分块包括:检测第一分块第三子边的边长与第三边的边长之比是否小于或者等于第一阈值,第三子边是第一子像素区域的边,第三边和第三子边平行于第一边。当第三子边的边长与第三边的边长之比小于或者等于第一阈值时,以垂直于第一边的方向上划分所述第一分块得到第一子分块和第二子分块,第一子分块包含第一子像素区域。当第一子分块的面积等于第一子像素区域的面积时,将第一子分块作为编码单元,并根据编码单元的图像信息得到编码单元的编码信息,或者继续划分第一子分块,以得到编码单元,并根据编码单元的图像信息得到编码单元的编码信息。当第一子分块的面积大于第一子像素区域的面积时,继续划分第一子分块,以得到编码单元,并根据编码单元的图像信息得到编码单元的编码信息。本实施例中,第一阈值如视频编码方法400所述。
其中,本实施例中,预测处理单元210执行分块的操作与视频解码方法300中预测处理单元360执行分块的操作过程相似,此处不再详述。此外,预测处理单元210根据CU的图像信息得到相应CU的编码信息的过程此处不详述。
综合上述可知,本申请所述视频编码方法和视频解码方法,相关设备能够根据边界图像块和/或右下角图像块中像素区域边长与图像块边长的关系,根据DT划分方法、BT划分方法或者QT划分方法划分相应图像块,从而能够减少划分边界图像块至得到CU的过程中的划分次数,进而,能够降低划分的算法复杂度。
以下结合本申请划分模式的示例性附图,对本申请的图像块划分方法进行描述。
如图9所示,本申请所述的DT划分方法可以包括以下三种场景下衍生的划分模式组:划分模式组91、划分模式组92和划分模式组93。其中,划分模式组91包括从竖直方向上划分图像块的各种划分模式,划分模式组92包括从水平方向上划分图像块的各种划分模式,划分模式组93包括QT划分模式以及基于QT划分模式衍生的其他划分模式。
可以理解的是,图9仅是本申请中划分模式的示意性的描述,本申请技术方案不仅限于图9中示意的划分模式。例如,采用上述划分模式组91所包括的划分模式分块后,第一分块水平方向的边长与第二分块水平方向的边长之比可以满足:j : (2^(N-2-i) - j),其中,N是大于2的正整数,i=1,2,3……N-3,j是小于2^(N-2-i)的正整数,N例如是7。例如,采用划分模式组91中的VER_RIGHT划分模式分块之后,第一分块水平方向的边长与第二分块水平方向的边长之比是3:1;再如,采用划分模式组91中的VER_LEFT划分模式分块之后,第一分块水平方向的边长与第二分块水平方向的边长之比是1:3;再如,采用划分模式组91中的第一划分模式分块之后,第一分块水平方向的边长与第二分块水平方向的边长之比是1:7(图9中未示出)。
同理,采用上述划分模式组92所包括的划分模式分块后,第一分块竖直方向的边长与第二分块竖直方向的边长之比可以满足:j : (2^(N-2-i) - j),其中,N是大于2的正整数,i=1,2,3……N-3,j是小于2^(N-2-i)的正整数,N例如是7。例如,采用划分模式组92中的HOR_DOWN划分模式分块之后,第一分块竖直方向的边长与第二分块竖直方向的边长之比是3:1;再如,采用划分模式组92中的HOR_TOP划分模式分块之后,第一分块竖直方向的边长与第二分块竖直方向的边长之比是1:3;再如,采用划分模式组92中的第一划分模式分块之后,第一分块竖直方向的边长与第二分块竖直方向的边长之比是7:1(图9中未示出)。
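上述比例公式可以用如下Python片段枚举验证。该片段仅为示意,N取正文中的示例值7,函数名为说明而设:

```python
def dt_ratios(N=7):
    """枚举公式 j : (2**(N-2-i) - j) 给出的全部划分比例(示意)。

    N为大于2的正整数,i = 1,2,……,N-3,j为小于 2**(N-2-i) 的正整数。
    返回形如 (j, 2**(N-2-i) - j) 的比例二元组集合。
    """
    ratios = set()
    for i in range(1, N - 2):          # i = 1 .. N-3
        total = 2 ** (N - 2 - i)
        for j in range(1, total):      # j = 1 .. total-1
            ratios.add((j, total - j))
    return ratios
```

例如,N=7时枚举结果同时包含正文提到的1:3、3:1、1:1、1:7与7:1等比例。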
划分模式组93包括Q_A划分模式,采用Q_A划分模式划分之后得到第一分块、第二分块和第三分块,第一分块的面积和第二分块的面积均是原图像块面积的四分之一,且第一分块和第二分块在水平方向上并列排布,第三分块的面积是原图像块面积的二分之一;再如,划分模式组93包括Q_B划分模式,采用Q_B划分模式划分之后得到第一分块、第二分块和第三分块,第一分块的面积和第二分块的面积均是原图像块面积的四分之一,且第一分块和第二分块在竖直方向上并列排布,第三分块的面积是原图像块面积的二分之一。
此外,本申请DT划分方法的其他划分模式,划分得到的分块的关系与相应划分模式对应,且与图9各划分模式组示意的类似,本申请此处不再一一赘述。
可见,本申请技术方案中,相关设备可以维护多种DT划分模式,从而能够在划分边界图像块和右下角图像块时,从多种DT划分模式中选择划分模式,进而,使得在划分边界图像块和/或右下角图像块直到得到CU的过程中,划分次数相对较少。
示例性的,以下结合实例对本申请示意的图像块划分方法进行描述。
图10示意了一种示例性DT划分方法包含的划分模式。本实施例中,从水平方向划分的划分模式包括:HOR_TOP划分模式、BT-1划分模式和HOR_DOWN划分模式;从竖直方向划分的划分模式包括:VER_LEFT划分模式、BT-2划分模式和VER_RIGHT划分模式。基于QT衍生的划分模式包括:Q_A划分模式、Q_B划分模式和QT划分模式。
其中,HOR_TOP划分模式划分得到的第一分块竖直方向的边长与第二分块竖直方向的边长之比是1:3。BT-1划分模式划分得到的第一分块竖直方向的边长与第二分块竖直方向的边长之比是1:1。HOR_DOWN划分模式划分得到的第一分块竖直方向的边长与第二分块竖直方向的边长之比是3:1。VER_LEFT划分模式划分得到的第一分块水平方向的边长与第二分块水平方向的边长之比是1:3。BT-2划分模式划分得到的第一分块水平方向的边长与第二分块水平方向的边长之比是1:1。VER_RIGHT划分模式划分得到的第一分块水平方向的边长与第二分块水平方向的边长之比是3:1。Q_A划分模式划分得到的第一分块的面积和第二分块的面积均是原图像块面积的四分之一,且第一分块和第二分块在水平方向上并列排布,第三分块的面积是原图像块面积的二分之一。Q_B划分模式划分得到的第一分块的面积和第二分块的面积均是原图像块面积的四分之一,且第一分块和第二分块在竖直方向上并列排布,第三分块的面积是原图像块面积的二分之一。
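图10所列各二分划分模式与分块边长之比的对应关系,可以用如下Python片段示意。比例数值取自正文,字典与函数名均为说明而设:

```python
# 图10各划分模式下,第一分块与第二分块在划分方向上的边长之比(示意)
MODE_RATIO = {
    "HOR_TOP":  (1, 3), "BT-1": (1, 1), "HOR_DOWN":  (3, 1),   # 水平方向划分
    "VER_LEFT": (1, 3), "BT-2": (1, 1), "VER_RIGHT": (3, 1),   # 竖直方向划分
}

def sub_lengths(mode, edge_len):
    """按给定模式把边长为edge_len的边划分为两段,返回(第一分块, 第二分块)的边长。"""
    a, b = MODE_RATIO[mode]
    part1 = edge_len * a // (a + b)
    return part1, edge_len - part1
```

例如,对边长为128的LCU执行VER_LEFT划分,第一分块与第二分块在水平方向上的边长分别为32与96。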
可以理解的是,图10仅是对本申请划分模式的示意性描述,对本申请的技术方案不构成任何限制。在本申请其他的实施方式中,DT划分方法还可以包括其他的划分模式,此处不再详述。
图11A-1示意了一个示例性的边界图像块111,边界图像块111是该边界图像块所属视频帧的右边界的图像块。
以解码为例,预测处理单元360检测到边界图像块111的w_a/w_b小于0.25,其中,w_a是边界图像块111中像素区域11A水平方向上边的边长,w_b是边界图像块111水平方向上边的边长。预测处理单元360采用图10示意的VER_LEFT划分模式划分边界图像块111,得到第一分块1111和第二分块1112。其中,第一分块1111包含边界图像块111中的像素区域11A,第二分块1112不包含像素区域。
进一步的,虽然第一分块1111包含像素区域,但是第一分块1111还包含空白区域,也即第一分块1111无法被作为CU,预测处理单元360需要继续划分第一分块1111。
一些实施例中,当w_a/w_b大于第二阈值时,预测处理单元360可以从图10的竖直方向划分的划分模式中选择划分模式,例如VER_RIGHT划分模式,划分第一分块1111得到第一子块和第二子块。本实施例中,如图11A-2所示,第一子块包含像素区域11A的部分区域,是非边界块。第二子块包含像素区域11A剩余的部分区域和空白区域,是边界块。第二阈值小于第一阈值。另一些实施例中,当w_a/w_b大于第二阈值时,预测处理单元360可以采用BT-2划分模式划分第一分块1111,其中,第二阈值小于第一阈值。再一些实施例中,当w_a/w_b大于第二阈值时,预测处理单元360可以采用QT划分模式划分第一分块1111。
根据视频解码方法100和视频编码方法400的描述,当第一子边的边长与第一边的边长之比小于或者等于第一阈值时,划分当前边界图像块得到第一分块和第二分块包括:当第一子边的边长与第一边的边长之比大于第二阈值并且第一子边的边长与第一边的边长之比小于或者等于第一阈值时,以垂直于第一边的方向上划分当前边界图像块得到第一分块和第二分块。相应的,本实施例中,“预测处理单元360检测边界图像块111的w_a/w_b小于0.25”可以等同于“预测处理单元360检测边界图像块111的w_a/w_b大于0且w_a/w_b小于0.25”的场景。
在另一个实施例中,如图11A-2示意的示例性边界图像块111,预测处理单元360检测边界图像块111的w_a/w_b等于0.25,预测处理单元360依然采用VER_LEFT划分模式划分边界图像块111,得到第一分块1111和第二分块1112。其中,第一分块1111的面积等于像素区域11A的面积,预测处理单元360可以将第一分块1111作为CU,进而,预测处理单元360可以根据该CU的编码信息得到该CU的重建块。或者,预测处理单元360继续划分第一分块1111,以得到CU,并根据所得到的CU的编码信息得到相应CU的重建块。此处不再详述。
可以看出,本实施例对应图7A、图7B、图8A和图8B示意的实施例,“第一阈值”等于0.25,“第二阈值”等于0的实施场景。
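作为一个数值示例,下面的示意代码比较了按图11A-1至图11D示例的阈值区间选择DT模式与仅用BT划分时,覆盖边界图像块内像素区域所需的划分次数。其中阈值与模式的对应关系取自正文示例,函数名与递归终止条件均为为说明而做的假设:

```python
def count_splits_dt(w, p):
    """示意:按比值区间选择DT划分模式,统计直到像素区域恰被分块覆盖的划分次数。
    w: 当前(子)图像块垂直于帧边界方向的边长;p: 其中像素区域的边长。"""
    if p == 0 or p == w:
        return 0                      # 空白块无需划分;p == w 时分块恰为像素区域
    r = p / w
    if r <= 0.25:
        part1 = w // 4                # VER_LEFT / HOR_TOP
    elif r <= 0.5:
        part1 = w // 2                # BT-2 / BT-1
    else:
        part1 = w * 3 // 4            # VER_RIGHT / HOR_DOWN
    if part1 <= p:                    # 第一分块为非边界块,剩余像素落在第二分块
        return 1 + count_splits_dt(w - part1, p - part1)
    return 1 + count_splits_dt(part1, p)

def count_splits_bt(w, p):
    """对照:仅用二叉树BT(每次对半)划分时的划分次数(示意)。"""
    if p == 0 or p == w:
        return 0
    half = w // 2
    if half <= p:
        return 1 + count_splits_bt(w - half, p - half)
    return 1 + count_splits_bt(half, p)
```

例如,边长128、像素区域边长96的右边界图像块,按示意规则一次VER_RIGHT划分即可,而仅用BT需要两次划分,体现了正文所述减少划分次数的效果。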
图11B示意了一个示例性的边界图像块112,边界图像块112是该边界图像块所属视频帧的右边界的图像块。
以解码为例,预测处理单元360检测边界图像块112的w_a/w_b大于0.25且w_a/w_b小于0.5,本实施例中,w_a是边界图像块112中像素区域11B水平方向上边的边长,w_b是边界图像块112水平方向上边的边长。预测处理单元360采用图10示意的BT-2划分模式划分边界图像块112,得到第一分块1121和第二分块1122。其中,第一分块1121包含像素区域11B,第二分块1122不包含像素区域。第一分块1121是边界图像块。
进而,与图11A-1所述的实施例相类似的,第一分块1121依然是边界图像块,预测处理单元360继续划分第一分块1121。预测处理单元360继续划分第一分块1121的实施方式,与预测处理单元360继续划分图11A-1实施例中第一分块1111的实施方式类似,此处不再详述。
可以看出,本实施例对应图7A、图7B、图8A和图8B示意的实施例中,“第一阈值”等于0.5,“第二阈值”等于0.25的实施场景。
可以理解的是,在另一个实施例中,预测处理单元360检测边界图像块112的w_a/w_b大于0.25且w_a/w_b等于0.5,预测处理单元360可以采用图10示意的BT-2划分模式划分边界图像块112,得到第一分块1121和第二分块1122。本实施例中,第一分块1121包含像素区域11B且第一分块1121是非边界块,第二分块1122不包含像素区域。
图11C示意了一个示例性的边界图像块113,边界图像块113是该边界图像块所属视频帧的右边界的图像块。
以解码为例,预测处理单元360检测边界图像块113的w_a/w_b大于0.5且w_a/w_b小于0.75,本实施例中,w_a是边界图像块113中像素区域11C水平方向上边的边长,w_b是边界图像块113水平方向上边的边长。预测处理单元360采用图10示意的VER_RIGHT划分模式划分边界图像块113,得到第一分块1131和第二分块1132。其中,第一分块1131包含像素区域11C,第二分块1132不包含像素区域。第一分块1131是边界图像块。
进而,预测处理单元360继续划分第一分块1131。一些实施例中,预测处理单元360继续划分第一分块1131的实施方式,与预测处理单元360继续划分图11A-1实施例中第一分块1111的实施方式类似,此处不再详述。另一些实施例中,预测处理单元360还可以以竖直方向划分第一分块1131,得到第一子分块和第二子分块,其中,第一子分块水平方向边的边长与第二子分块水平方向边的边长可以满足2比1。
可以理解的是,在另一个实施例中,预测处理单元360检测边界图像块113的w_a/w_b大于0.5且w_a/w_b等于0.75,预测处理单元360可以采用图10示意的VER_RIGHT划分模式划分边界图像块113,得到第一分块1131和第二分块1132。本实施例中,第一分块1131包含像素区域11C且第一分块1131是非边界块,第二分块1132不包含像素区域。
可以看出,本实施例对应图7A、图7B、图8A和图8B示意的实施例中,“第一阈值”等于0.75,“第二阈值”等于0.5的实施场景。此外,对应图7B和图8B示意的实施例,本实施例的实施场景也可以描述为“w_a/w_b大于0.5小于1”,或者描述为“w_a/w_b大于0.5”。
图11D示意了一个示例性的边界图像块114,边界图像块114是该边界图像块所属视频帧的右边界的图像块。
以解码为例,预测处理单元360检测边界图像块114的w_a/w_b大于0.75且w_a/w_b小于1,本实施例中,w_a是边界图像块114中像素区域11D水平方向上边的边长,w_b是边界图像块114水平方向上边的边长。预测处理单元360采用图10示意的VER_RIGHT划分模式划分边界图像块114,得到第一分块1141和第二分块1142。其中,第一分块1141包含像素区域11D的一部分像素区域,第一分块1141是非边界图像块。第二分块1142包含像素区域11D剩余的部分像素区域,第二分块1142是边界图像块。
进而,预测处理单元360继续划分第二分块1142。预测处理单元360继续划分第二分块1142的实施方式,与预测处理单元360继续划分图11A-1实施例中第一分块1111的实施方式类似,此处不再详述。
可以看出,本实施例对应图7A和图8A示意的实施例中,“第一阈值”等于0.75,“第三阈值”等于1的实施场景。本实施例对应图7B和图8B示意的实施例中,“第一阈值”等于1,“第二阈值”等于0.75的实施场景。
此外,图11D所述的实施例中“预测处理单元360检测到边界图像块114的w_a/w_b大于0.75且w_a/w_b小于1”,可以等同于“预测处理单元360检测边界图像块114的w_a/w_b大于0.75”。
一些实施例中,图11A-1至图11D中划分得到的边界图像块,还可以采用BT或者QT的划分方式继续划分,以得到CU。本实施例此处不再赘述。
可以理解的是,图11A-1至图11D均是以视频帧右边界的图像块为例,对本申请划分图像块的实施场景的描述。参见图12示意的边界图像块121,在本申请的其他实施例中,预测处理单元360对视频帧下边界的边界块进行划分的实施场景中,预测处理单元360检测w_x/w_y的值与各阈值的关系,其中,w_x是边界图像块121中像素区域竖直方向上边的边长,w_y是边界图像块121竖直方向上边的边长。进而,预测处理单元360从图10示意的HOR_TOP划分模式、BT-1划分模式和HOR_DOWN划分模式中确定划分边界图像块121的划分模式。本申请此处不再详述。
图13A-1示意了一个示例性的右下角图像块131。以解码为例,预测处理单元360检测右下角图像块131的w_a水平/w_b水平小于0.5,且右下角图像块131的w_a竖直/w_b竖直大于0.5,其中,w_a水平是右下角图像块131中像素区域13A水平方向上边的边长,w_b水平是右下角图像块131水平方向上边的边长,w_a竖直是右下角图像块131中像素区域13A竖直方向上边的边长,w_b竖直是右下角图像块131竖直方向上边的边长。预测处理单元360采用Q_A划分模式划分右下角图像块131,得到第一分块1311、第二分块1312和第三分块1313。其中,第一分块1311包含像素区域13A的第一子像素区域,第二分块1312包含像素区域13A的第二子像素区域,第一子像素区域和第二子像素区域构成了像素区域13A。第三分块1313不包含像素区域。第一分块1311是下边界图像块,第二分块1312是右下角图像块。
进而,预测处理单元360采用上述划分下边界图像块的方法继续划分第一分块1311,并采用划分右下角图像块的划分方法划分第二分块1312。此处不再详述。
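Q_A与Q_B划分模式得到的三个分块的几何关系,可以用如下Python片段示意。矩形以(x, y, 宽, 高)表示,原点取图像块左上角;函数名与分块排序均为说明而做的假设:

```python
def qt_derived_blocks(mode, w, h):
    """返回Q_A/Q_B划分模式下三个分块的矩形 (x, y, 宽, 高) 列表(示意)。"""
    if mode == "Q_A":
        # 水平方向BT后,对上半部分竖直方向BT:上方两个四分之一块并列,下方二分之一块
        return [(0, 0, w // 2, h // 2),
                (w // 2, 0, w // 2, h // 2),
                (0, h // 2, w, h // 2)]
    if mode == "Q_B":
        # 竖直方向BT后,对左半部分水平方向BT:左侧两个四分之一块上下排布,右侧二分之一块
        return [(0, 0, w // 2, h // 2),
                (0, h // 2, w // 2, h // 2),
                (w // 2, 0, w // 2, h)]
    raise ValueError(mode)
```

可以验证,前两个分块的面积均为原图像块面积的四分之一,第三分块的面积为原图像块面积的二分之一,与正文对Q_A、Q_B的描述一致。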
如图13A-2所示,在另一个实施例中,若右下角图像块131的w_a水平/w_b水平等于0.5,且右下角图像块131的w_a竖直/w_b竖直大于0.5,预测处理单元360采用Q_A划分模式划分右下角图像块131,得到第一分块1311、第二分块1312和第三分块1313。本实施例中,第一分块1311可以被作为CU,第二分块1312是右边界图像块。本实施例中,预测处理单元360采用划分右边界图像块的划分方法划分第二分块1312。此处不再详述。
图13B示意了一个示例性的右下角图像块132。以解码为例,预测处理单元360检测右下角图像块132的w_a水平/w_b水平大于0.5,且右下角图像块132的w_a竖直/w_b竖直小于0.5,其中,w_a水平是右下角图像块132中像素区域13B水平方向上边的边长,w_b水平是右下角图像块132水平方向上边的边长,w_a竖直是右下角图像块132中像素区域13B竖直方向上边的边长,w_b竖直是右下角图像块132竖直方向上边的边长。预测处理单元360采用Q_B划分模式划分右下角图像块132,得到第一分块1321、第二分块1322和第三分块1323。其中,第一分块1321包含像素区域13B的第一子像素区域,第二分块1322包含像素区域13B的第二子像素区域,第一子像素区域和第二子像素区域构成了像素区域13B。第三分块1323不包含像素区域。第一分块1321是右边界图像块,第二分块1322是右下角图像块。
进而,预测处理单元360采用上述划分右边界图像块的方法继续划分第一分块1321,并采用划分右下角图像块的划分方法划分第二分块1322。此处不再详述。
与图13A-2示意的实施例相类似的,在另一个实施例中,若右下角图像块132的w_a水平/w_b水平大于0.5,且右下角图像块132的w_a竖直/w_b竖直等于0.5,预测处理单元360采用Q_B划分模式划分右下角图像块132,得到第一分块1321、第二分块1322和第三分块1323。本实施例中,第一分块1321可以被作为CU,第二分块1322是下边界图像块。进而,本实施例中,预测处理单元360采用划分下边界图像块的划分方法划分第二分块1322。此处不再详述。
可以理解的是,图10至图13B示意的实施例仅是示意性描述,对本申请的技术方案不构成限制。在本申请其他的实施方式中,DT划分方法还可以包括其他划分模式。此外,在另一些实施场景中,第一阈值和第二阈值也可以是其他值,本申请此处不再详述。
可以理解的是,图11A-1至图13B示意的实施例是以解码侧为例对本申请实施例进行的描述,在实际操作中,图11A-1至图13B示意的实施例同样适用于编码侧对图像块的操作。其中,当编码侧执行图11A-1至图13B示意的实施例时,具体可以由预测处理单元210执行上述操作。
综上可知,本申请的技术方案,相关设备可以维护多种DT划分模式,从而能够在划分边界图像块和右下角图像块时,从多种DT划分模式选择划分模式,进而,使得在划分边界图像块和/或右下角图像块直到得到CU的过程中,划分次数相对较少。
上述本申请提供的实施例中,分别从各个设备本身、以及从各个设备之间交互的角度对本申请实施例提供的视频编码方法和视频解码方法的各方案进行了介绍。例如,各设备为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
例如,上述设备可以通过软件模块来实现相应的功能。如图14A所示,所述视频解码设备1400可包括检测模块1401和划分模块1402。
在一个实施例中,该视频解码设备1400可用于执行上述图5A、图5B、图5D、图7A、图7B、图11A-1至图12中视频解码器30的操作。例如:
检测模块1401检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否小于或者等于第一阈值,其中,所述第一边是所述当前边界图像块的边,所述第一子边是所述当前边界图像块内像素区域的边,所述第一边和所述第一子边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一阈值是大于0且小于1的数值;划分模块1402,用于当所述第一子边的边长与所述第一边的边长之比小于或者等于所述第一阈值时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块,所述第一分块包含所述像素区域;划分模块1402,还用于当所述第一分块的面积等于所述像素区域的面积时,将所述第一分块作为编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块,或者继续划分所述第一分块,以得到编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块;划分模块1402,还用于当所述第一分块的面积大于所述像素区域的面积时,继续划分所述第一分块,以得到编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块。
可见,采用本实现方式,当第一子边的边长与第一边的边长之比小于或者等于第一阈值时,视频解码设备1400将边界图像块中的像素区域划分到第一分块中。其中,第一边是当前边界图像块的边,第一子边是当前边界图像块内像素区域的边,第一边和第一子边均垂直于当前边界图像块所在的当前视频帧的边界。这样视频解码设备1400执行分块时,不受限于现有的BT和/或QT的划分方法,从而在划分边界图像块至得到CU的过程中,能够减少划分次数,进而,能够降低划分的算法复杂度。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于所述第一阈值时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块,所述第一分块是非边界图像块,所述第二分块为边界图像块并包括第一子像素区域,所述第一子像素区域是所述像素区域的部分区域;还用于继续划分所述第二分块,以得到编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于第二阈值时,以垂直于所述第一边的方向划分所述第一分块得到第一子分块和第二子分块,所述第一子分块是非边界图像块,所述第二子分块包括子像素区域,所述子像素区域是所述像素区域的部分区域。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于第二阈值时,在垂直于所述第一边的方向上对所述第一分块执行二叉树BT划分。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于第二阈值时,对所述第一分块执行四叉树QT划分。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于第二阈值并且所述第一子边的边长与所述第一边的边长之比小于或者等于所述第一阈值时,从垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于零并且所述第一子边的边长与所述第一边的边长之比小于或者等于0.25时,从垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足1比3,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于0.25并且所述第一子边的边长与所述第一边的边长之比小于或者等于0.5时,从垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足1比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于0.5并且所述第一子边的边长与所述第一边的边长之比小于或者等于0.75时,从垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足3比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于所述第一阈值并且所述第一子边的边长与所述第一边的边长之比小于或者等于第三阈值时,从垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于或者等于0.75并且所述第一子边的边长与所述第一边的边长之比小于1时,从垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足3比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界。
在另一个实施例中,图14A所示的视频解码设备1400还可用于执行上述图7B中解码器30的操作。
例如:检测模块1401,用于检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否位于预设区间,其中,所述第一边是所述当前边界图像块的边,所述第一子边是所述当前边界图像块内像素区域的边,所述第一边和所述第一子边均垂直于所述当前边界图像块所在的所述当前视频帧的边界;划分模块1402,用于当所述第一子边的边长与所述第一边的边长之比位于所述预设区间时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块;划分模块1402,还用于将所述第一分块和第二分块中为非边界块的分块作为编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块,或者继续划分所述第一分块或所述第二分块,以得到编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块。
可见,采用本实现方式,当第一子边的边长与第一边的边长之比在预设区间时,视频解码设备1400划分边界图像块中的像素区域得到CU。其中,第一边是当前边界图像块的边,第一子边是当前边界图像块内像素区域的边,第一边和第一子边均垂直于当前边界图像块所在的当前视频帧的边界。这样视频解码设备1400执行分块时,不受限于现有的BT和/或QT的划分方法,从而在划分边界图像块至得到CU的过程中,能够减少划分次数,进而,能够降低划分的算法复杂度。
可选的,所述预设区间的数值范围为大于第二阈值并且小于第一阈值。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于零并且所述第一子边的边长与所述第一边的边长之比小于或者等于0.25时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足1比3,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一分块包括所述像素区域。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于0.25并且所述第一子边的边长与所述第一边的边长之比小于或者等于0.5时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足1比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第 一分块包括所述像素区域。
可选的,划分模块1402,还用于以垂直于所述第一边的方向对所述第一分块执行二叉树划分或者对所述第一分块执行四叉树划分。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于0.5并且所述第一子边的边长与所述第一边的边长之比小于或者等于0.75时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足3比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一分块包括所述像素区域。
可选的,划分模块1402,还用于以垂直于所述第一边的方向对所述第一分块进行划分,得到第一子分块和第二子分块,所述第一子分块第二子边的边长与所述第二子分块第三子边的边长满足2比1,所述第二子边和所述第三子边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一子分块为非边界图像块。
可选的,划分模块1402,还用于以垂直于所述第一边的方向对所述第一分块执行二叉树划分或者对所述第一分块执行四叉树划分。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于或者等于0.75并且所述第一子边的边长与所述第一边的边长之比小于1时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足3比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一分块为非边界块。
可选的,划分模块1402,还用于当所述第一子边的边长与所述第一边的边长之比大于或者等于0.5并且所述第一子边的边长与所述第一边的边长之比小于1时,以垂直于所述第一边的方向上划分所述当前边界图像块得到所述第一分块和所述第二分块,所述第一分块第二边的边长与所述第二分块第三边的边长满足3比1,所述第二边和所述第三边均垂直于所述当前边界图像块所在的所述当前视频帧的边界。
在又一个实施例中,图14A所示的视频解码设备1400还可用于执行上述图7C中解码器30的操作。
例如:检测模块1401,用于确定当前视频帧的右下角图像块的第一子边的边长与第一边的边长之比小于或者等于预设阈值,且所述右下角图像块的第二子边的边长与第二边的边长之比大于所述预设阈值,所述第一边包含所述第一子边,所述第二边包含所述第二子边,所述第一边垂直于所述第二边,所述第一子边和所述第二子边是所述右下角图像块中像素区域的边;划分模块1402,用于采用QT衍生的划分模式划分所述右下角图像块得到第一分块、第二分块和第三分块,所述第一分块包含所述像素区域的第一子像素区域,所述第一分块位于所述右下角图像块的左上角,所述第二分块包含所述像素区域的第二子像素区域,所述第一分块的面积和所述第二分块的面积均是所述右下角图像块面积的四分之一,所述第三分块的面积是所述边界图像块面积的二分之一,所述第一子像素区域和所述第二子像素区域构成了所述像素区域;划分模块1402,还用于继续划分所述第二分块,以得到所述第二分块对应的编码单元,并 根据所述第二分块对应的编码单元的编码信息得到所述第二分块对应的编码单元的重建块;划分模块1402,还用于当所述第一分块的面积等于所述第一子像素区域的面积时,将所述第一分块作为编码单元,并根据所述编码单元的编码信息得到所述编码单元的重建块,或者继续划分所述第一分块,以得到所述第一分块对应的编码单元,并根据所述第一分块对应的编码单元的编码信息得到所述第一分块对应的编码单元的重建块;划分模块1402,还用于当所述第一分块的面积大于所述第一子像素区域的面积时,继续划分所述第一分块,以得到所述第一分块对应的编码单元,并根据所述第一分块对应的编码单元的编码信息得到所述第一分块对应的编码单元的重建块。
可见,采用本实现方式,解码器30还能够更高效的划分视频帧的右下角图像块。
可选的,所述预设阈值是0.5。
与图5D示意的解码器30对应的,本实施例所述检测模块1401和划分模块1402的功能例如可以集成到图5D示意的解码器30中的预测处理单元360中。即,本实施例所述检测模块1401和划分模块1402在其他表达方式中可以是图5D示意中的预测处理单元360。
图14B示出了上述实施例中所涉及的视频解码设备1400的另一种可能的结构示意图。视频解码设备1410包括处理器1403、收发器1404和存储器1405。如图14B所示,所述收发器1404用于与视频编码设备进行图像数据的收发。所述存储器1405用于与处理器1403耦合,其保存该视频解码设备1410必要的计算机程序1406。
例如,在一个实施例中,所述收发器1404被配置为接收编码器20发送的编码信息。处理器1403被配置为执行视频解码设备1410的解码操作或功能。
相应的,如图15A所示,本申请还提供了一种视频编码设备1500。所述视频编码设备1500可以包括检测模块1501和划分模块1502。
在一个实施例中,该视频编码设备1500可用于执行上述图5A至图5C、图8A、图8B、图11A-1至图12中编码器20的操作。例如:
检测模块1501,用于检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否小于或者等于第一阈值,其中,所述第一边是所述当前边界图像块的边,所述第一子边是像素区域的边,所述像素区域是所述当前边界图像块内的像素区域,所述第一边和所述第一子边均垂直于所述当前边界图像块所在的所述当前视频帧的边界,所述第一阈值是大于0且小于1的数值;划分模块1502,用于当所述第一子边的边长与所述第一边的边长之比小于或者等于所述第一阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到第一分块和第二分块,所述第一分块包含所述像素区域;划分模块1502,还用于当所述第一分块的面积等于所述像素区域的面积时,将所述第一分块作为编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息,或者继续划分所述第一分块,以得到编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息;划分模块1502,还用于当所述第一分块的面积大于所述像素区域的面积时,继续划分所述第一分块,以得到编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息。
可见,采用本实现方式,当第一子边的边长与第一边的边长之比小于或者等于第一阈值时,视频编码设备1500将边界图像块中的像素区域划分到第一分块中。其中,第一边是当前边界图像块的边,第一子边是当前边界图像块内像素区域的边,第一边和第一子边均垂直于当前边界图像块所在的当前视频帧的边界。这样视频编码设备1500执行分块时,不受限于现有的BT和/或QT的划分方法,从而在划分边界图像块至得到CU的过程中,能够减少划分次数,进而,能够降低划分的算法复杂度。
可选的,划分模块1502,还用于当所述第一子边的边长与所述第一边的边长之比大于所述第一阈值时,以垂直于所述第一边的方向上划分所述当前边界图像块得到第一分块和第二分块,所述第一分块是非边界图像块,所述第二分块为边界图像块并包括第一子像素区域,所述第一子像素区域是所述像素区域的部分区域;继续划分所述第二分块,以得到编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息。
在另一个实施例中,图15A所示的视频编码设备1500还可用于执行上述图8B中编码器20的操作。
例如,检测模块1501,用于检测当前视频帧的当前边界图像块第一子边的边长与第一边的边长之比是否位于预设区间,其中,所述第一边是所述当前边界图像块的边,所述第一子边是所述当前边界图像块内像素区域的边,所述第一边和所述第一子边均垂直于所述当前边界图像块所在的所述当前视频帧的边界;划分模块1502,用于当所述第一子边的边长与所述第一边的边长之比位于所述预设区间时,以垂直于所述第一边的方向划分所述当前边界图像块得到第一分块和第二分块;划分模块1502,还用于将所述第一分块和第二分块中为非边界块的分块作为编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息,或者继续划分所述第一分块或所述第二分块,以得到至少两个编码单元,并根据所述至少两个编码单元的图像信息得到所述至少两个编码单元的编码信息。
可见,采用本实现方式,当第一子边的边长与第一边的边长之比在预设区间时,视频编码设备1500划分边界图像块中的像素区域得到CU。其中,第一边是当前边界图像块的边,第一子边是当前边界图像块内像素区域的边,第一边和第一子边均垂直于当前边界图像块所在的当前视频帧的边界。这样视频编码设备1500执行分块时,不受限于现有的BT和/或QT的划分方法,从而在划分边界图像块至得到CU的过程中,能够减少划分次数,进而,能够降低划分的算法复杂度。
在又一个实施例中,图15A所示的视频编码设备1500还可用于执行上述图8C中编码器20的操作。
例如,检测模块1501,还用于确定当前视频帧的右下角图像块的第一子边的边长与第一边的边长之比小于或者等于预设阈值,且所述右下角图像块的第二子边的边长与第二边的边长之比大于所述预设阈值,所述第一边包含所述第一子边,所述第二边包含所述第二子边,所述第一边垂直于所述第二边,所述第一子边和所述第二子边是像素区域的边,所述像素区域是所述右下角图像块中的像素区域;划分模块1502,用于采用QT衍生的划分模式划分所述右下角图像块得到第一分块、第二分块和第三分块,所述第一分块包含所述像素区域的第一子像素区域,所述第一分块位于所述右下角图 像块的左上角,所述第二分块包含所述像素区域的第二子像素区域,所述第一分块的面积和所述第二分块的面积均是所述右下角图像块面积的四分之一,所述第三分块的面积是所述边界图像块面积的二分之一,所述第一子像素区域和所述第二子像素区域构成了所述像素区域;继续划分所述第二分块,以得到所述第二分块对应的编码单元,并根据所述第二分块对应的编码单元的图像信息得到所述第二分块对应的编码单元的编码信息;划分模块1502,还用于当所述第一分块的面积等于所述第一子像素区域的面积时,将所述第一分块作为编码单元,并根据所述编码单元的图像信息得到所述编码单元的编码信息,或者继续划分所述第一分块,以得到所述第一分块对应的编码单元,并根据所述第一分块对应的编码单元的图像信息得到所述第一分块对应的编码单元的编码信息;划分模块1502,还用于当所述第一分块的面积大于所述第一子像素区域的面积时,继续划分所述第一分块,以得到所述第一分块对应的编码单元,并根据所述第一分块对应的编码单元的图像信息得到所述第一分块对应的编码单元的编码信息。
可见,采用本实现方式,编码器20还能够更高效地划分视频帧的右下角图像块。
可选的,所述预设阈值是0.5。
与图5C示意的编码器20对应的,本实施例所述检测模块1501和划分模块1502的功能例如可以集成到图5C示意的编码器20中的预测处理单元210中。即,本实施例所述检测模块1501和划分模块1502在其他表达方式中可以是图5C示意中的预测处理单元210。
图15B示出了上述实施例中所涉及的视频编码设备1500的另一种可能的结构示意图。视频编码设备1510包括处理器1503、收发器1504和存储器1505。如图15B所示,所述存储器1505用于与处理器1503耦合,其保存该视频编码设备1510必要的计算机程序1506。
例如,在一个实施例中,所述收发器1504被配置为向解码器30发送编码信息。处理器1503被配置为执行视频编码设备1510的编码操作或功能。
具体实现中,对应视频编码设备和视频解码设备,本申请还分别提供一种计算机存储介质,其中,设置在任意设备中计算机存储介质可存储有程序,该程序执行时,可实施包括图7A至图13B提供的视频编码方法和视频解码方法的各实施例中的部分或全部步骤。任意设备中的存储介质均可为磁碟、光盘、只读存储记忆体(read-only memory,ROM)或随机存储记忆体(random access memory,RAM)等。
本申请中,处理器可以是中央处理器(central processing unit,CPU),网络处理器(network processor,NP)或者CPU和NP的组合。处理器还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。存储器可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储器也可以包括 非易失性存储器(non-volatile memory),例如只读存储器(read-only memory,ROM),快闪存储器(flash memory),硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD);存储器还可以包括上述种类的存储器的组合。
本领域技术人员还可以了解到本申请列出的各种说明性逻辑块(illustrative logical block)和步骤(step)可以通过电子硬件、电脑软件,或两者的结合进行实现。这样的功能是通过硬件还是软件来实现取决于特定的应用和整个系统的设计要求。本领域技术人员可以对于每种特定的应用,使用各种方法实现所述的功能,但这种实现不应被理解为超出本申请保护的范围。
本申请中所描述的各种说明性的逻辑单元和电路可以通过通用处理器,数字信号处理器,专用集成电路(ASIC),现场可编程门阵列(FPGA)或其它可编程逻辑装置,离散门或晶体管逻辑,离散硬件部件,或上述任何组合的设计来实现或操作所描述的功能。通用处理器可以为微处理器,可选地,该通用处理器也可以为任何传统的处理器、控制器、微控制器或状态机。处理器也可以通过计算装置的组合来实现,例如数字信号处理器和微处理器,多个微处理器,一个或多个微处理器联合一个数字信号处理器核,或任何其它类似的配置来实现。
本申请中所描述的方法或算法的步骤可以直接嵌入硬件、处理器执行的软件单元、或者这两者的结合。软件单元可以存储于RAM存储器、闪存、ROM存储器、EPROM存储器、EEPROM存储器、寄存器、硬盘、可移动磁盘、CD-ROM或本领域中其它任意形式的存储媒介中。示例性地,存储媒介可以与处理器连接,以使得处理器可以从存储媒介中读取信息,并可以向存储媒介存写信息。可选地,存储媒介还可以集成到处理器中。处理器和存储媒介可以设置于ASIC中,ASIC可以设置于UE中。可选地,处理器和存储媒介也可以设置于UE中的不同的部件中。
应理解,在本申请的各种实施例中,各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请的实施过程构成任何限定。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
本说明书的各个部分均采用递进的方式进行描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点介绍的都是与其他实施例不同之处。尤其,对于装 置和系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例部分的说明即可。
尽管已描述了本申请的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例作出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请范围的所有变更和修改。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。
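The interval-based split selection that the claims below spell out numerically (ratio in (0, 0.25] giving a 1:3 split, (0.25, 0.5] giving 1:1, (0.5, 0.75] giving 3:1) can be summarized in a short Python sketch. The function name is hypothetical, the interval endpoints follow claims 6 to 8, and the handling of a ratio above 0.75 (a 3:1 split whose first block is a non-boundary block, as in claims 9 and 10) is an illustrative assumption:

```python
def boundary_split_lengths(side_len, sub_len):
    """Choose the split of a boundary block perpendicular to its
    first side, given the side length and the pixel region's
    sub-side length (both in samples; power-of-two lengths assumed).

    Returns (first_len, second_len): the lengths, along the first
    side, of the first and second blocks produced by the split.
    """
    r = sub_len / side_len
    if 0 < r <= 0.25:
        first = side_len // 4        # 1:3 split; first block holds the pixel region
    elif r <= 0.5:
        first = side_len // 2        # 1:1 split
    elif r <= 0.75:
        first = 3 * side_len // 4    # 3:1 split
    else:
        first = 3 * side_len // 4    # 3:1 split; first block is non-boundary
    return first, side_len - first
```

For a 128-sample side, valid widths of 32, 64, and 96 samples land exactly on the 1:3, 1:1, and 3:1 split boundaries, so the first block covers the pixel region with no excess.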

Claims (56)

  1. A video decoding method, wherein the method comprises:
    detecting whether a ratio of a side length of a first sub-side of a current boundary picture block of a current video frame to a side length of a first side is less than or equal to a first threshold, wherein the first side is a side of the current boundary picture block, the first sub-side is a side of a pixel region within the current boundary picture block, both the first side and the first sub-side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first threshold is a value greater than 0 and less than 1;
    when the ratio is less than or equal to the first threshold, partitioning the current boundary picture block in a direction perpendicular to the first side to obtain a first block and a second block, the first block containing the pixel region;
    when an area of the first block is equal to an area of the pixel region, using the first block as a coding unit and obtaining a reconstructed block of the coding unit based on encoding information of the coding unit, or continuing to partition the first block to obtain at least two coding units and obtaining reconstructed blocks of the at least two coding units based on encoding information of the at least two coding units; or,
    when the area of the first block is greater than the area of the pixel region, continuing to partition the first block to obtain a coding unit, and obtaining a reconstructed block of the coding unit based on encoding information of the coding unit.
  2. The method according to claim 1, further comprising:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain a first block and a second block, wherein the first block is a non-boundary picture block, and the second block is a boundary picture block comprising a sub-pixel region, the sub-pixel region being a partial region of the pixel region; and
    continuing to partition the second block to obtain a coding unit, and obtaining a reconstructed block of the coding unit based on encoding information of the coding unit.
  3. The method according to claim 1, wherein, when the area of the first block is greater than the area of the pixel region, continuing to partition the first block comprises:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold, partitioning the first block in the direction perpendicular to the first side to obtain a first sub-block and a second sub-block, wherein the first sub-block is a non-boundary picture block, and the second sub-block comprises a sub-pixel region, the sub-pixel region being a partial region of the pixel region.
  4. The method according to claim 1, wherein, when the area of the first block is greater than the area of the pixel region, continuing to partition the first block comprises:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold, performing binary tree (BT) partitioning on the first block in the direction perpendicular to the first side, or performing quadtree (QT) partitioning on the first block.
  5. The method according to claim 1, wherein, when the ratio of the side length of the first sub-side to the side length of the first side is less than or equal to the first threshold, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block comprises:
    when the ratio is greater than a second threshold and less than or equal to the first threshold, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block.
  6. The method according to claim 5, wherein the first threshold is 0.25 and the second threshold is zero, and partitioning the current boundary picture block when the ratio is greater than the second threshold and less than or equal to the first threshold comprises:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than zero and less than or equal to 0.25, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 1 to 3, and both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located.
  7. The method according to claim 5, wherein the first threshold is 0.5 and the second threshold is 0.25, and partitioning the current boundary picture block when the ratio is greater than the second threshold and less than or equal to the first threshold comprises:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.25 and less than or equal to 0.5, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 1 to 1, and both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located.
  8. The method according to claim 5, wherein the first threshold is 0.75 and the second threshold is 0.5, and partitioning the current boundary picture block when the ratio is greater than the second threshold and less than or equal to the first threshold comprises:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.5 and less than or equal to 0.75, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 3 to 1, and both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located.
  9. The method according to claim 2, wherein, when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block comprises:
    when the ratio is greater than the first threshold and less than or equal to a third threshold, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block.
  10. The method according to claim 9, wherein the first threshold is 0.75 and the third threshold is 1, and partitioning the current boundary picture block when the ratio is greater than the first threshold and less than or equal to the third threshold comprises:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than or equal to 0.75 and less than 1, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 3 to 1, and both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located.
  11. A video decoding method, wherein the method comprises:
    detecting whether a ratio of a side length of a first sub-side of a current boundary picture block of a current video frame to a side length of a first side falls within a preset interval, wherein the first side is a side of the current boundary picture block, the first sub-side is a side of a pixel region within the current boundary picture block, and both the first side and the first sub-side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located;
    when the ratio falls within the preset interval, partitioning the current boundary picture block in a direction perpendicular to the first side to obtain a first block and a second block; and
    using whichever of the first block and the second block is a non-boundary block as a coding unit and obtaining a reconstructed block of the coding unit based on encoding information of the coding unit, or continuing to partition the first block or the second block to obtain a coding unit and obtaining a reconstructed block of the coding unit based on encoding information of the coding unit.
  12. The method according to claim 11, wherein the value range of the preset interval is greater than a second threshold and less than a first threshold.
  13. The method according to claim 12, wherein the first threshold is 0.25 and the second threshold is zero, and partitioning the current boundary picture block when the ratio falls within the preset interval comprises:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than zero and less than 0.25, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 1 to 3, both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first block comprises the pixel region.
  14. The method according to claim 12, wherein the first threshold is 0.5 and the second threshold is 0.25, and partitioning the current boundary picture block when the ratio falls within the preset interval comprises:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.25 and less than 0.5, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 1 to 1, both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first block comprises the pixel region.
  15. The method according to claim 14, wherein continuing to partition the first block or the second block comprises:
    performing binary tree partitioning on the first block in the direction perpendicular to the first side, or performing quadtree partitioning on the first block.
  16. The method according to claim 12, wherein the first threshold is 0.75 and the second threshold is 0.5, and partitioning the current boundary picture block when the ratio falls within the preset interval comprises:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.5 and less than 0.75, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 3 to 1, both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first block comprises the pixel region.
  17. The method according to claim 16, wherein continuing to partition the first block or the second block comprises:
    partitioning the first block in the direction perpendicular to the first side to obtain a first sub-block and a second sub-block, wherein a side length of a second sub-side of the first sub-block and a side length of a third sub-side of the second sub-block satisfy a ratio of 2 to 1, both the second sub-side and the third sub-side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first sub-block is a non-boundary picture block.
  18. The method according to claim 16, wherein continuing to partition the first block or the second block comprises:
    performing binary tree partitioning on the first block in the direction perpendicular to the first side, or performing quadtree partitioning on the first block.
  19. The method according to claim 12, wherein the first threshold is 1 and the second threshold is 0.75, and partitioning the current boundary picture block when the ratio falls within the preset interval comprises:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.75 and less than 1, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 3 to 1, both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first block is a non-boundary block.
  20. The method according to claim 12, wherein the first threshold is 1 and the second threshold is 0.5, and partitioning the current boundary picture block when the ratio falls within the preset interval comprises:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.5 and less than 1, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 3 to 1, and both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located.
  21. A video decoding method, wherein the method comprises:
    determining that a ratio of a side length of a first sub-side of a bottom-right corner picture block of a current video frame to a side length of a first side is less than or equal to a preset threshold, and that a ratio of a side length of a second sub-side of the bottom-right corner picture block to a side length of a second side is greater than the preset threshold, wherein the first side contains the first sub-side, the second side contains the second sub-side, the first side is perpendicular to the second side, and the first sub-side and the second sub-side are sides of a pixel region in the bottom-right corner picture block;
    partitioning the bottom-right corner picture block in a QT-derived partition mode to obtain a first block, a second block, and a third block, wherein the first block contains a first sub-pixel region of the pixel region and is located at the top-left corner of the bottom-right corner picture block, the second block contains a second sub-pixel region of the pixel region, an area of the first block and an area of the second block are each one quarter of an area of the bottom-right corner picture block, an area of the third block is one half of the area of the bottom-right corner picture block, and the first sub-pixel region and the second sub-pixel region together constitute the pixel region;
    continuing to partition the second block to obtain the coding units corresponding to the second block, and obtaining reconstructed blocks of the coding units corresponding to the second block based on encoding information of those coding units; and
    when the area of the first block is equal to an area of the first sub-pixel region, using the first block as a coding unit and obtaining a reconstructed block of the coding unit based on encoding information of the coding unit, or continuing to partition the first block to obtain at least two coding units corresponding to the first block and obtaining reconstructed blocks of the at least two coding units based on their encoding information; or,
    when the area of the first block is greater than the area of the first sub-pixel region, continuing to partition the first block to obtain the coding units corresponding to the first block, and obtaining reconstructed blocks of those coding units based on their encoding information.
  22. The method according to claim 21, wherein the preset threshold is 0.5.
  23. A video encoding method, wherein the method comprises:
    detecting whether a ratio of a side length of a first sub-side of a current boundary picture block of a current video frame to a side length of a first side is less than or equal to a first threshold, wherein the first side is a side of the current boundary picture block, the first sub-side is a side of a pixel region, the pixel region is a pixel region within the current boundary picture block, both the first side and the first sub-side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first threshold is a value greater than 0 and less than 1;
    when the ratio is less than or equal to the first threshold, partitioning the current boundary picture block in a direction perpendicular to the first side to obtain a first block and a second block, the first block containing the pixel region;
    when an area of the first block is equal to an area of the pixel region, using the first block as a coding unit and obtaining encoding information of the coding unit based on picture information of the coding unit, or continuing to partition the first block to obtain at least two coding units and obtaining encoding information of the at least two coding units based on picture information of the at least two coding units; or,
    when the area of the first block is greater than the area of the pixel region, continuing to partition the first block to obtain a coding unit, and obtaining encoding information of the coding unit based on picture information of the coding unit.
  24. The method according to claim 23, further comprising:
    when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold, partitioning the current boundary picture block in the direction perpendicular to the first side to obtain a first block and a second block, wherein the first block is a non-boundary picture block, and the second block is a boundary picture block comprising a sub-pixel region, the sub-pixel region being a partial region of the pixel region; and
    continuing to partition the second block to obtain a coding unit, and obtaining encoding information of the coding unit based on picture information of the coding unit.
  25. A video encoding method, wherein the method comprises:
    detecting whether a ratio of a side length of a first sub-side of a current boundary picture block of a current video frame to a side length of a first side falls within a preset interval, wherein the first side is a side of the current boundary picture block, the first sub-side is a side of a pixel region within the current boundary picture block, and both the first side and the first sub-side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located;
    when the ratio falls within the preset interval, partitioning the current boundary picture block in a direction perpendicular to the first side to obtain a first block and a second block; and
    using whichever of the first block and the second block is a non-boundary block as a coding unit and obtaining encoding information of the coding unit based on picture information of the coding unit, or continuing to partition the first block or the second block to obtain a coding unit and obtaining encoding information of the coding unit based on picture information of the coding unit.
  26. A video encoding method, wherein the method comprises:
    determining that a ratio of a side length of a first sub-side of a bottom-right corner picture block of a current video frame to a side length of a first side is less than or equal to a preset threshold, and that a ratio of a side length of a second sub-side of the bottom-right corner picture block to a side length of a second side is greater than the preset threshold, wherein the first side contains the first sub-side, the second side contains the second sub-side, the first side is perpendicular to the second side, and the first sub-side and the second sub-side are sides of a pixel region, the pixel region being a pixel region within the bottom-right corner picture block;
    partitioning the bottom-right corner picture block in a QT-derived partition mode to obtain a first block, a second block, and a third block, wherein the first block contains a first sub-pixel region of the pixel region and is located at the top-left corner of the bottom-right corner picture block, the second block contains a second sub-pixel region of the pixel region, an area of the first block and an area of the second block are each one quarter of an area of the bottom-right corner picture block, an area of the third block is one half of the area of the bottom-right corner picture block, and the first sub-pixel region and the second sub-pixel region together constitute the pixel region;
    continuing to partition the second block to obtain the coding units corresponding to the second block, and obtaining encoding information of those coding units based on their picture information;
    when the area of the first block is equal to an area of the first sub-pixel region, using the first block as a coding unit and obtaining encoding information of the coding unit based on picture information of the coding unit, or continuing to partition the first block to obtain the coding units corresponding to the first block and obtaining their encoding information based on their picture information; or,
    when the area of the first block is greater than the area of the first sub-pixel region, continuing to partition the first block to obtain the coding units corresponding to the first block, and obtaining their encoding information based on their picture information.
  27. The method according to claim 26, wherein the preset threshold is 0.5.
  28. A video decoding device, wherein the device comprises:
    a detection module, configured to detect whether a ratio of a side length of a first sub-side of a current boundary picture block of a current video frame to a side length of a first side is less than or equal to a first threshold, wherein the first side is a side of the current boundary picture block, the first sub-side is a side of a pixel region within the current boundary picture block, both the first side and the first sub-side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first threshold is a value greater than 0 and less than 1; and
    a partitioning module, configured to: when the ratio is less than or equal to the first threshold, partition the current boundary picture block in a direction perpendicular to the first side to obtain a first block and a second block, the first block containing the pixel region;
    the partitioning module being further configured to: when an area of the first block is equal to an area of the pixel region, use the first block as a coding unit and obtain a reconstructed block of the coding unit based on encoding information of the coding unit, or continue to partition the first block to obtain at least two coding units and obtain reconstructed blocks of the at least two coding units based on encoding information of the at least two coding units; and
    the partitioning module being further configured to: when the area of the first block is greater than the area of the pixel region, continue to partition the first block to obtain a coding unit, and obtain a reconstructed block of the coding unit based on encoding information of the coding unit.
  29. The device according to claim 28, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold, partition the current boundary picture block in the direction perpendicular to the first side to obtain a first block and a second block, wherein the first block is a non-boundary picture block, and the second block is a boundary picture block comprising a sub-pixel region, the sub-pixel region being a partial region of the pixel region; and
    the partitioning module is further configured to continue to partition the second block to obtain a coding unit, and obtain a reconstructed block of the coding unit based on encoding information of the coding unit.
  30. The device according to claim 28, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold, partition the first block in the direction perpendicular to the first side to obtain a first sub-block and a second sub-block, wherein the first sub-block is a non-boundary picture block, and the second sub-block comprises a sub-pixel region, the sub-pixel region being a partial region of the pixel region.
  31. The device according to claim 28, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold, perform binary tree (BT) partitioning on the first block in the direction perpendicular to the first side; and
    the partitioning module is further configured to: when the ratio is greater than the second threshold, perform quadtree (QT) partitioning on the first block.
  32. The device according to claim 28, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than a second threshold and less than or equal to the first threshold, partition the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block.
  33. The device according to claim 32, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than zero and less than or equal to 0.25, partition the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 1 to 3, and both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located.
  34. The device according to claim 32, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.25 and less than or equal to 0.5, partition the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 1 to 1, and both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located.
  35. The device according to claim 32, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.5 and less than or equal to 0.75, partition the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 3 to 1, and both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located.
  36. The device according to claim 29, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold and less than or equal to a third threshold, partition the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block.
  37. The device according to claim 36, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than or equal to 0.75 and less than 1, partition the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 3 to 1, and both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located.
  38. A video decoding device, wherein the decoding device comprises:
    a detection module, configured to detect whether a ratio of a side length of a first sub-side of a current boundary picture block of a current video frame to a side length of a first side falls within a preset interval, wherein the first side is a side of the current boundary picture block, the first sub-side is a side of a pixel region within the current boundary picture block, and both the first side and the first sub-side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located; and
    a partitioning module, configured to: when the ratio falls within the preset interval, partition the current boundary picture block in a direction perpendicular to the first side to obtain a first block and a second block;
    the partitioning module being further configured to use whichever of the first block and the second block is a non-boundary block as a coding unit and obtain a reconstructed block of the coding unit based on encoding information of the coding unit, or continue to partition the first block or the second block to obtain a coding unit and obtain a reconstructed block of the coding unit based on encoding information of the coding unit.
  39. The device according to claim 38, wherein the value range of the preset interval is greater than a second threshold and less than a first threshold.
  40. The device according to claim 39, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than zero and less than 0.25, partition the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 1 to 3, both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first block comprises the pixel region.
  41. The device according to claim 39, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.25 and less than 0.5, partition the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 1 to 1, both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first block comprises the pixel region.
  42. The device according to claim 41, wherein:
    the partitioning module is further configured to perform binary tree partitioning on the first block in the direction perpendicular to the first side, or perform quadtree partitioning on the first block.
  43. The device according to claim 39, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.5 and less than 0.75, partition the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 3 to 1, both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first block comprises the pixel region.
  44. The device according to claim 43, wherein:
    the partitioning module is further configured to partition the first block in the direction perpendicular to the first side to obtain a first sub-block and a second sub-block, wherein a side length of a second sub-side of the first sub-block and a side length of a third sub-side of the second sub-block satisfy a ratio of 2 to 1, both the second sub-side and the third sub-side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first sub-block is a non-boundary picture block.
  45. The device according to claim 43, wherein:
    the partitioning module is further configured to perform binary tree partitioning on the first block in the direction perpendicular to the first side, or perform quadtree partitioning on the first block.
  46. The device according to claim 39, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.75 and less than 1, partition the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 3 to 1, both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first block is a non-boundary block.
  47. The device according to claim 39, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than 0.5 and less than 1, partition the current boundary picture block in the direction perpendicular to the first side to obtain the first block and the second block, wherein a side length of a second side of the first block and a side length of a third side of the second block satisfy a ratio of 3 to 1, and both the second side and the third side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located.
  48. A video decoding device, wherein the device comprises:
    a detection module, configured to determine that a ratio of a side length of a first sub-side of a bottom-right corner picture block of a current video frame to a side length of a first side is less than or equal to a preset threshold, and that a ratio of a side length of a second sub-side of the bottom-right corner picture block to a side length of a second side is greater than the preset threshold, wherein the first side contains the first sub-side, the second side contains the second sub-side, the first side is perpendicular to the second side, and the first sub-side and the second sub-side are sides of a pixel region in the bottom-right corner picture block; and
    a partitioning module, configured to partition the bottom-right corner picture block in a QT-derived partition mode to obtain a first block, a second block, and a third block, wherein the first block contains a first sub-pixel region of the pixel region and is located at the top-left corner of the bottom-right corner picture block, the second block contains a second sub-pixel region of the pixel region, an area of the first block and an area of the second block are each one quarter of an area of the bottom-right corner picture block, an area of the third block is one half of the area of the bottom-right corner picture block, and the first sub-pixel region and the second sub-pixel region together constitute the pixel region;
    the partitioning module being further configured to continue to partition the second block to obtain the coding units corresponding to the second block, and obtain reconstructed blocks of those coding units based on their encoding information;
    the partitioning module being further configured to: when the area of the first block is equal to an area of the first sub-pixel region, use the first block as a coding unit and obtain a reconstructed block of the coding unit based on encoding information of the coding unit, or continue to partition the first block to obtain the coding units corresponding to the first block and obtain their reconstructed blocks based on their encoding information; and
    the partitioning module being further configured to: when the area of the first block is greater than the area of the first sub-pixel region, continue to partition the first block to obtain the coding units corresponding to the first block, and obtain their reconstructed blocks based on their encoding information.
  49. The device according to claim 48, wherein the preset threshold is 0.5.
  50. A video encoding device, wherein the device comprises:
    a detection module, configured to detect whether a ratio of a side length of a first sub-side of a current boundary picture block of a current video frame to a side length of a first side is less than or equal to a first threshold, wherein the first side is a side of the current boundary picture block, the first sub-side is a side of a pixel region, the pixel region is a pixel region within the current boundary picture block, both the first side and the first sub-side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located, and the first threshold is a value greater than 0 and less than 1; and
    a partitioning module, configured to: when the ratio is less than or equal to the first threshold, partition the current boundary picture block in a direction perpendicular to the first side to obtain a first block and a second block, the first block containing the pixel region;
    the partitioning module being further configured to: when an area of the first block is equal to an area of the pixel region, use the first block as a coding unit and obtain encoding information of the coding unit based on picture information of the coding unit, or continue to partition the first block to obtain at least two coding units and obtain encoding information of the at least two coding units based on picture information of the at least two coding units; and
    the partitioning module being further configured to: when the area of the first block is greater than the area of the pixel region, continue to partition the first block to obtain a coding unit, and obtain encoding information of the coding unit based on picture information of the coding unit.
  51. The device according to claim 50, wherein:
    the partitioning module is further configured to: when the ratio of the side length of the first sub-side to the side length of the first side is greater than the first threshold, partition the current boundary picture block in the direction perpendicular to the first side to obtain a first block and a second block, wherein the first block is a non-boundary picture block, and the second block is a boundary picture block comprising a first sub-pixel region, the first sub-pixel region being a partial region of the pixel region; and
    the partitioning module is further configured to continue to partition the second block to obtain a coding unit, and obtain encoding information of the coding unit based on picture information of the coding unit.
  52. A video encoding device, wherein the device comprises:
    a detection module, configured to detect whether a ratio of a side length of a first sub-side of a current boundary picture block of a current video frame to a side length of a first side falls within a preset interval, wherein the first side is a side of the current boundary picture block, the first sub-side is a side of a pixel region within the current boundary picture block, and both the first side and the first sub-side are perpendicular to the boundary, of the current video frame, on which the current boundary picture block is located; and
    a partitioning module, configured to: when the ratio falls within the preset interval, partition the current boundary picture block in a direction perpendicular to the first side to obtain a first block and a second block;
    the partitioning module being further configured to use whichever of the first block and the second block is a non-boundary block as a coding unit and obtain encoding information of the coding unit based on picture information of the coding unit, or continue to partition the first block or the second block to obtain at least two coding units and obtain encoding information of the at least two coding units based on their picture information.
  53. A video encoding device, wherein the device comprises:
    a detection module, configured to determine that a ratio of a side length of a first sub-side of a bottom-right corner picture block of a current video frame to a side length of a first side is less than or equal to a preset threshold, and that a ratio of a side length of a second sub-side of the bottom-right corner picture block to a side length of a second side is greater than the preset threshold, wherein the first side contains the first sub-side, the second side contains the second sub-side, the first side is perpendicular to the second side, and the first sub-side and the second sub-side are sides of a pixel region, the pixel region being a pixel region within the bottom-right corner picture block; and
    a partitioning module, configured to partition the bottom-right corner picture block in a QT-derived partition mode to obtain a first block, a second block, and a third block, wherein the first block contains a first sub-pixel region of the pixel region and is located at the top-left corner of the bottom-right corner picture block, the second block contains a second sub-pixel region of the pixel region, an area of the first block and an area of the second block are each one quarter of an area of the bottom-right corner picture block, an area of the third block is one half of the area of the bottom-right corner picture block, and the first sub-pixel region and the second sub-pixel region together constitute the pixel region;
    the partitioning module being further configured to continue to partition the second block to obtain the coding units corresponding to the second block, and obtain encoding information of those coding units based on their picture information;
    the partitioning module being further configured to: when the area of the first block is equal to an area of the first sub-pixel region, use the first block as a coding unit and obtain encoding information of the coding unit based on picture information of the coding unit, or continue to partition the first block to obtain the coding units corresponding to the first block and obtain their encoding information based on their picture information; and
    the partitioning module being further configured to: when the area of the first block is greater than the area of the first sub-pixel region, continue to partition the first block to obtain the coding units corresponding to the first block, and obtain their encoding information based on their picture information.
  54. The device according to claim 53, wherein the preset threshold is 0.5.
  55. A video decoding device, comprising: a processor, wherein the processor is configured to be coupled to a memory, and to invoke and execute instructions in the memory, so that the video decoding device performs the video decoding method according to any one of claims 1 to 22.
  56. A video encoding device, comprising: a processor, wherein the processor is configured to be coupled to a memory, and to invoke and execute instructions in the memory, so that the video encoding device performs the video encoding method according to any one of claims 23 to 27.
PCT/CN2020/081486 2019-03-30 2020-03-26 Video encoding method, video decoding method, and related device WO2020200052A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910254106.7A CN111770337B (zh) 2019-03-30 Video encoding method, video decoding method, and related device
CN201910254106.7 2019-03-30

Publications (1)

Publication Number Publication Date
WO2020200052A1 true WO2020200052A1 (zh) 2020-10-08

Family

ID=72664433

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/081486 WO2020200052A1 (zh) 2019-03-30 2020-03-26 Video encoding method, video decoding method, and related device

Country Status (2)

Country Link
CN (1) CN111770337B (zh)
WO (1) WO2020200052A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102611884A (zh) * 2011-01-19 2012-07-25 华为技术有限公司 Picture encoding/decoding method and codec device
CN103503451A (zh) * 2011-05-06 2014-01-08 西门子公司 Method and device for filtering coded picture partitions
US20140133768A1 (en) * 2012-11-13 2014-05-15 Hon Hai Precision Industry Co., Ltd. Electronic device and method for splitting image
CN107409226A (zh) * 2015-03-02 2017-11-28 寰发股份有限公司 Method and apparatus for IntraBC mode with fractional-pel block vector resolution in video coding
CN108668136A (zh) * 2017-03-28 2018-10-16 华为技术有限公司 Picture encoding/decoding method, video encoder/decoder, and video coding system
CN109151468A (zh) * 2017-06-28 2019-01-04 华为技术有限公司 Method and apparatus for encoding and decoding picture data
CN109479131A (zh) * 2016-06-24 2019-03-15 世宗大学校产学协力团 Video signal processing method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101452860B1 (ko) * 2009-08-17 2014-10-23 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
CN103220529B (zh) * 2013-04-15 2016-02-24 北京大学 Implementation method for in-loop filtering in video encoding and decoding
JP2015106747A (ja) * 2013-11-28 2015-06-08 富士通株式会社 Moving-picture encoding device, moving-picture encoding method, and computer program for moving-picture encoding
KR20180075517A (ko) * 2015-11-24 2018-07-04 Samsung Electronics Co., Ltd. Video encoding method and apparatus, and video decoding method and apparatus
KR20200117071A (ko) * 2017-07-17 2020-10-13 Industry-University Cooperation Foundation Hanyang University Image encoding/decoding method and apparatus


Also Published As

Publication number Publication date
CN111770337B (zh) 2022-08-19
CN111770337A (zh) 2020-10-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20783406

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20783406

Country of ref document: EP

Kind code of ref document: A1